Monday, February 20, 2023

Congestion Control: principles, prevention policies, and congestion control in virtual circuit and datagram subnets

Congestion control is an essential mechanism for ensuring the efficient operation of computer networks, regardless of whether they use a virtual circuit subnet or a datagram subnet architecture.

Congestion Control in Virtual Circuit Subnets:

In a virtual circuit subnet, a virtual circuit is established between the source and destination hosts before data transfer begins, and every packet of that connection follows the same path through the network. Congestion control in a virtual circuit subnet involves two main mechanisms: admission control and traffic management.

Admission control is used to limit the number of active circuits in the network to prevent congestion from occurring. When a new circuit is requested, the network must first determine whether there are sufficient resources available to support the circuit. If the resources are not available, the circuit is rejected. This is typically done by using a call setup procedure, which reserves the necessary resources along the path of the circuit.
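As a concrete illustration, the sketch below models admission control at call-setup time: a circuit is accepted only if every link on its path still has enough spare capacity, in which case that capacity is reserved along the whole path. The Network class, link names, and capacity figures are illustrative assumptions rather than part of any real signalling protocol.

```python
# A minimal sketch of admission control during virtual-circuit setup.
# Link names, capacities, and the requested_rate parameter are illustrative.

class Network:
    def __init__(self, link_capacity):
        # remaining capacity (e.g. in Mbps) for each link
        self.link_capacity = dict(link_capacity)

    def admit_circuit(self, path, requested_rate):
        """Accept the circuit only if every link on the path can carry it."""
        if any(self.link_capacity[link] < requested_rate for link in path):
            return False                       # reject: setup fails, nothing reserved
        for link in path:
            self.link_capacity[link] -= requested_rate   # reserve along the whole path
        return True


net = Network({"A-B": 10, "B-C": 4})
print(net.admit_circuit(["A-B", "B-C"], 5))   # False: link B-C cannot support 5
print(net.admit_circuit(["A-B", "B-C"], 3))   # True: resources reserved on both links
```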

Traffic management involves regulating the flow of data on each circuit to prevent congestion. This can be done by using techniques such as the leaky bucket or token bucket algorithm to limit the rate at which packets are transmitted. The leaky bucket algorithm lets packets leave at a fixed, constant rate no matter how bursty the arrivals are, while the token bucket algorithm adds tokens to each circuit's bucket at a regular rate up to a maximum bucket size. When a packet is transmitted it consumes a token, so short bursts are allowed as long as tokens remain; once the tokens are exhausted, no further packets can be transmitted until new tokens accumulate.
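The sketch below illustrates the token bucket idea: tokens accumulate at a fixed rate up to the bucket's capacity, and a packet may be sent only if a token is available. The rate and capacity values, and the convention of one token per packet, are illustrative assumptions.

```python
import time

# A minimal token bucket sketch; a real shaper would run per circuit
# inside the network stack, and the constants here are illustrative.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size in tokens
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost=1):
        """Return True if a packet costing `cost` tokens may be sent now."""
        now = time.monotonic()
        # refill tokens for the elapsed time, never beyond the bucket capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False                # out of tokens: hold (or drop) the packet


bucket = TokenBucket(rate=100, capacity=20)   # ~100 packets/s, bursts up to 20
sent = sum(bucket.allow() for _ in range(50))
print(f"{sent} of 50 back-to-back packets passed the shaper")
```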

Congestion Control in Datagram Subnets:

In a datagram subnet, each packet is treated independently and routed based on the destination address. Congestion control in a datagram subnet involves three main mechanisms: hop-by-hop flow control, end-to-end flow control, and random early detection (RED).

Hop-by-hop flow control involves limiting the rate at which packets are forwarded by each router in the network. This can be done by using techniques such as packet dropping or marking to signal to the hosts to reduce their transmission rate. If a router receives packets at a rate that exceeds its forwarding capacity, it can either drop packets randomly or mark them with a specific code that indicates congestion. The source host can then adjust its transmission rate based on the number of dropped or marked packets.
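The toy example below shows the marking side of this idea: a router with a bounded output queue forwards packets normally while the queue is short, marks them once the queue passes a threshold, and drops them when the buffer is full. The Packet class, the threshold, and the queue limit are illustrative assumptions.

```python
from collections import deque

# A toy sketch of congestion marking at a router: when the output queue
# grows past a threshold, arriving packets are marked so senders can slow down.

class Packet:
    def __init__(self, seq):
        self.seq = seq
        self.congestion_marked = False

class Router:
    def __init__(self, queue_limit=10, mark_threshold=6):
        self.queue = deque()
        self.queue_limit = queue_limit
        self.mark_threshold = mark_threshold

    def enqueue(self, packet):
        if len(self.queue) >= self.queue_limit:
            return False                      # buffer full: the packet is dropped
        if len(self.queue) >= self.mark_threshold:
            packet.congestion_marked = True   # forward, but signal congestion
        self.queue.append(packet)
        return True


router = Router()
results = [router.enqueue(Packet(i)) for i in range(12)]
marked = sum(p.congestion_marked for p in router.queue)
print(f"queued={len(router.queue)} marked={marked} dropped={results.count(False)}")
```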

End-to-end flow control involves regulating the flow of data at the source and destination hosts. This can be done by using techniques such as window-based flow control or congestion-window-based flow control to adjust the rate at which packets are transmitted. In window-based flow control, the source host may have only a limited window of unacknowledged packets outstanding and must wait for acknowledgements before sending more; the window size is adjusted dynamically based on the rate of packet loss or congestion. In congestion-window-based flow control, as used by TCP, the window grows while acknowledgements arrive smoothly and is cut back sharply when packet loss or congestion marking is detected.
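The following sketch shows the additive-increase/multiplicative-decrease (AIMD) rule that underlies TCP-style congestion windows, assuming a made-up loss pattern; the constants are illustrative, not tuned values from a real stack.

```python
# A minimal sketch of additive-increase/multiplicative-decrease (AIMD)
# congestion-window adjustment. The loss pattern below is invented.

def aimd(loss_events, cwnd=1.0, increase=1.0, decrease=0.5):
    """Yield the congestion window after each round trip."""
    for lost in loss_events:
        if lost:
            cwnd = max(1.0, cwnd * decrease)  # multiplicative decrease on loss
        else:
            cwnd += increase                  # additive increase per RTT
        yield cwnd

# No loss for 8 RTTs, one loss, then 4 more clean RTTs.
pattern = [False] * 8 + [True] + [False] * 4
print([round(w, 1) for w in aimd(pattern)])
# -> window ramps up to 9, is halved to 4.5, then climbs again
```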

Random early detection (RED) is a technique that prevents congestion by dropping packets before a router's buffer becomes full. The router tracks the average queue length; once it exceeds a minimum threshold, arriving packets are dropped (or marked) at random, with a probability that grows as the average queue length approaches a maximum threshold, beyond which every arriving packet is dropped. These early, random drops signal the sending hosts to reduce their transmission rates before the queue overflows.
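A minimal sketch of the RED drop decision is given below. The minimum and maximum thresholds, the maximum drop probability, and the averaging weight are illustrative assumptions.

```python
import random

# A minimal sketch of the RED drop decision with illustrative constants.
MIN_TH, MAX_TH, MAX_P, WEIGHT = 5, 15, 0.1, 0.2

def red_should_drop(avg_queue):
    """Probabilistically drop based on the average queue length."""
    if avg_queue < MIN_TH:
        return False                          # queue is short: never drop
    if avg_queue >= MAX_TH:
        return True                           # queue is long: always drop
    # drop probability ramps linearly between the two thresholds
    p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p

avg = 0.0
for instantaneous in [2, 4, 8, 12, 14, 16, 18, 20]:
    avg = (1 - WEIGHT) * avg + WEIGHT * instantaneous   # exponential moving average
    print(f"avg queue {avg:5.1f} -> drop: {red_should_drop(avg)}")
```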

Congestion Control:

Congestion control is a technique used to manage network traffic so as to avoid congestion, which occurs when the number of packets offered to the network exceeds its capacity to carry them. The goal of congestion control is to keep the network operating efficiently, with minimal packet loss and without serious degradation in quality of service. The key principles of congestion control include:

  1. Detection: The network must be able to detect when congestion is occurring. This can be done by monitoring network traffic and analyzing performance metrics, such as packet loss rates and latency (a minimal loss-rate check is sketched after this list).

  2. Prevention: Once congestion is detected, the network must take steps to prevent it from getting worse. This can be done by reducing the rate at which packets are transmitted or by limiting the amount of data that can be transmitted by each host.

  3. Feedback: The network must provide feedback to the hosts to let them know about the congestion and the actions being taken to prevent it. This can be done by sending messages to the hosts or by adjusting the flow of packets in the network.
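To make the detection principle concrete, the sketch below flags congestion when the observed packet loss rate or average latency crosses a threshold. The thresholds and sample numbers are illustrative assumptions, not measurements from a real network.

```python
# A minimal sketch of the detection principle: flag congestion from observed
# packet loss rate and latency. All values below are invented for illustration.

def congested(sent, lost, latencies_ms, max_loss=0.02, max_latency_ms=200):
    loss_rate = lost / sent if sent else 0.0
    avg_latency = sum(latencies_ms) / len(latencies_ms) if latencies_ms else 0.0
    return loss_rate > max_loss or avg_latency > max_latency_ms

print(congested(sent=1000, lost=5,  latencies_ms=[40, 55, 60]))    # False: healthy
print(congested(sent=1000, lost=80, latencies_ms=[250, 400, 310])) # True: congested
```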

Congestion Prevention Policies:

Congestion prevention policies are a set of rules or guidelines that are used to prevent congestion in the network. Some common congestion prevention policies include:

    1. Traffic Shaping: Traffic shaping is a technique that regulates the rate of data transmission to prevent congestion. It limits the amount of data that can be sent over the network at a given time, prioritizing more important traffic and delaying less important traffic.

    2. Quality of Service (QoS): QoS is a technique that prioritizes certain types of traffic over others. It ensures that high-priority traffic, such as voice or video, is served before less important traffic, such as email or file transfers (a minimal priority-queue sketch follows this list).

    3. Packet Filtering: Packet filtering is a technique that filters out unwanted traffic, such as spam or malware, to prevent congestion. It blocks incoming traffic that is not relevant or harmful to the network, reducing the overall amount of traffic.

    4. Load Balancing: Load balancing is a technique that distributes network traffic evenly across multiple servers or network paths to prevent congestion. It ensures that no single server or path is overloaded with traffic, leading to better performance and preventing congestion.

    5. Capacity Planning: Capacity planning is a technique that involves predicting future network traffic and planning accordingly to prevent congestion. It involves analyzing current network traffic patterns, predicting future traffic, and upgrading network capacity as necessary to ensure that the network can handle the expected traffic.

These congestion prevention policies can help maintain network performance and prevent congestion from occurring, ensuring that data transfer rates remain high and that the network can handle the traffic it receives.
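As an illustration of the QoS policy above, the sketch below implements a simple strict-priority queue: packets from higher-priority classes (such as voice) are always dequeued before lower-priority ones (such as file transfers). The class names and priority values are illustrative assumptions; real QoS schedulers usually combine this with weighted or fair queuing so that low-priority traffic is not starved.

```python
import heapq
import itertools

# A minimal sketch of strict-priority queuing in the spirit of QoS.
# Traffic classes and priority values are illustrative.

PRIORITY = {"voice": 0, "video": 1, "email": 2, "file-transfer": 3}

class QosQueue:
    def __init__(self):
        self._heap = []
        self._order = itertools.count()   # tie-breaker keeps FIFO order per class

    def enqueue(self, traffic_class, payload):
        heapq.heappush(self._heap, (PRIORITY[traffic_class], next(self._order), payload))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]


q = QosQueue()
q.enqueue("email", "mail-1")
q.enqueue("voice", "rtp-1")
q.enqueue("file-transfer", "ftp-1")
q.enqueue("voice", "rtp-2")
print([q.dequeue() for _ in range(4)])   # rtp-1, rtp-2 first, then mail-1, ftp-1
```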
