
Computer network - Performance and Advanced Concepts

Understand bandwidth and delay fundamentals, congestion control and resilience mechanisms, and the basics of teletraffic engineering.


Summary

Network Performance

Bandwidth

Bandwidth is a fundamental concept in understanding network capacity and performance. At its core, bandwidth refers to the maximum data rate that a communication path can carry, measured in bits per second (bps). Think of bandwidth like the capacity of a water pipe: a wider pipe can carry more water, just as higher bandwidth can carry more data.

It is important to distinguish between two related concepts. Maximum bandwidth is the theoretical limit of what a link can carry, determined by the physical characteristics of the transmission medium (such as fiber optic cable or copper wire). Consumed bandwidth (also called throughput) is the actual amount of data being transmitted at any given moment. You might have a 100 Mbps connection, but if you are only downloading a small file, your consumed bandwidth might be just 5 Mbps.

Networks use several mechanisms to control how much bandwidth different users or applications can consume:

Bandwidth shaping and throttling deliberately limit the data rate to prevent any single user or application from monopolizing network resources.
Bandwidth caps set maximum limits per user or connection.
Bandwidth allocation schemes distribute available capacity fairly among competing flows.

These control mechanisms are essential for maintaining quality of service and preventing one heavy user from degrading performance for everyone else.

Network Delay

Network delay, also called latency, is the total time it takes for a bit of data to travel from one endpoint to another. This is a critical performance metric because even small delays can significantly impact applications like video calls or online gaming. Rather than treating delay as a single value, it is useful to break it down into four components that combine to create the total delay.

Processing delay occurs at routers along the path. When a packet arrives at a router, the router must examine the packet header to determine where to send it next. This examination takes time. Processing delay is typically very small, on the order of milliseconds or less, but it accumulates across multiple hops.

Queuing delay happens when packets arrive at a router faster than they can be sent out, so they must wait in a queue before transmission. This delay varies dramatically with network congestion: during light traffic it might be negligible, but during peak usage packets can wait in queues for significant periods. Queuing delay is often the dominant source of latency in congested networks.

Transmission delay is the time required to push all the bits of a packet onto the physical transmission link. It depends on both the packet size and the link's bandwidth. For example, a 1,000-bit packet on a 1 Mbps link requires 1 millisecond of transmission time. Faster links have shorter transmission delays.

Propagation delay is the time for the signal itself to travel through the transmission medium, whether that is copper wire, fiber optic cable, or radio waves. It depends only on the distance traveled and the speed at which signals travel through the medium (roughly the speed of light for most media). For instance, a signal traveling 3,000 kilometers through fiber optic cable (where signals travel at about 2/3 the speed of light) experiences about 15 milliseconds of propagation delay.

Total network delay is simply the sum of all four components:

$$\text{Total Delay} = \text{Processing} + \text{Queuing} + \text{Transmission} + \text{Propagation}$$

Understanding these components is crucial because they are affected differently by network conditions: propagation delay cannot be changed (it is determined by physics and geography), but queuing and processing delays are heavily influenced by congestion and can be optimized.
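The worked numbers above (1 ms of transmission delay, 15 ms of propagation delay) can be reproduced with a short sketch. The processing and queuing values below are illustrative assumptions, not measurements:

```python
def transmission_delay_s(packet_bits: float, bandwidth_bps: float) -> float:
    """Time to push all bits of a packet onto the link."""
    return packet_bits / bandwidth_bps

def propagation_delay_s(distance_m: float, signal_speed_mps: float) -> float:
    """Time for the signal to traverse the medium."""
    return distance_m / signal_speed_mps

# 1,000-bit packet on a 1 Mbps link -> 0.001 s (1 ms)
t_trans = transmission_delay_s(1_000, 1_000_000)

# 3,000 km of fiber; signals travel at ~2/3 the speed of light (~2e8 m/s) -> 0.015 s (15 ms)
t_prop = propagation_delay_s(3_000_000, 2e8)

# Total delay adds processing and queuing delay (assumed values here)
processing_s, queuing_s = 0.0005, 0.002
total_s = processing_s + queuing_s + t_trans + t_prop
print(f"transmission={t_trans*1e3:.1f} ms, propagation={t_prop*1e3:.1f} ms, total={total_s*1e3:.1f} ms")
```

Note that only the first two components depend on link properties you can upgrade; propagation delay is fixed by distance and medium.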
Performance Metrics

Beyond bandwidth and delay, several other metrics characterize network performance.

Throughput measures the amount of data successfully transferred per unit time, typically expressed in bits per second. Unlike bandwidth (which is the capacity), throughput is the actual achieved rate. Your internet connection might have a bandwidth of 100 Mbps, but various factors might result in a throughput of only 60 Mbps.

Jitter quantifies how variable the delays are. Packets arriving with delays of 10 ms, 11 ms, 9 ms, 10 ms have low jitter because the times are consistent; delays of 5 ms, 20 ms, 8 ms, 30 ms have high jitter. High jitter degrades quality for time-sensitive applications like voice calls, which expect relatively consistent delays.

Bit error rate (BER) indicates the proportion of transmitted bits that are received incorrectly due to signal degradation or interference. A BER of $10^{-6}$ means that, on average, one out of every million bits is corrupted. Even small bit error rates require retransmission mechanisms to ensure reliability.

Network Congestion

Congestion occurs when a link or router node receives more data traffic than it can handle. When this happens, the first consequence is increased queuing delay: packets must wait longer as the queue grows. If congestion becomes severe, routers may run out of buffer space and begin dropping packets (discarding them entirely).

Packet loss creates a vicious cycle. When packets are lost, the sending application (following its transmission protocol) typically retransmits those packets. But if the network is already congested, adding retransmitted packets makes the congestion worse. This can lead to congestive collapse, a state in which so many packets are being retransmitted that network throughput actually decreases despite more packets being sent.
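One standard remedy for this loss-and-retransmit cycle is to wait progressively longer before each retransmission. A minimal sketch of exponential backoff with random jitter; the base delay and cap are illustrative assumptions, not values from any particular protocol:

```python
import random

def backoff_delay_s(attempt: int, base_s: float = 0.1, cap_s: float = 30.0) -> float:
    """Delay before the next retransmission: the upper bound doubles with
    each failed attempt (capped), and the actual wait is drawn at random
    within that bound so that many senders do not retry in lockstep."""
    upper = min(base_s * (2 ** attempt), cap_s)
    return random.uniform(0.0, upper)

# The waiting window roughly doubles with each successive loss,
# draining retransmission pressure from a congested network.
for attempt in range(6):
    print(f"attempt {attempt}: wait up to {min(0.1 * 2 ** attempt, 30.0):.1f} s")
```

The randomization ("jitter") matters: if every sender backed off by the same deterministic amount, their retransmissions would collide again at the same instant.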
Networks employ several strategies to prevent or mitigate congestion:

Exponential backoff makes senders that experience packet loss wait progressively longer before retransmitting, reducing the rate at which packets are sent back into the network and giving it time to clear.

TCP window reduction (part of TCP congestion control) makes senders reduce their transmission rate when they detect congestion, providing immediate relief to overwhelmed links.

Fair queuing is a scheduling discipline that services packets from different flows in a round-robin manner, ensuring all users or applications get a fair share of bandwidth and preventing one heavy user from monopolizing a link.

Quality of Service (QoS) priority schemes allow critical traffic (such as emergency services communication or medical telemetry) to bypass congested queues, ensuring that essential services remain available even when the network is under stress.

Network Resilience

Network resilience is the ability of a communication system to maintain acceptable service levels despite faults, failures, component degradation, attacks, or natural disasters. This is a crucial property for critical infrastructure. Resilient networks incorporate three key mechanisms.

Redundancy means having backup components or paths. If a single link fails, alternate paths can carry traffic; if one router fails, backup routers can take over its functions. Redundancy adds cost but ensures that no single point of failure can bring down the entire network.

Alternative routing paths allow traffic to be rerouted around failed components. Networks with mesh topologies, where many nodes have multiple connections to other nodes, provide more routing options than simpler topologies, improving resilience.

Rapid fault detection mechanisms enable the network to quickly identify when something has failed and reroute traffic before users notice significant service degradation.
The faster a network detects and responds to failures, the more resilient it appears to users. Multiple parallel paths between two endpoints, running through different intermediate nodes, provide this redundancy: if any single path fails, traffic can continue on the remaining paths.

Advanced Concepts: Teletraffic Engineering

Teletraffic engineering applies mathematical models, particularly queuing theory, to predict and manage traffic flow in telecommunications networks. Engineers use these models to ensure that networks are designed and operated to provide acceptable quality of service. This involves forecasting traffic patterns, dimensioning network capacity appropriately, and optimizing how traffic flows through the network. While this is a specialized field, the underlying principle is important: networks must be carefully engineered to handle expected traffic loads while maintaining performance.
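A classic queuing-theory result used in this kind of dimensioning is the Erlang B formula, which gives the probability that a new call (or connection) is blocked given the offered load and the number of circuits. A sketch using the standard recursive form; the 1% blocking target below is an illustrative design goal:

```python
def erlang_b(offered_load_erlangs: float, circuits: int) -> float:
    """Blocking probability when `offered_load_erlangs` of traffic is
    offered to `circuits` servers (Erlang B, recursive form)."""
    b = 1.0  # with zero circuits, every call is blocked
    for k in range(1, circuits + 1):
        b = (offered_load_erlangs * b) / (k + offered_load_erlangs * b)
    return b

# Dimensioning question: how many circuits keep blocking below 1%
# for 10 erlangs of offered traffic?
load = 10.0
n = 1
while erlang_b(load, n) > 0.01:
    n += 1
print(f"{n} circuits needed for <1% blocking at {load} erlangs")
```

This illustrates the general teletraffic workflow described above: forecast the offered load, pick an acceptable grade of service, and solve for the capacity that meets it.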
Flashcards
What does bandwidth measured in bits per second represent?
Maximum data rate of a communication path
What is the definition of network delay?
The total time (latency) for a bit of data to travel from one endpoint to another
What is processing delay in a network?
Time a router spends examining a packet header
What is queuing delay in a network?
Time a packet waits in a router’s output queue before transmission
What is transmission delay in a network?
Time required to push all bits of a packet onto the transmission link
What is propagation delay in a network?
Time for a signal to travel through the transmission medium
What four components make up the total network delay?
Processing delay, queuing delay, transmission delay, and propagation delay
What does throughput measure in a network?
Amount of successful data transferred per unit time
What does network jitter quantify?
Variability in packet arrival times
What does the bit error rate indicate?
Proportion of transmitted bits received incorrectly
When does network congestion occur?
When a link or node receives more traffic than it can handle
What are the primary negative outcomes of network congestion?
Increased queuing delay, packet loss, and blocking of new connections
How can retransmission mechanisms worsen network congestion?
Retransmitted packets add traffic to an already congested network, potentially leading to congestive collapse
How do Quality of Service (QoS) priority schemes help during congestion?
Allow critical traffic to bypass congested queues
What is the definition of network resilience?
Ability to maintain acceptable service levels despite faults, failures, or attacks
What is the purpose of applying mathematical models in teletraffic engineering?
To predict and manage traffic flow to ensure quality of service

Key Concepts
Network Performance Metrics
Bandwidth
Network delay
Throughput
Jitter
Network Management Techniques
Network congestion
Congestion control
Quality of service
Network Reliability and Planning
Network resilience
Teletraffic engineering