TCP RTT Analysis
VIT University, Chennai | Winter Semester 2025–2026
Analysis of Round Trip Time (RTT) Under Varying Traffic Conditions Using Wireshark & hping3
1. Introduction
Network performance in real-world systems is rarely measured in isolation. Among the many metrics that define the health of a TCP/IP network, Round Trip Time (RTT) stands out as one of the most telling indicators of end-to-end communication quality. RTT captures the time a packet takes to travel from a source to a destination and back — and its variation across different traffic conditions exposes the dynamics of congestion, buffering, and protocol behavior in ways that raw bandwidth figures cannot.
This Digital Assessment (DA-3) focuses exclusively on RTT as the network parameter under study, leveraging Wireshark's powerful packet dissection and I/O graphing capabilities to observe how RTT behaves under normal, low, medium, and high traffic loads. Traffic was generated programmatically using hping3 on Ubuntu Linux, ensuring controlled and reproducible packet injection rather than browser-based simulation.
2. Objectives
- To capture and analyze Round Trip Time (RTT) variations across four distinct traffic conditions — normal, low, medium, and high — using Wireshark on Ubuntu Linux.
- To understand how increasing traffic load and congestion influence TCP RTT values, retransmission rates, and acknowledgment behavior.
- To visualize RTT trends through Wireshark's built-in graphing tools and derive meaningful inferences about network performance under stress.
- To identify anomalies in RTT patterns such as sudden spikes, jitter, and bufferbloat effects caused by traffic injection.
- To document findings, propose optimization recommendations, and establish a methodology applicable to real-world network diagnostics.
3. References
- SharkFest Sessions and TCP analysis tutorials were used as the primary inspiration for this work, providing foundational knowledge on packet capture and network analysis using Wireshark.
- YouTube Video: https://youtu.be/Tz6IfyfodKo?si=TrkgeYumh8Ix5cN3
  This video served as the starting point for understanding RTT analysis through practical demonstrations and real-time packet capture.
- RFC 6298 – Computing TCP's Retransmission Timer: https://datatracker.ietf.org/doc/html/rfc6298
  This reference provided insights into TCP retransmission mechanisms and timing behavior.
- Wireshark Official Documentation – TCP Stream Graphs: https://www.wireshark.org/docs/wsug_html_chunked/ChStatTCPStreamGraphs.html
  This documentation was used for understanding graph generation and interpretation.
- GeeksforGeeks – Round Trip Time (RTT): https://www.geeksforgeeks.org/round-trip-time-rtt/
  This source provided conceptual understanding of RTT and its role in network performance.
4. Description
This blog documents DA-3 of the Computer Networks course at VIT University, focusing on the analysis of Round Trip Time (RTT) as a network parameter using Wireshark. Traffic was generated across four conditions — normal, low, medium, and high — using hping3 on Ubuntu Linux, and RTT behavior was studied through 20+ graphs generated from the captures. Inferences, findings, and recommendations are presented along with commands, architecture, and references.
5. Architecture of Work
The architecture represents the end-to-end workflow of this project. Traffic is first generated using hping3, which acts as the traffic generation tool injecting packets into the network. These packets transit through the network layer and are captured in real time by Wireshark, which monitors the network interface. The captured packets are stored as a PCAP file on disk for offline analysis. This PCAP file is then loaded into Scapy, a Python-based packet analysis library, which processes it and extracts RTT-related metrics from the captured data. Finally, the extracted metrics are visualized as graphs that characterize RTT behavior across normal, low, medium, and high traffic conditions.
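To illustrate the Scapy stage of this pipeline, below is a minimal sketch rather than the exact script used in this DA. It assumes the capture contains hping3's outgoing SYN probes and their SYN/ACK (or RST/ACK) replies, and pairs each SYN with the first reply whose acknowledgment number equals seq + 1; the filename capture.pcap is a placeholder.

# rtt_from_pcap.py - minimal sketch of the RTT-extraction stage (not the DA's actual script)
from scapy.all import rdpcap, IP, TCP

packets = rdpcap("capture.pcap")  # placeholder filename

sent = {}   # (src, dst, sport, dport, seq) -> timestamp of the outgoing SYN
rtts = []   # RTT samples in milliseconds

for pkt in packets:
    if IP not in pkt or TCP not in pkt:
        continue
    ip, tcp = pkt[IP], pkt[TCP]
    syn, ack = tcp.flags & 0x02, tcp.flags & 0x10
    if syn and not ack:
        # outgoing probe: remember when this SYN left
        sent[(ip.src, ip.dst, tcp.sport, tcp.dport, tcp.seq)] = float(pkt.time)
    elif ack:
        # a SYN/ACK (or RST/ACK) acknowledges the probe's seq + 1
        key = (ip.dst, ip.src, tcp.dport, tcp.sport, tcp.ack - 1)
        if key in sent:
            rtts.append((float(pkt.time) - sent.pop(key)) * 1000.0)

if rtts:
    print(f"samples={len(rtts)}  avg RTT={sum(rtts) / len(rtts):.2f} ms")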
6. Procedure
6.1 Setup
Wireshark was used to capture network traffic. Scapy was used to analyze PCAP files. hping3 was used to generate controlled traffic under different conditions.
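On Ubuntu, all three tools are available from the standard repositories; assuming stock package names, a typical setup would be:

sudo apt update
sudo apt install wireshark hping3 python3-scapy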
6.2 Normal Traffic
Normal traffic baseline was captured passively using Wireshark without injecting additional packets, observing natural background TCP/TLS traffic on the interface. This served as the reference RTT baseline for comparison.
Wireshark Capture:
This capture represents normal traffic conditions. The packet flow is stable with no significant retransmissions or duplicate acknowledgments, indicating minimal congestion and efficient communication.
6.3 Low Traffic
Command used:
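(Reconstructed from the flag descriptions below; the target 1.1.1.1 is an assumption based on the filtered captures discussed in Section 7.)

sudo hping3 -S -p 443 -i u10000 -d 256 1.1.1.1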
-S = SYN flag, -p 443 = destination port 443, -i u10000 = interval of 10000 microseconds (100 packets/sec), -d 256 = data size of 256 bytes. This serves as the baseline low-load condition.
Low packet rate traffic was generated using hping3. The network load was minimal, resulting in stable communication with very low congestion.
Wireshark Capture:
RTT observed: approximately 8–15 ms. Stable communication with minimal congestion. No retransmissions detected.
The packet capture under low traffic conditions shows stable communication with minimal packet exchange. No significant retransmissions or congestion-related events were observed, indicating efficient network performance. The traffic remains smooth with low packet rate, resulting in low delay and minimal network overhead.
6.4 Medium Traffic
Command used:
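(Reconstructed from the flag descriptions below; the target 1.1.1.1 is an assumption.)

sudo hping3 -S -p 443 -i u5000 -d 512 1.1.1.1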
-S = SYN flag, -p 443 = destination port 443, -i u5000 = interval 5000 microseconds (200 packets/sec), -d 512 = data size 512 bytes, 2× faster than low traffic.
Moderate traffic was generated using hping3 by reducing the packet interval. This increased the packet rate and introduced slight network load.
Wireshark Capture:
Traffic was generated using hping3 with controlled packet intervals, as per the command shown above, and verified through the Wireshark capture. Under medium traffic conditions, the increased packet rate produces moderate network load. Occasional retransmissions and variations in packet timing can be observed, indicating the onset of congestion effects.
6.5 Heavy Traffic
Command used:
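(Reconstructed from the flag descriptions below; the target 1.1.1.1 is an assumption.)

sudo hping3 -S -p 443 -i u1000 -c 1000 1.1.1.1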
-S = SYN flag, -p 443 = destination port 443, -i u1000 = interval 1000 microseconds (1000 packets/sec), -c 1000 = 1000 packets total, 5× faster than medium traffic.
High packet rate traffic was generated using hping3. The network load was significantly increased, resulting in unstable communication with severe congestion and connection resets.
Wireshark Capture:
The packet capture under high traffic conditions shows a repeated pattern of SYN → SYN,ACK → RST cycles, with RST packets highlighted in red in Wireshark. This indicates the target host actively rejecting connections due to the high volume of incoming SYN packets. The maximum RTT of 1011 ms — over 1 second — is approximately 80× higher than the minimum RTT, clearly demonstrating the impact of heavy congestion on round trip time. Significant RTT variance and connection instability were observed throughout this phase.
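For anyone reproducing this analysis, the reset cycles described above are easy to isolate with standard Wireshark display filters (generic filter syntax, not specific to this capture):

tcp.flags.reset == 1
tcp.analysis.flags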
7. Inferences
Graph 7.1: RTT vs Time (Normal Traffic)
The RTT vs Time graph for normal traffic reveals an interesting pattern despite no traffic being injected. RTT starts high at around 375 ms, drops sharply to approximately 65 ms, then gradually climbs back up to a peak of 865 ms around the 52-second mark before dropping back to near 0 ms. The high average RTT of 195.85 ms and jitter of 104.17 ms during normal conditions suggest that the baseline network itself had some inherent latency variability, possibly due to wireless interference or natural internet routing fluctuations to 1.1.1.1. The sharp peak at 52 seconds may indicate a momentary network congestion event unrelated to hping3 traffic. This baseline establishes a reference point against which all injected-traffic conditions are compared. The variability observed here is inherent to the wireless medium and not an artifact of deliberate traffic injection.
Graph 7.2: RTT vs Throughput (Normal Traffic)
The RTT vs Throughput scatter plot shows that most data points cluster near very low throughput values (0–0.005 Mbps) with RTT values ranging widely from near 0 ms to 465 ms. Two outlier points are visible — one at approximately 0.11 Mbps with 865 ms RTT and another at 0.14 Mbps with 375 ms RTT. This indicates that during normal traffic, throughput remained consistently low, and the occasional high RTT values were not caused by high throughput but rather by network-level delays. No clear positive or negative correlation between RTT and throughput is visible under normal conditions. The cluster pattern confirms that the baseline network is lightly loaded, and the outlier events are episodic rather than systematic. This graph reinforces that RTT under normal conditions is governed by routing and wireless factors rather than throughput-induced congestion.
Graph 7.3: RTT vs Packet Rate (Normal Traffic)
The RTT vs Packet Rate scatter plot shows data points distributed across packet rates of 2–38 pps with RTT values ranging from near 0 ms to 865 ms. The highest RTT of 865 ms occurs at the highest packet rate of approximately 38 pps, suggesting that even under normal conditions, bursts of higher packet rates momentarily elevated RTT. Most low packet rate points (2–6 pps) show wide RTT variance between 10 ms and 465 ms, indicating that packet rate alone does not fully explain RTT behavior during normal traffic — other factors such as routing and wireless medium contention also contribute. The scattered distribution across both low and moderate packet rates confirms multi-causal RTT variability. No strict linear relationship between packet rate and RTT is observable, which is a characteristic feature of background wireless traffic.
Graph 7.4: RTT vs Time (Low Traffic)
The RTT vs Time graph for low traffic demonstrates the most stable and well-behaved RTT pattern across all traffic conditions. From 0 to 75 seconds, RTT values remain consistently low between 3 ms and 10 ms, showing minimal variation and confirming that the network was operating well within its capacity at 100 packets/sec. Two brief spike regions are visible — one around the 25–40 second mark reaching 40 ms and another cluster after the 80-second mark peaking at 45 ms — both of which quickly recover back to baseline within seconds. These isolated spikes are likely caused by brief wireless medium contention or delayed ACK responses rather than genuine congestion. The low average RTT of 15.65 ms and jitter of 10.92 ms confirm that low traffic injection with -i u10000 created minimal network stress, making this the cleanest and most stable traffic condition observed in this blog.
Graph 7.5: RTT vs Throughput (Low Traffic)
The RTT vs Throughput scatter plot for low traffic shows a very clear and tight clustering pattern. The vast majority of data points are densely clustered at very low throughput values between 0 and 0.03 Mbps, with RTT values ranging from near 0 ms to 45 ms. One notable outlier exists at approximately 0.6 Mbps throughput with an RTT of around 9 ms, indicating a brief high-throughput burst where the network handled data very efficiently with low latency. The dense left-side cluster confirms that low traffic injection produces very low throughput as expected from SYN-only packets with minimal payload. The absence of any high RTT values at high throughput confirms that under low traffic conditions there is no congestion-induced latency and the network is operating efficiently throughout. This graph serves as the cleanest reference scatter plot in the entire DA, representing ideal network behavior with no visible congestion signature.
Graph 7.6: RTT vs Packet Rate (Low Traffic)
The RTT vs Packet Rate scatter plot for low traffic reveals two distinct clusters. A small background cluster exists at very low packet rates of 3–15 pps with RTT values between 6 ms and 42 ms, representing residual background TCP traffic captured alongside the hping3 traffic. The dominant cluster appears at high packet rates of 100–120 pps, directly corresponding to the hping3 injected traffic at -i u10000 (100 packets/sec). Within this dominant cluster, RTT values range tightly between 3 ms and 45 ms with most points concentrated between 6 ms and 10 ms, confirming stable and consistent packet processing at low load. Compared to medium traffic, where the injected-traffic cluster (at 145–160 pps) showed a wider RTT spread, the low traffic cluster is denser and more tightly packed, indicating better network stability at this injection rate. This two-cluster structure is a useful fingerprint distinguishing injected traffic from background traffic in low-load captures.
Graph 7.7: RTT vs Time (Medium Traffic)
The RTT vs Time graph for medium traffic shows a clear two-phase behavior that distinctly separates it from low traffic. During the first 85 seconds, RTT remains relatively controlled, oscillating between 8 ms and 55 ms with occasional small clusters. However, after the 85-second mark, RTT escalates sharply and aggressively, reaching peaks of 105 ms, 225 ms, and 250 ms within a short time window. This late-stage aggressive rise is a direct result of sustained medium traffic injection causing progressive queue buildup at the network level. Compared to low traffic, which had an average RTT of 15.65 ms, medium traffic shows a 3.4× higher average RTT of 53.71 ms, clearly demonstrating the measurable impact of increasing the packet injection rate from 100 pps to 200 pps on round trip latency. The two-phase pattern is a classic indicator of queue saturation onset — the network absorbs traffic smoothly until a threshold is crossed, after which latency escalates rapidly.
Graph 7.8: RTT vs Throughput (Medium Traffic)
The RTT vs Throughput scatter plot for medium traffic shows data points distributed across a throughput range of 0–0.016 Mbps. The highest RTT values of 225 ms and 252 ms appear at very low throughput values near 0.0005 Mbps and 0.005 Mbps respectively, confirming that elevated RTT during medium traffic is caused by queuing delay and congestion rather than high data throughput. At moderate throughput values of 0.008–0.016 Mbps, RTT drops significantly to 8–45 ms, indicating efficient packet processing during non-congested windows. The 105 ms point at 0.010 Mbps represents a transitional congestion event. Compared to low traffic where all points clustered tightly under 0.03 Mbps, medium traffic shows a slightly wider throughput spread, reflecting the increased packet injection rate generating more data flow across the network interface. The inverse relationship between throughput and RTT in this scatter plot is a hallmark of queue-induced rather than bandwidth-induced congestion.
Graph 7.9: RTT vs Packet Rate (Medium Traffic)
The RTT vs Packet Rate scatter plot for medium traffic reveals a very distinct and meaningful pattern with two separate clusters. The left cluster at low packet rates of 2–20 pps shows widely spread RTT values between 8 ms and 252 ms, representing the background TCP sessions that experienced significant RTT inflation due to network congestion caused by the hping3 injection. The right cluster at high packet rates of 145–160 pps directly corresponds to the hping3 injected SYN traffic at 200 pps, with RTT values ranging from 8 ms to 53 ms. The fact that the high RTT outliers of 225 ms and 252 ms appear in the low packet rate cluster rather than the high packet rate cluster suggests that background TCP sessions suffered more from the induced congestion than the hping3 traffic itself, which is a key finding unique to medium traffic conditions. This differential impact on background vs. injected traffic reveals an asymmetric congestion response in the network stack, where established connections bear a disproportionate latency burden from newly injected SYN traffic.
Graph 7.10: RTT vs Time (High Traffic)
The RTT vs Time graph for high traffic is one of the most dramatic and revealing graphs in this entire DA. Starting from near 200 ms at the 3-second mark, RTT rises steeply and continuously, reaching a stable plateau of approximately 1500–1800 ms by the 10-second mark and maintaining that elevated level for the remainder of the capture. Two extreme spikes reaching 2500 ms are visible at around the 23-second and 30-second marks, indicating momentary complete network saturation. The average RTT of 1435.89 ms is approximately 92× higher than the low traffic average of 15.65 ms and 27× higher than the medium traffic average of 53.71 ms, making the RTT escalation under high traffic conditions clearly non-linear. The 3493 samples confirm dense and continuous packet capture throughout the flood, and the low jitter of 33.65 ms despite extremely high RTT values indicates that once the network reached saturation, RTT stabilized at a consistently high level rather than fluctuating wildly. This plateau behavior is a classic signature of bufferbloat — where large buffers keep packets from being dropped but cause severe and persistent latency.
Graph 7.11: RTT vs Throughput (High Traffic)
The RTT vs Throughput scatter plot for high traffic reveals a striking and unique pattern not seen in any other traffic condition. The dominant cluster of points is concentrated at very high throughput values of 0.75–0.80 Mbps, with RTT values ranging from 100 ms to 2500 ms. This is a complete reversal from low and medium traffic where high RTT appeared at low throughput. Under high traffic flood conditions, the network interface is being saturated with packets producing high throughput, and simultaneously RTT is extremely elevated due to severe queue buildup and buffer saturation. Two extreme outliers at 2500 ms RTT appear at the highest throughput values near 0.80 Mbps, confirming that peak throughput directly corresponds to peak RTT under flood conditions. A small isolated cluster at 0.25 Mbps with RTT around 1600–1900 ms represents the initial ramp-up phase before full saturation was reached. The reversal of the throughput-RTT relationship compared to earlier conditions is one of the most significant findings in this blog, demonstrating that flood traffic operates in a fundamentally different congestion regime.
Graph 7.12: RTT vs Packet Rate (High Traffic)
The RTT vs Packet Rate scatter plot for high traffic is one of the most data-rich graphs in this DA, with points spanning packet rates from near 0 to 800 pps. A clear progressive pattern is visible — at low packet rates of 100–150 pps, RTT values range from 100 ms to 1950 ms showing high variance during the network ramp-up phase. As packet rate increases to 400–500 pps, RTT begins stabilizing between 500 ms and 1800 ms. At the highest packet rates of 550–800 pps, RTT values converge tightly into a dense band between 1250 ms and 1900 ms with two extreme outliers reaching 2500 ms. This convergence at high packet rates confirms that once the network reached full saturation, individual packet latency stabilized at a consistently high level. Compared to medium traffic where the packet rate cluster topped at 160 pps, high traffic reaches 800 pps — a 5× increase — which directly explains the 27× RTT increase observed between the two conditions. The non-linear relationship between packet rate multiplier (5×) and RTT multiplier (27×) conclusively proves that network congestion collapse is superlinear beyond a critical threshold.
Graph 7.13: RTT vs Time (High Traffic in Mobile Hotspot Environment)
The RTT vs Time graph for high traffic in a mobile hotspot environment is by far the most dramatic and conclusive graph in this entire DA. Starting from near 0 ms at the 8-second mark, RTT rises steeply and continuously in a near-perfect exponential curve, reaching a peak plateau of approximately 2600–2700 ms by the 18-second mark. This steep ramp-up perfectly illustrates buffer filling behavior — as the flood of packets overwhelms the network queue, each subsequent packet waits longer and longer, causing RTT to climb continuously. After the peak at 18–20 seconds, RTT shows a brief oscillation between 1000 ms and 2700 ms indicating the network attempting recovery while still under load. After the flood stops around the 28-second mark, RTT drops sharply in a long declining slope back toward near 0 ms by the 55-second mark, demonstrating TCP's congestion recovery mechanism. The average RTT of 1962.86 ms is approximately 125× higher than low traffic and 37× higher than medium traffic, confirming extreme non-linear RTT scaling under flood conditions. The 10906 samples confirm the sheer volume of packets captured during this phase.
Graph 7.14: RTT vs Throughput (High Traffic in Mobile Hotspot Environment)
The RTT vs Throughput scatter plot for high traffic in a mobile hotspot environment reveals the most striking throughput-RTT relationship observed across all conditions. The dominant cluster is concentrated at very high throughput values of 8–8.5 Mbps, with RTT values spanning the entire range from near 0 ms to 3600 ms. This wide vertical spread at peak throughput confirms that the network interface was being saturated at maximum capacity while simultaneously experiencing extreme RTT variance — some packets still getting through quickly while others were severely delayed in the buffer queue. A secondary cluster exists at near 0 Mbps throughput with RTT values of 2000–3400 ms, representing the early ramp-up phase before full throughput saturation was achieved. Two isolated points at 3.5 Mbps with RTT around 2000–3200 ms represent transitional mid-ramp states. The maximum throughput of 8.5 Mbps compared to medium traffic's 0.016 Mbps represents a 530× increase, clearly showing the scale of flood traffic generated.
Graph 7.15: RTT vs Packet Rate (High Traffic in Mobile Hotspot Environment)
The RTT vs Packet Rate scatter plot for high traffic in a mobile hotspot environment is the richest and most data-dense graph in this DA, with packet rates reaching up to 2300 pps — approximately 23× higher than the low traffic injected cluster's peak of 100–120 pps. At very low packet rates near 0 pps, RTT values cluster between 2700–3400 ms, representing early capture windows before the flood fully kicked in. As packet rate increases from 300 to 1000 pps, RTT values range between 2000–3400 ms showing sustained high latency during the flood ramp-up. At peak packet rates of 1900–2300 pps, RTT values show the widest spread from near 0 ms to 3600 ms, indicating the most chaotic network behavior at maximum injection rate. The extreme outliers reaching 3600 ms at the highest packet rates represent the worst-case RTT experienced during complete network saturation. This graph conclusively proves that packet rate and RTT have a complex non-linear relationship under flood conditions — simply doubling the packet rate does not double RTT but rather pushes it into an entirely different magnitude of latency.
Graph 7.16: RTT vs Time (Duplicate ACK)
The RTT vs Time graph for duplicate acknowledgement traffic shows the most extreme and volatile RTT behavior observed across this entire DA. Starting at approximately 400 ms at the 9-second mark, RTT rises dramatically to a peak of 15000 ms at around the 22-second mark before collapsing sharply back down to near 800 ms at the 28-second mark and gradually settling toward 0 ms by the 54-second mark. This sharp inverted V-shape pattern is a classic signature of duplicate ACK induced retransmission — when the receiver detects missing segments it sends duplicate ACKs, causing the sender to retransmit and the RTT measurement to accumulate the full retransmission delay. The jitter of 2813.91 ms is the highest recorded across all traffic conditions in this DA, nearly 257× higher than low traffic jitter of 10.92 ms, confirming that duplicate ACK events produce the most unpredictable and unstable RTT behavior of any condition tested. Unlike heavy flood conditions where RTT plateaus at a consistently high level, duplicate ACK conditions produce a sharp spike and recovery — making them uniquely destructive to time-sensitive application performance.
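These events can be isolated in Wireshark using its built-in expert analysis display filters (standard filter names, applicable to any TCP capture):

tcp.analysis.duplicate_ack
tcp.analysis.retransmission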
Graph 7.17: RTT vs Throughput (Duplicate ACK)
The RTT vs Throughput scatter plot for duplicate ACK traffic reveals a direct and clear positive correlation between throughput and RTT — a pattern unique to this condition and not observed in any other traffic scenario. The extreme RTT outlier of 15400 ms appears at the highest throughput value of approximately 18 Mbps, confirming that the retransmission burst causing duplicate ACKs simultaneously produced a high throughput spike as the sender flooded the network with retransmitted segments. The remaining data points cluster at very low throughput values of 0–0.5 Mbps with RTT values between 100 ms and 800 ms, representing normal transmission windows between retransmission events. This clear separation between normal transmission points and the single extreme retransmission event makes this graph one of the most visually compelling in the entire blog. The positive correlation seen here is the opposite of what is typically expected, and serves as a diagnostic indicator that the RTT spike was driven specifically by a retransmission burst rather than passive queue buildup.
Graph 7.18: RTT vs Packet Rate (Duplicate ACK)
The RTT vs Packet Rate scatter plot for duplicate ACK traffic shows a highly sparse but extremely informative distribution. The peak RTT of 15400 ms occurs at a moderate packet rate of approximately 2000 pps, indicating that the retransmission burst that triggered maximum RTT was not at the absolute highest packet rate but rather at a specific congestion threshold. At the highest packet rate of approximately 5400 pps, RTT drops to around 400 ms, suggesting that at maximum injection rate the network was processing packets rapidly with lower per-packet delay despite the high load. The isolated point at 400 pps with 800 ms RTT represents an intermediate retransmission event. The extremely wide RTT range from near 0 ms to 15400 ms across a relatively narrow packet rate range of 0–5400 pps confirms that duplicate ACK induced RTT is driven by retransmission timing rather than raw packet rate, making it fundamentally different in nature from the congestion-driven RTT observed in heavy traffic conditions. This graph is the most instructive in demonstrating that RTT pathology has multiple distinct causes, each requiring different diagnostic and remediation approaches.
Graph 7.19: Average RTT Across All Traffic Conditions
The Average RTT comparison bar chart provides the clearest and most comprehensive summary of RTT behavior across all six conditions tested in this blog. The chart reveals a striking non-linear progression — Low traffic at 15.65 ms and Medium at 53.71 ms are barely visible compared to the three high-stress conditions. Heavy WiFi at 1435.89 ms, Heavy Hotspot at 1962.86 ms, and Duplicate ACK at 1560.44 ms are all approximately 90–125× higher than Low traffic, confirming that RTT degradation under stress is not gradual but catastrophic. The Normal traffic bar at 195.85 ms appearing higher than Medium traffic at 53.71 ms is a notable observation — this is because Normal traffic captured background WiFi activity with inherent wireless latency, while Medium traffic was a controlled and filtered capture to 1.1.1.1 only. The Heavy Hotspot bar being the tallest at 1962.86 ms compared to Heavy WiFi at 1435.89 ms confirms that mobile hotspot networks are more vulnerable to RTT inflation under identical traffic load conditions than WiFi networks. This single chart serves as the executive summary of the entire DA, making the non-linear nature of RTT degradation immediately apparent to any observer.
Graph 7.20: Jitter Across All Traffic Conditions
The Jitter comparison bar chart is arguably the most visually dramatic graph in this entire blog. Five of the six conditions — Normal, Low, Medium, Heavy WiFi, and Heavy Hotspot — show jitter values so low they are barely visible on the chart, all under 105 ms. The Duplicate ACK bar at 2813.91 ms completely dominates the chart, standing approximately 27× taller than the next highest jitter value. This single visualization conclusively proves that duplicate acknowledgement events are the most destructive condition for network stability — not heavy traffic flood, not mobile hotspot stress, but specifically the retransmission cycles triggered by duplicate ACKs. The near-zero jitter of Heavy Hotspot at 13.92 ms despite its extremely high average RTT of 1962.86 ms is a particularly interesting finding — it shows that under mobile hotspot flood conditions RTT was consistently high rather than variable, while duplicate ACK conditions produced both high RTT and extreme unpredictability simultaneously. Network operators should treat high jitter as a separate and more urgent alarm condition than high average RTT, as this chart demonstrates that the two metrics can diverge dramatically depending on the cause of network stress.
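Because jitter carries so much diagnostic weight in this comparison, it is worth showing how it can be computed from the RTT samples. The sketch below uses the mean absolute difference between consecutive samples, a common definition; whether the DA's script used exactly this formula is an assumption.

def jitter_ms(rtts):
    # Mean absolute difference between consecutive RTT samples (in ms).
    # Assumes the samples are in capture order.
    if len(rtts) < 2:
        return 0.0
    return sum(abs(b - a) for a, b in zip(rtts, rtts[1:])) / (len(rtts) - 1)

# Two series with similar shapes but very different stability:
print(jitter_ms([10.0, 11.0, 10.0, 12.0, 11.0]))    # low jitter (1.25 ms)
print(jitter_ms([10.0, 400.0, 12.0, 380.0, 11.0]))  # high jitter (378.75 ms) despite low minimum RTT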
8. New Findings
- RTT degradation under flood traffic conditions is catastrophically non-linear — Low traffic produced an average RTT of 15.65 ms while Heavy WiFi produced 1435.89 ms, representing a 92× increase for only a 10× increase in packet rate, confirming that network performance collapse accelerates beyond a critical congestion threshold.
- Duplicate ACK events produce the highest jitter of any condition tested at 2813.91 ms — approximately 257× higher than Low traffic jitter of 10.92 ms — making retransmission-induced instability far more destructive to network quality than raw traffic volume alone.
- Mobile hotspot networks are significantly more susceptible to RTT inflation under identical flood conditions compared to WiFi — Heavy Hotspot produced 1962.86 ms average RTT versus Heavy WiFi's 1435.89 ms under the same hping3 command, a 37% increase purely due to network medium difference.
- Despite extremely high average RTT of 1962.86 ms, Heavy Hotspot traffic produced the lowest jitter of all conditions at 13.92 ms — indicating that mobile networks under saturation maintain consistently high but stable latency rather than variable latency, suggesting a fundamentally different queuing mechanism compared to WiFi.
- Background TCP sessions suffered greater RTT inflation from injected traffic than the hping3 SYN packets themselves — high RTT outliers of 225–252 ms appeared in the low packet rate cluster rather than the high packet rate cluster in Medium traffic graphs, confirming that existing connections are more vulnerable to induced congestion than freshly injected SYN packets.
9. Recommendations
- Network monitoring dashboards should display jitter as a primary metric alongside average RTT — jitter of 2813.91 ms during duplicate ACK conditions would have been completely missed if only average RTT was tracked, leading to a false impression of acceptable network health.
- Mobile hotspot environments should not be used for latency-sensitive applications under heavy load — the 37% higher RTT compared to WiFi under identical traffic conditions makes them unsuitable for real-time communication such as VoIP or video conferencing during network stress.
- Packet capture and analysis tools like Scapy should be used alongside hping3 for RTT measurement in high packet loss environments — hping3 alone significantly underestimates true RTT by excluding dropped packets from its average calculation, as demonstrated by the 1962.86 ms Scapy measurement versus hping3's reported 30.4 ms for the same capture.
- Duplicate ACK suppression mechanisms such as TCP SACK (Selective Acknowledgement) should be enabled on network interfaces to reduce jitter caused by retransmission cycles — this DA demonstrated that duplicate ACKs produce jitter 27× higher than even the heaviest flood traffic condition tested.
- For production network testing, traffic should always be generated using controlled rate options such as -i u10000 or -i u1000 rather than --flood mode — controlled injection maintains measurable and reproducible RTT values, while flood mode introduces packet loss that corrupts RTT statistics and makes cross-condition comparison unreliable.
10. Use of AI
AI tools — specifically Claude (Anthropic) and ChatGPT (OpenAI) — were used at multiple stages of this assignment:
1. Blog Structure and Documentation — Claude was used to structure the entire blog content including introduction, objectives, procedure, inferences, and conclusion sections, ensuring professional quality and completeness aligned with the DA guidelines.
2. Code Assistance — ChatGPT provided the hping3 command for generating duplicate acknowledgement traffic, and Claude helped explain the Scapy RTT calculation logic based on TCP sequence and acknowledgement number matching.
3. Graph Interpretation — Claude assisted in interpreting RTT patterns across all six traffic conditions including the counterintuitive finding that Heavy Hotspot produced lower jitter than Heavy WiFi despite higher average RTT, and the significance of the 2813.91 ms jitter spike during duplicate ACK events.
4. Conceptual Clarity — AI tools were used to understand the relationship between RTT, jitter, throughput, and packet rate under varying congestion levels, and to explain phenomena such as TCP port reuse warnings and bufferbloat behavior observed in Wireshark.
5. Data Interpretation — Claude helped identify and explain the discrepancy between hping3's reported RTT of 30.4 ms and Scapy's measured RTT of 1962.86 ms for the same capture, revealing that hping3 excludes dropped packets from its RTT calculation leading to significant underreporting under high packet loss conditions.
The use of AI significantly improved the depth and quality of analysis and documentation while the core experimental work — packet capture using Wireshark, traffic generation using hping3, Scapy script execution, and graph generation — was performed independently.
11. Conclusion
This experiment successfully demonstrated the impact of varying network traffic conditions on Round Trip Time using a combination of Wireshark packet capture, hping3 traffic generation, and Scapy-based programmatic analysis on Ubuntu Linux. Across six distinct conditions — Normal, Low, Medium, Heavy WiFi, Heavy Hotspot, and Duplicate ACK — a clear and consistent pattern emerged: as traffic intensity increases, RTT increases non-linearly and unpredictably, with jitter proving to be a far more sensitive indicator of network stress than average RTT alone.
The study produced several significant findings. Heavy flood traffic caused a 92× RTT increase compared to Low traffic, confirming that network performance collapse beyond a critical threshold is catastrophic rather than gradual. The comparison between Heavy WiFi and Heavy Hotspot environments under identical traffic conditions revealed that mobile networks are 37% more susceptible to RTT inflation, highlighting the importance of network medium in latency analysis. Most strikingly, Duplicate ACK events produced jitter of 2813.91 ms — 27× higher than even the heaviest flood traffic — establishing retransmission-induced instability as the single most destructive condition for network quality observed in this DA.
The combination of Wireshark for live capture and expert info analysis, hping3 for controlled and reproducible traffic injection, and Scapy for programmatic RTT extraction across 20 graphs provided a comprehensive and multi-dimensional view of RTT behavior that would not have been possible through manual observation alone. This work reinforces the importance of measuring jitter alongside average RTT, using pcap-based analysis tools for accurate measurement under packet loss conditions, and testing across multiple network environments for statistically reliable and practically meaningful conclusions.
12. YouTube Video Link
https://www.youtube.com/watch?v=9tgw4o6n2AI
13. GitHub Link
All PCAP files, output graphs, and the Python script used in this experiment are available in the GitHub repository linked below.
https://github.com/saiabhishek-D/TCP-RTT-ANALYSIS-DA3/tree/main
14. Acknowledgement
I take this opportunity to express my heartfelt gratitude to everyone who supported and guided me through the completion of this Digital Assessment.
- My Parents — for their endless motivation, sacrifice, and support that keeps me going through every challenge in my academic journey.
- VIT University, Chennai — for offering a rich academic environment, well-equipped laboratories, and the resources necessary to carry out network experiments of this nature.
- Dr. T. Subbulakshmi — for crafting an assignment that goes beyond theoretical knowledge and pushes students toward real-world experimentation. The structured DA guidelines were instrumental in shaping the direction and depth of this work.
- SCOPE Department, VIT Chennai — for maintaining a high standard of education in the Computer Networks curriculum and providing the necessary lab support for B.Tech CSE students during the Winter Semester 2025–2026.
- Gerald Combs, founder of Wireshark and ACM Software System Award winner (2018) — for providing fantastic software for traffic analysis.
- My friends and classmates — for the late-night debugging sessions, shared resources, technical inputs, and the motivation that comes from learning together.
- The open-source community — behind Wireshark, hping3, and Scapy, whose tools made it possible to capture, inject, and analyze network traffic with precision and depth that commercial tools often cannot match.
15. Peer Comments
1. The comparison between Heavy WiFi and Heavy Hotspot under identical hping3 commands is a standout finding. The 37% higher RTT on mobile hotspot clearly demonstrates how network medium affects latency under stress.
2. Well structured blog with clear procedure documentation. The hping3 commands are well explained with flag descriptions and the progression from 100 pps to 1000 pps across traffic conditions is logical and reproducible.
3. The decision to include Duplicate ACK traffic as a separate condition was creative and insightful. The jitter of 2813.91 ms being 27 times higher than heavy flood traffic is a surprising and well-supported finding.
4. The summary bar charts for Average RTT and Jitter across all six conditions are the most effective visualizations in the blog. They immediately communicate the non-linear nature of RTT degradation without needing any explanation.