The first part of this chapter presented our approach for introducing realistic
*network-level parameters* in our traffic generation methodology.
In particular, we considered how to measure three basic network parameters
that have a major impact on the throughput of a TCP connection:
round-trip time, receiver window size, and loss rate.
As in our analysis of source-level behavior, we focused on the efficient
analysis of segment headers for extracting these network parameters, and evaluated
the accuracy of our chosen measurement methods using testbed experiments.

Our discussion of measuring round-trip time considered the classic SYN estimator, and proposed a novel technique based on computing one-side transit times (OSTTs). Our technique has two main advantages. First, it is applicable to connections observed both at the edges and in the core of the network. In either case, it provides a way to measure the distance, in terms of network delay, between the monitoring point and the end hosts taking part in each connection. Second, OSTT-based estimation provides a number of samples proportional to the number of data segments in a TCP connection, unlike the single sample that can be obtained with the SYN estimator. This gives a far better view of the inherent variability in round-trip times. It also enabled us to study the impact of delayed acknowledgments on path round-trip time estimation from segment headers. We showed clearly that delayed acknowledgments substantially inflate estimates of round-trip time that rely on non-robust statistics like averages and maxima. For this reason, we favor the use of minima or medians to estimate path round-trip time, which proved highly accurate in our testbed experiments.
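The robustness argument can be sketched in a few lines of code. This is a hypothetical illustration, not our measurement tool; the function name, the sample values, and the delayed-ACK inflation figure are all invented for the example:

```python
from statistics import median

def estimate_rtt(ostt_side_a, ostt_side_b, stat=min):
    """Combine one-side transit time (OSTT) samples measured on the
    two sides of a monitoring point into a path round-trip time
    estimate.  Each sample is the delay between a data segment and
    the acknowledgment it triggers, as observed at the monitor."""
    if not ostt_side_a or not ostt_side_b:
        raise ValueError("need OSTT samples from both directions")
    return stat(ostt_side_a) + stat(ostt_side_b)

# Hypothetical samples in seconds; 0.210 and 0.231 stand for segments
# whose acknowledgment was delayed (delayed ACKs can add up to 200 ms).
# Robust statistics discard this inflation, while a maximum absorbs it:
side_a = [0.010, 0.011, 0.210, 0.012]
side_b = [0.030, 0.031, 0.231, 0.030]
rtt_min = estimate_rtt(side_a, side_b, stat=min)      # 0.040 s
rtt_med = estimate_rtt(side_a, side_b, stat=median)   # 0.042 s
rtt_max = estimate_rtt(side_a, side_b, stat=max)      # 0.441 s, badly inflated
```

Even two delayed acknowledgments out of four samples are enough to inflate the maximum-based estimate by an order of magnitude, while the minimum and median are unaffected.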

We also studied the empirical distributions of round-trip times in our collection of five traces. Several observations stand out. The edge traces from UNC and Leipzig showed between 20% and 35% of connections with very short round-trip times, below 20 milliseconds. In contrast, the backbone trace from Abilene showed less than 1% of connections with such small round-trip times. Our analysis of the total number of bytes carried in connections with a given round-trip time revealed that Leipzig-II had a far larger fraction of bytes (10%) carried in connections with round-trip times above 500 milliseconds. The distributions of round-trip times differed substantially not only in their range but also in their shape, even among traces collected at the same site. For example, the UNC 1 PM trace showed only 15% of connections with round-trip times above 100 milliseconds, while this percentage rose to 25% and 38% for UNC 7:30 PM and 1 AM, respectively.

The second parameter we considered was the maximum size of the receiver window, which, in combination with the round-trip time, puts a hard limit on the maximum throughput of a TCP connection. This parameter is straightforward to measure, since each TCP segment contains a field with the size of the receiver window at the time of its sending. Taking the maximum of the observed receiver windows provides an accurate way of measuring the largest receiver window supported by an endpoint, even for connections that grow their limit some time after the connection is opened. We used this technique to study the distribution of maximum receiver window sizes in our traces, and found a large fraction of connections with a small maximum. Between 45% and 65% of the connections had maximum receiver window sizes below 20 KB, well below the 64 KB limit.
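The interplay between these two measurements can be made concrete with a small sketch; the function names and the 16 KB / 100 ms example values are hypothetical:

```python
def max_receiver_window(advertised_windows):
    """Largest receiver window advertised on a connection.  Taking the
    maximum over all observed segments handles endpoints that raise
    their limit some time after the connection is opened."""
    return max(advertised_windows)

def throughput_cap(rwnd_bytes, rtt_seconds):
    """Hard limit imposed by the receiver window: at most one full
    window of data can be in flight (unacknowledged) per round trip."""
    return rwnd_bytes / rtt_seconds

# A 16 KB window on a 100 ms path caps the connection at 160 KB/s
# (roughly 1.3 Mbps) no matter how fast the underlying links are.
rwnd = max_receiver_window([8192, 16384, 16384])   # 16384 bytes
cap = throughput_cap(rwnd, 0.100)                  # 163840.0 bytes/s
```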

The last network parameter that we studied was the segment loss rate. Loss has a
substantial impact on TCP connections. First, losses force the endpoints to retransmit
segments to maintain a reliable communication. Second, TCP endpoints use losses as the
signal of congestion, and react to them by lowering their sending rate.
For these two reasons, even a small number of losses can have a dramatic effect on a
TCP connection.
Measuring loss rates purely from segment headers must necessarily be based on the
same mechanisms that TCP endpoints use to detect losses: retransmissions and duplicate
acknowledgments. We proposed a technique to measure the loss rate of data segments
using these signals, differentiating between losses before the monitoring point,
detected using duplicate acknowledgments, and losses after the monitoring point, detected
using retransmissions. Our evaluation using testbed experiments showed that our technique
is reasonably accurate. The experiments also illustrated the impact of lost acknowledgments,
which inflate data segment loss rates, and the variability introduced by simulating losses with
*dummynet*'s dropping mechanism. We also studied the loss rates in our traces, and
found that between 92.5% and 96.2% of the TCP connections experienced no losses. However,
connections with one or more losses accounted for 46% (Leipzig-II) to 78% (UNC 1 AM) of the total
bytes in the traces, and connections with loss rates above 1% (*i.e.*, moderately high) accounted for
8% (Abilene-I) to 34% (UNC 1 AM) of the total bytes.
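The core of the classification can be illustrated with a deliberately simplified toy model. This sketch ignores many realities of TCP traces (sequence number wrap, partial retransmissions, reordering) and the sequence and ACK numbers are invented; it only shows how the two loss signals separate losses before and after the monitor:

```python
from collections import Counter

def classify_losses(data_seqs, ack_nums, dup_threshold=3):
    """Toy classifier for data-segment losses seen at a monitor.

    * A sequence number captured more than once is a retransmission:
      the monitor saw the original, so the loss occurred *after* the
      monitoring point.
    * A run of duplicate acknowledgments naming a sequence number whose
      data segment was captured only once suggests the original never
      reached the monitor, i.e. the loss occurred *before* it.
    Returns (losses_before, losses_after)."""
    seq_counts = Counter(data_seqs)
    after = sum(1 for n in seq_counts.values() if n > 1)
    before = 0
    for ack, n in Counter(ack_nums).items():
        # n - 1 duplicates of the original cumulative ACK; the
        # duplicated ACK number points at the first missing byte.
        if n - 1 >= dup_threshold and seq_counts.get(ack, 0) <= 1:
            before += 1
    return before, after

# Hypothetical capture: segment 1448 retransmitted (lost after the
# monitor); four ACKs for 4344 with that segment seen once (lost before).
seqs = [0, 1448, 1448, 2896, 4344]
acks = [1448, 4344, 4344, 4344, 4344, 5792]
classify_losses(seqs, acks)   # (1, 1)
```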

The second part of this chapter described our approach for comparing real
and synthetic traffic using several *network-level metrics*.
The goal of such a comparison is to evaluate how closely synthetic traffic
generated in a closed-loop manner can reproduce the aggregate characteristics
of real traffic.
This type of comparison concerns itself with the *extrinsic*
characteristics of the generated traffic, which were not a direct input to
the traffic generators.
In contrast, evaluating how well source-level properties
and network-level parameters are preserved by our traffic generation method and its implementation
focuses on the *intrinsic* characteristics of the generated traffic, which
are the input to the traffic generation system.
We first discussed how to study the time series of packet and byte throughput, using plots of
the time series at a coarse scale, tens of seconds. This broad view was especially useful for
identifying major trends and features in the traffic.
We used this approach to study the composition of our traces, finding that sequential
connections are mostly responsible for the features of the time series, while the aggregate
throughput of concurrent connections is generally smooth. We further differentiated traffic
from connections for which we observed every packet between TCP connection establishment and
termination from traffic of partially-captured connections, uncovering substantial boundary
effects in the UNC traces and, to some extent, in the Abilene-I trace.
We also showed that the fraction of the total throughput from unidirectional connections is
generally negligible. The only exception is Abilene-I, where routing asymmetries explain the
finding that 1/4 of total Cleveland-to-Indianapolis bytes were carried
in connections whose packets appear in only one direction of the trace.
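The coarse-scale view amounts to simple binning of packet records. A minimal sketch, with a hypothetical function name and invented packet records:

```python
def throughput_series(packets, bin_seconds=10.0):
    """Bin (timestamp, size_in_bytes) packet records into a byte
    throughput time series; coarse bins (tens of seconds) give the
    broad view used to spot trends, boundary effects, and anomalies."""
    if not packets:
        return []
    nbins = int(max(t for t, _ in packets) // bin_seconds) + 1
    series = [0.0] * nbins
    for t, size in packets:
        series[int(t // bin_seconds)] += size
    return [b / bin_seconds for b in series]   # bytes per second

pkts = [(0.5, 1500), (3.0, 1500), (12.0, 500)]
throughput_series(pkts)   # [300.0, 50.0]
```

The same binning, restricted to subsets of connections (sequential vs. concurrent, fully- vs. partially-captured, bidirectional vs. unidirectional), yields the per-class time series compared in the text.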

The second way in which we proposed to examine throughput was to construct the marginal distributions of the time series at a fine scale (10 milliseconds). While marginals ignore dependency structure, their interpretation in networking terms is intuitive. Plots of the body of the marginal distribution provide an overview of the range of fine-scale throughputs in a trace, while plots of the tail make the highest fine-scale throughputs stand out. The analysis of our traces showed that Poisson arrivals cannot be used to model either packet or byte throughputs. The bodies of the marginal distributions from our traces are between 2 and 3 times more variable than those from Poisson arrivals with the same mean. We also showed that the marginal distributions from our traces have statistically significant departures from normality, which are most prominent in the tails. This was demonstrated using two methods: Q-Q plots with simulation envelopes and the Kolmogorov-Smirnov test of normality. Both methods were applied at scales of aggregation between 10 milliseconds and 10 seconds. While the distributions became closer to normality as the scale increased, only a few of them were statistically consistent with the normal distribution at the 10-second scale. For this reason, our analysis of marginal distributions will rely on CDFs of the bodies and CCDFs of the tails, rather than making assumptions about the underlying statistical distribution.
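The "2 to 3 times more variable than Poisson" comparison can be made concrete through coefficients of variation. The bin counts below are invented for illustration; the Poisson baseline follows from the fact that Poisson arrivals binned into equal intervals yield Poisson-distributed counts, whose coefficient of variation is 1/sqrt(mean):

```python
import math

def coefficient_of_variation(counts):
    """Standard deviation over mean of a fine-scale marginal."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return math.sqrt(var) / mean

# Hypothetical 10 ms bin counts for a bursty trace.
bursty = [20, 180, 30, 170, 25, 175]
mean = sum(bursty) / len(bursty)            # 100.0 packets per bin
cv_trace = coefficient_of_variation(bursty)
cv_poisson = 1 / math.sqrt(mean)            # 0.1 for the same mean
# The ratio cv_trace / cv_poisson quantifies the extra burstiness
# relative to a Poisson model with the same mean rate.
```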

Our third type of throughput analysis focused on the long-range dependence of traffic. We employed wavelet analysis for this purpose, which has been shown in the literature to be robust and accurate. This method provides both an overview of the way in which variability changes with scale, using wavelet spectrum plots, and a state-of-the-art estimator of the Hurst parameter with confidence intervals. Our discussion illustrated how clearly wavelet spectra and Hurst parameter estimates differentiate between the short-range dependence of Poisson arrivals and the long-range dependence in our traces. Our traces show remarkably high Hurst parameter estimates, well above 0.9 for both packet and byte throughput.
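The idea behind the wavelet estimator can be sketched with Haar coefficients. This is a rough simplification of the Abry-Veitch logscale-diagram method, omitting its bias correction, weighted regression, and confidence intervals; the function name and parameters are hypothetical:

```python
import numpy as np

def hurst_wavelet(x, jmin=2, jmax=6):
    """Sketch of the wavelet (logscale diagram) Hurst estimator using
    Haar coefficients.  For long-range dependent traffic, the log2
    energy of the detail coefficients grows linearly in the octave j
    with slope 2H - 1; fitting that line yields an estimate of H."""
    x = np.asarray(x, dtype=float)
    scales, log_energy = [], []
    for j in range(1, jmax + 1):
        n = len(x) // 2
        detail = (x[0:2*n:2] - x[1:2*n:2]) / np.sqrt(2.0)  # Haar details
        x = (x[0:2*n:2] + x[1:2*n:2]) / np.sqrt(2.0)       # next coarser scale
        if j >= jmin and len(detail) >= 8:
            scales.append(j)
            log_energy.append(np.log2(np.mean(detail ** 2)))
    slope = np.polyfit(scales, log_energy, 1)[0]
    return (slope + 1.0) / 2.0

# Uncorrelated noise has no long-range dependence, so the estimate
# should fall near H = 0.5; LRD traffic yields values approaching 1.
rng = np.random.default_rng(7)
h = hurst_wavelet(rng.normal(size=2 ** 14))
```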

Finally, the chapter introduced the plot of the time series of active connections.
This type of analysis is essential to validate the realism of traffic generation for certain
experiments where per-connection state is important. Our analysis considered two
definitions of active connections: a connection was considered active between the arrivals
of its first and last segments, or between the arrivals of its first and last segments
that carried application data, *i.e.*, not control segments. We demonstrated that these
two definitions have a dramatic impact on the number of active connections. We will favor
the latter definition (data active connections) for our evaluation in Chapter 6,
since the focus of our modeling is the source-level behavior in terms of useful data exchanges.
Our discussion of active connections also considered the effect of trace boundaries, revealing
that a large fraction of the active connections came from partially-captured connections.
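Counting active connections under either definition reduces to interval counting; only the (start, end) intervals change. A minimal sketch with invented intervals, showing how the two definitions can disagree at the same instant:

```python
import bisect

def active_connection_series(intervals, sample_times):
    """Count connections active at each sample time, where a connection
    with interval (start, end) is active for start <= t <= end.
    Counting starts and ends separately via binary search avoids
    scanning every connection at every sample point."""
    starts = sorted(s for s, _ in intervals)
    ends = sorted(e for _, e in intervals)
    return [bisect.bisect_right(starts, t) - bisect.bisect_left(ends, t)
            for t in sample_times]

# Hypothetical intervals: first/last segment of any kind vs. first/last
# segment carrying application data.  At t = 7 both connections are
# still "active" under the first definition but idle under the second.
all_segments = [(0.0, 10.0), (2.0, 8.0)]
data_segments = [(3.0, 5.0), (4.0, 6.0)]
active_connection_series(all_segments, [7.0])    # [2]
active_connection_series(data_segments, [7.0])   # [0]
```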

Doctoral Dissertation: *Generation and Validation of Empirically-Derived TCP Application Workloads*

© 2006 Félix Hernández-Campos