TCP Traffic

Overview

This module gives students experience generating and analyzing TCP flows. Students will use iperf to create a flow and view the sawtooth behavior. A second flow will then be introduced to show how TCP flows share a link.

Setup Time: Varies
Tutorial Time: 20 minutes

Objectives

Upon completing this module you will:

  • Be able to use iperf to generate TCP traffic
  • Have an understanding of how TCP utilizes and shares a link's capacity
  • Be able to adjust the MTU on an interface

Tutorial

A. Slice Creation and Instrumentation

This module assumes you have an active slice with two connected nodes and SSH terminals open to both nodes. If you don't, follow the steps in the GENI Setup and Instrumentation modules, then continue here. It is also assumed that the IP addresses and hostnames are assigned as described in the GENI Setup module (client=10.1.1.1, server=10.1.1.2). If this is not the case, substitute the correct IP addresses and hostnames in the commands below.

B. Video

If you haven't already, watch the video above. It will walk you through the steps of the module.

C. Adjust The MTU

If both your nodes are within the same aggregate, skip this section and continue to section D.

1. On the server SSH terminal, type:

ifconfig

to list the node's network interfaces and the attributes and status of each. Make note of the name of the interface that is assigned the 10.1.1.2 address (e.g. "gre2").

2. A GRE tunnel is used to connect nodes from different aggregates. When a GRE tunnel is used, it is necessary to adjust the Maximum Transmission Unit (MTU) size for the interfaces. This can be done by issuing the following command:

sudo ifconfig <interface name> mtu 1400

where <interface name> is replaced with the name of the interface noted in the previous step (e.g. gre2). The MTU size for the interface is now compatible with the GRE link between the two nodes.
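For reference, the reason a smaller MTU is needed: GRE encapsulation adds its own headers inside the physical link's 1500-byte MTU. With a basic GRE tunnel over IPv4, that overhead is a 20-byte outer IP header plus a 4-byte GRE header, leaving at most 1476 bytes for the inner packet (optional GRE fields such as keys and checksums shrink this further, which is why a conservative value like 1400 is used). A quick sanity check of that arithmetic:

```shell
# Largest inner packet that fits a 1500-byte physical MTU through a basic
# GRE-over-IPv4 tunnel: 20-byte outer IPv4 header + 4-byte GRE header.
awk 'BEGIN { print 1500 - 20 - 4 }'   # prints 1476
```
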

3. Repeat steps 1 and 2 on the client SSH terminal; note that the interface names may differ between the two nodes.

D. iperf in one direction

iperf is a tool for measuring achievable TCP and UDP throughput. In this section, we establish an iperf flow from the client to the server.

1. If you have not already done so, open the totaltraffic graphs in GENI Desktop on the client and server (see section E of the Instrumentation module for information on opening graphs).

2. In the server SSH terminal, start the iperf server by typing:

iperf -s &

The node can now receive iperf traffic. The ampersand (&) allows the command to run in the background while you use the console to enter more commands.
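If you later need to stop a backgrounded command, the shell gives you its process ID. The following sketch uses sleep as a self-contained stand-in for a long-running command such as iperf -s:

```shell
# Stand-in for a long-running command such as 'iperf -s' (sleep is used
# here only so the example is self-contained).
sleep 30 &

# $! expands to the PID of the most recent background job.
echo "background PID: $!"

# When you are finished, stop the background process by PID.
kill $!
```

You can also stop the most recent background job with `kill %1`, or list all background jobs with `jobs`.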

3. In the client SSH terminal, start the iperf client by typing:

iperf -c server -i 10 -t 180 &

This command opens a TCP connection to the iperf server on the server node and begins sending packets. The "-c server" option tells iperf to run in client mode and connect to the host named "server". If you are using different hostnames, replace "server" with the hostname or IP address of your server node. The "-i 10" option tells iperf to print updates every 10 seconds, and "-t 180" tells the client to run for 180 seconds before closing the connection.

4. Watch the network activity graphs in GENI Desktop. You should see the traffic ramp up to a maximum and then oscillate, creating a sawtooth pattern.

E. iperf in the other direction

While the first iperf flow is still running, start a second iperf flow from the server node back to the client node. In this reverse direction, the client node runs the iperf server and the server node acts as the iperf client.

1. In the client SSH terminal, type:

iperf -s &

2. In the server SSH terminal, type:

iperf -c client -i 10 -t 180 &

3. Again, watch the network activity graphs in GENI Desktop. You should see the initial flow pull back as it shares the link bandwidth with the second flow.

4. Once the initial flow stops, you should see the second flow grow to utilize the available bandwidth. The following is an example output from the iperf server.

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.1.1.2 port 5001 connected with 10.1.1.1 port 52327
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.3 sec  93.4 MBytes  76.1 Mbits/sec
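The Bandwidth column can be reproduced from the Transfer and Interval columns. Note that iperf reports transfer in binary megabytes (2^20 bytes) but bandwidth in decimal megabits (10^6 bits), so the conversion is not a simple factor of 8:

```shell
# 93.4 MBytes over 10.3 seconds, converting binary MBytes (2^20 bytes)
# to decimal Mbits (10^6 bits): bytes * 8 = bits.
awk 'BEGIN { printf "%.1f Mbits/sec\n", 93.4 * 1048576 * 8 / 10.3 / 1e6 }'
# prints 76.1 Mbits/sec
```
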

Shutdown

Upon completion of the module please delete your slice's resources as described in the Shutdown module.