COMP 249 Assignment 1 Report

Adrian Ilie

Part 1:

For the TCP client, the plot of cumulative bits read vs. time is as follows:

Graph 1: Bits received by the TCP client.

For the first 5 seconds, the actual and ideal curves are almost identical, because there is no competing traffic and the TCP window can grow as large as it needs to. When the UDP client is started at second 5, the actual number of bits received begins to fall behind, as the UDP packets take a share of the bandwidth: TCP's congestion control reduces the window size, which lowers the rate at which bits are received. After the UDP client is stopped, TCP increases the window size again and delivers the packets still waiting in the server's send buffer, since data is retransmitted until it is acknowledged. Eventually, the number of bits received catches up with the ideal curve and then trails it closely, as TCP keeps adjusting the window size to send bits at the requested rate.
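
A minimal sketch of how such a measurement could be collected is shown below. This is not the client used for the assignment; the host, port, sampling window, and buffer size are placeholder values chosen for illustration.

    # Sketch of a TCP receiver that records cumulative bits read vs. elapsed time.
    # HOST, PORT and DURATION are hypothetical; adjust them to the actual server.
    import socket
    import time

    HOST, PORT = "localhost", 9000   # placeholder server address
    DURATION = 15                    # seconds to sample, matching the plot window

    def measure_tcp(host=HOST, port=PORT, duration=DURATION):
        samples = []                 # (elapsed_seconds, cumulative_bits)
        total_bits = 0
        with socket.create_connection((host, port)) as sock:
            start = time.monotonic()
            while time.monotonic() - start < duration:
                data = sock.recv(4096)
                if not data:         # server closed the connection
                    break
                total_bits += len(data) * 8
                samples.append((time.monotonic() - start, total_bits))
        return samples

    if __name__ == "__main__":
        for t, bits in measure_tcp():
            print(f"{t:.3f}\t{bits}")

Each sample pair can then be plotted directly to produce the actual curve, with the ideal curve given by the requested bitrate multiplied by elapsed time.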

Part 2a:

For the UDP client, the plot of cumulative bits read vs. time is as follows:

Graph 2: Bits received by the UDP client.

For the first 5 seconds, the actual number of bits received trails the ideal number slightly. After the second UDP client is started, the bandwidth available to the first client diminishes, and so does the bitrate at which its bits arrive. After the second client is stopped, the bitrate recovers, but unlike in the TCP case the total number of bits received never catches up with the ideal: UDP does not hold on to packets until their delivery is confirmed, so packets dropped during the congested period are lost for good. The gap between the two curves therefore represents the data carried by packets that were never received.
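
The cumulative curve could be gathered in much the same way as for TCP; a minimal sketch is given below, assuming the server streams datagrams to a known local port. The port, requested bitrate, and sampling window are illustrative placeholders rather than the assignment's actual values.

    # Sketch of a UDP receiver that records cumulative bits received, so the gap
    # against the ideal curve (rate * elapsed time) can be read off as lost traffic.
    import socket
    import time

    PORT = 9001                      # hypothetical port the server sends to
    RATE_BPS = 1_000_000             # hypothetical requested bitrate (ideal curve)
    DURATION = 15                    # seconds to sample

    def measure_udp(port=PORT, duration=DURATION):
        samples = []                 # (elapsed, actual_bits, ideal_bits)
        total_bits = 0
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", port))
        sock.settimeout(0.5)         # don't block forever if packets stop arriving
        start = time.monotonic()
        while (elapsed := time.monotonic() - start) < duration:
            try:
                data, _ = sock.recvfrom(65535)
                total_bits += len(data) * 8
            except socket.timeout:
                pass                 # lost or delayed packets simply never add bits
            samples.append((elapsed, total_bits, int(RATE_BPS * elapsed)))
        sock.close()
        return samples

Because lost datagrams never contribute to total_bits, the actual curve produced by this sketch stays below the ideal curve by exactly the amount of lost data.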

Part 2b:

For this part, I plotted both the number of packets received and the ideal number of packets vs. time, as well as the packet loss:

Graph 3: Packets sent and received by the UDP client.

Graph 4: Packet loss for the UDP client.

A greater number of small packets is sent, but both graphs are noisy, and small packets tend to be delivered less reliably. For packets larger than 2100 bits, there is a dramatic decrease in packet loss. These variations can be attributed to the mismatch between the UDP packet sizes and the sizes of the underlying IP packets. Since IP transmits datagrams of at least a certain size, small UDP packets are carried in separate IP packets padded with useless data, wasting network resources and reducing the number of packets received on the client side. Once the UDP packets reach a certain size, the fraction of useless data in each IP datagram shrinks; in addition, fewer packets need to be transmitted for the same throughput, which decreases the packet loss.
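
As an illustration of how the loss figures could be computed, the sketch below assumes each datagram's payload begins with a 4-byte sequence number; this packet format is an assumption for the example, not necessarily the one used in the assignment.

    # Sketch of a packet-loss calculation from received payloads, assuming the
    # first 4 bytes of each payload are a big-endian sequence number (hypothetical
    # format) and that the sender reports how many packets it transmitted.
    import struct

    def packet_loss(payloads, packets_sent):
        """Return the fraction of packets lost."""
        seen = set()
        for payload in payloads:
            (seq,) = struct.unpack("!I", payload[:4])
            seen.add(seq)            # duplicates are counted only once
        return (packets_sent - len(seen)) / packets_sent

    # Example: 940 distinct packets received out of 1000 sent -> 6% loss
    # payloads = [struct.pack("!I", i) for i in range(940)]
    # print(packet_loss(payloads, 1000))  # 0.06

Computing this ratio separately for each packet size tested gives the loss curve shown in Graph 4.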