COMP 249 Assignment 3 Report

Adrian Ilie

I implemented the receiver-side jitter buffer management using a modified queue monitoring scheme.

Because packets usually contain 3 or fewer frames, it made little sense to monitor the queue on a per-frame basis; doing so would only have spread the stages at which frames were dropped slightly wider. Instead, I monitored the queue only at the time a packet arrived, computed the number of frames a hypothetical playback thread would have consumed in the meantime, and did the frame dropping afterwards, if necessary.

In this scheme, a frame is "late" if the packet that contains it arrives to an empty queue, and "early" if the packet arrives to a non-empty queue. The mapping from RTP timestamp to current time is pushed back on every "late" frame. Because frames arrive in bursts (packets), only the first frame in a packet can ever be "late"; all the other frames in that packet are "early".

The queue I used was 10 frames long, and the thresholds were:

int thresholds[10] = {0,15,12,10,7,5,3,1,0,0};

For this configuration I got the following results:

Delay (ms)    Frames Dropped    Time Spent by Early Frames (ms)
    10             476                 33.26601
    20             480                 33.41153
    30             482                 33.42115
    40             473                 33.33454
    50             473                 33.10572
    60             485                 33.52618
    70             493                 34.27992
    80             474                 33.72213
    90             459                 32.98061
   100             481                 34.67387
   150             551                 51.82971
   200             231                 37.06997
   250             124                 28.02156

Table 1: Results for the requested parameters.

Represented graphically, these results look as follows:

Figure 1: Dropped Frames.

Figure 2: Average Time spent by Early Frames.

For delays between 10 and 100 ms, neither parameter varies much, and the number of dropped frames remains relatively high. For a few values above 100 ms, the number of dropped frames decreased dramatically, which indicates that the buffer was almost empty at all times.

There are several factors that lead to these results.

First, because the scheme pushes back the RTP-timestamp-to-current-time mapping at each "late" frame encountered, all the other frames received in the same packet become "early". This accounts for both parameters monitored in this report.

Also, because the server transmits a relatively uniform number of frames (usually 3) per packet, there is little variation in the queue's length between successive packet arrivals. Thus, the thresholds for any particular queue size are reached relatively often, which explains the relatively high number of dropped frames.

A 10-frame queue gives a 260 ms buffer when completely full, and this provides good jitter hiding for delays of less than half the interval at which the server sends data.


The segment of the source code that performs the counting is:

frameCount = specificHeader.numFrames;        /* how many frames arrived? */
framesPlayed = mseconds / msPerFrame;         /* frames that should have been played in this interval */
framesEarly = queueSize + frameCount - framesPlayed - 1; /* early frames, adjusted */
framesPlayed = (framesPlayed > queueSize) ? queueSize : framesPlayed; /* didn't play more than we had stored */
queueSize -= framesPlayed;                    /* subtract frames played */
queueSize += frameCount;                      /* add frames received */

if (framesEarly > 0)
{
    totalTime += (framesEarly - 1) * (framesEarly - 2) * msPerFrame / 2; /* time spent by early frames */
    totalFrames += framesEarly;               /* for the mean */
}

for (i = 1; i < 10; i++)                      /* monitor queue here */
{
    if (queueSize == i)
    {
        counts[i]++;
        if (counts[i] >= thresholds[i])       /* exceeded threshold */
        {
            queueSize--;                      /* drop a frame */
            counts[i] = 0;                    /* reset counter */
            framesDropped++;
        }
    }
}

if (queueSize >= 10)                          /* drop extra frames too */
{
    framesDropped += queueSize - 10;
    queueSize = 10;
}