OC3MON
This document collects information pertaining to the OC3 monitor (OC3MON)
project, used to measure and analyze Internet flows.
John Bass (jbass@mcnc.org) at NCSU is
currently coordinating the OC3 monitor project for the NC Giganet.
Useful Links
NC Giganet Flows Analysis
The main vBNS OC3Mon/Coral page
the Coral project
details about the OC3Mon implementation
links to published papers from vBNS engineering
How to grab meaningful data from NC Giganet sites
From the initial web page for the "Flows Analysis" (
http://www.vitalnet.org/nc_giganet_flows/), first choose a network (typically nc_giganet).
On the next page, choose the OC3MON you are interested in getting
information from. There are currently (5/14/98) two OC3MONs at each
of the points in the NC Giganet mesh (NCSU, UNC, MCNC, Duke) for the vBNS.
Also, choose the date that you are interested in. For multiple-day
time-series data, choose the last day you are interested in.
This brings you to what we refer to as Page 1, which allows you
to choose a time interval, and a type of data to examine.
Single-interval Data
- The data reported in one OC3MON report may be viewed as raw ASCII
by choosing the Single interval data in raw ascii choice on Page 1.
Alternatively, by choosing the Single interval formatted data option
from Page 1, the same raw data can be sorted by flows, packets, bytes, or
flow durations.
Packet Length Histogram
Byte Volume Graphs
- Byte volume graphs can be produced by first choosing the
Time-series plots option on Page 1, and the overall->kbits/s
option on Page 2.
If you specify a number of days to Go back on Page 2, then
the data is graphed over that many days from the date and interval
previously specified.
Packet Volume Graphs
- Packet volume graphs can be produced by first choosing the
Time-series plots option on Page 1, and then the
overall->packets/sec option on Page 2.
If you specify a number of days to Go back on Page 2, then
the data is graphed over that many days from the date and interval
previously specified.
Flow Volume Graphs
- Flow volume graphs can be produced by first choosing the
Time-series plots option on Page 1, and then the
overall->known flows option on Page 2.
If you specify a number of days to Go back on Page 2, then
the data is graphed over that many days from the date and interval
previously specified.
Average Packet Size Graphs
- Average packet size graphs can be produced by first choosing the
Time-series plots option on Page 1, and then the
overall->ave packet size option on Page 2.
If you specify a number of days to Go back on Page 2, then
the data is graphed over that many days from the date and interval
previously specified.
Aggregate Traffic Graphs (by protocol or application)
- Aggregate traffic can be graphed by first choosing the
Time-series plots option on Page 1, and then the
raw packet/byte/flow counts option on Page 2.
Choose the IP protocol or application (IP protocol/port pair) in
the appropriate fields on Page 2.
If you specify a number of days to Go back on Page 2, then
the data is graphed over that many days from the date and interval
previously specified.
Fraction of Traffic Graphs (by protocol or application)
- Fraction of traffic graphs can be graphed by first choosing the
Time-series plots option on Page 1, and then the
proportion of total traffic option on Page 2.
Choose the IP protocol or application (IP protocol/port pair) in
the appropriate fields on Page 2.
If you specify a number of days to Go back on Page 2, then
the data is graphed over that many days from the date and interval
previously specified.
Average Traffic per Flow Graphs (by protocol or application)
- Average traffic per flow can be graphed by first choosing the
Time-series plots option on Page 1, and then the
raw packet/byte/flow counts option along with the
per flow avgs only option on Page 2.
Choose the IP protocol or application (IP protocol/port pair) in
the appropriate fields on Page 2.
If you specify a number of days to Go back on Page 2, then
the data is graphed over that many days from the date and interval
previously specified.
Traffic Composition Data (by protocol or application)
Monitoring Collection Loss
- Information about the collection loss rate (measured by flow
allocation errors in the OC3MON) can be produced by first choosing the
Time-series plots option on Page 1, and the
overall->collection loss option on Page 2.
Data can be graphed in multiples of days backwards from the date and
interval specified at or before Page 1.
What we have access to...
We have an account on loopy.ncren.net (152.1.213.81),
the machine hosting the www.vitalnet.org website for the
NC Giganet Flows Analysis. Don Smith knows the username and password.
We can update the currently installed PERL scripts for the Web interface.
We can also write new scripts for the Web interface. However, since
loopy is not the actual machine that executes the scripts, and
the organization of the vitalnet site is a bit of a hack, we must work
with John Bass (jbass@mcnc.org)
to activate the new scripts.
We hope to soon have an OC3MON (or variant) monitoring the UNC
link to the Internet 1.
What it all looks like...
loopy.ncren.net is running SunOS 5.5.1 (System V Unix) and an
Apache web server which hosts several virtual domains. Also, loopy
runs a cron job to gather information from all eight NC Giganet OC3MONs
every five minutes.
This is a list of the important directories:
/dept/www/
- loopy's virtual websites are all under this directory.
/dept/flow_stats/bin/
- This directory contains the PERL code for the web interface.
/nc_giganet_data/nc_giganet/
- All giganet data is stored under this directory.
- There is a directory for every OC3MON in the NC Giganet.
/nc_giganet_data/nc_giganet/{ unc1 | unc2 | ncsu1 | etc }/
- Each OC3MON directory contains a directory for every day that
data was collected. May 12, 1998, for example, is stored under the
directory named 980512.
/nc_giganet_data/nc_giganet/{oc3mon}/{date}/
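For scripts that walk this tree, the day-directory naming convention can be reproduced as follows (a minimal sketch in Python rather than the site's PERL; the data_dir function name is ours, not part of the installed scripts):

```python
from datetime import date

# Data root as described above
DATA_ROOT = "/nc_giganet_data/nc_giganet"

def data_dir(monitor, day):
    """Build the directory path for one OC3MON's data on a given day.

    Day directories use a two-digit-year YYMMDD convention;
    May 12, 1998, for example, is stored under 980512.
    """
    return "%s/%s/%s" % (DATA_ROOT, monitor, day.strftime("%y%m%d"))

# Example: the unc1 monitor's data for May 12, 1998
print(data_dir("unc1", date(1998, 5, 12)))
# -> /nc_giganet_data/nc_giganet/unc1/980512
```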
Digging through the PERL code...
First, a disclaimer: the current PERL scripts are basically a hack
that evolved from MCI's vBNS project. They provide the services
described above, but are not designed in a modular or very extensible fashion.
There are two files which provide all the services: fwsummarize.pl,
which handles the collection and display of all single-interval data,
and reports.pl, which handles the collection of multiple-interval
data, including time-series plots.
For future work on these scripts, look to reports.pl to see the
routines that crunch data for the graphing utilities. In particular,
the following routines are especially meaningful:
sub init_and_parse
- Information is passed to the cgi script as form data that appears
on the URL line itself. This routine parses all the important
fields out of the URL line and sets global variables.
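The equivalent of that parsing step can be sketched in Python (this is not the actual Perl code, and the field names below are illustrative only):

```python
from urllib.parse import parse_qs

def init_and_parse(query_string):
    # Parse the form fields out of the URL's query string and
    # flatten single-valued fields, roughly as the Perl routine
    # does before stashing the values in globals.
    fields = parse_qs(query_string)
    return {k: v[0] if len(v) == 1 else v for k, v in fields.items()}

# Illustrative field names only -- not necessarily the script's real ones.
opts = init_and_parse("network=nc_giganet&monitor=unc1&date=980512")
print(opts["monitor"])  # -> unc1
```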
sub process_options
- This is the main routine to direct traffic throughout the script.
Basically, it just calls get_file_list, crunch_numbers,
and print_report.
sub get_file_list
- Based on the nature of the requested report, this routine gathers
the filenames of the raw data files that need to be opened and processed.
In the event of a multiple-day request, this routine calls
get_mult_days. Note the obvious hacks in get_mult_days and
probe_dir.
sub crunch_numbers
- This routine opens each of the files gathered in get_file_list
and reads pertinent information into global variables.
sub print_report
- All the "real" work gets done here, as the graph is produced.
print_report calls routines to generate the .gif file of the
graph. These routines consult global variables that record which
options were selected on the web page, and they use the data
stored by crunch_numbers to create the graphs on the fly.
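Taken together, the control flow above amounts to the following (a Python sketch, not the actual Perl; the function bodies and the flows.txt filename are stand-ins for what the real routines do):

```python
def process_options(request):
    """Direct traffic, as the Perl process_options does:
    gather filenames, read them, then render the report."""
    files = get_file_list(request)
    data = crunch_numbers(files)
    return print_report(request, data)

def get_file_list(request):
    # Stand-in: would map the requested report onto the raw data
    # files under /nc_giganet_data/nc_giganet/{oc3mon}/{date}/.
    # "flows.txt" is a hypothetical filename.
    return ["/nc_giganet_data/nc_giganet/%s/%s/flows.txt"
            % (request["monitor"], request["date"])]

def crunch_numbers(files):
    # Stand-in: would open each file and read pertinent
    # information into (here, return) shared state.
    return {"files_read": len(files)}

def print_report(request, data):
    # Stand-in: would generate the .gif of the graph on the fly.
    return "report for %s: %d file(s)" % (request["monitor"],
                                          data["files_read"])

print(process_options({"monitor": "unc1", "date": "980512"}))
# -> report for unc1: 1 file(s)
```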
Other DiRT documents
Author: Jan Borgersen
Last updated: May 14, 1998