[Nrg-l] NRG Next Wednesday

Georgios Smaragdakis gsmaragd at cs.bu.edu
Fri Oct 1 19:51:52 EDT 2004

As there is no Colloquium next week, NRG will take
place on Wednesday. I will present the paper "Sizing
Router Buffers," which appeared in SIGCOMM 2004, and lead
a discussion of possible extensions.

Date: Wednesday October 6
Time: 3:00pm
Place: Grad Lounge

Paper info:

"Sizing Router Buffers"
by Guido Appenzeller, Isaac Keslassy, Nick McKeown 
Stanford University

All Internet routers contain buffers to hold packets during times of
congestion. Today, the size of the buffers is determined by the dynamics
of TCP's congestion control algorithm. In particular, the goal is to make
sure that when a link is congested, it is busy 100% of the time, which is
equivalent to making sure its buffer never goes empty. A widely used
rule-of-thumb states that each link needs a buffer of size B = RTT X C,
where RTT is the average round-trip time of a flow passing across the
link, and C is the data rate of the link. For example, a 10Gb/s router
linecard needs approximately 250ms X 10Gb/s = 2.5Gbits of buffers; and the
amount of buffering grows linearly with the line-rate. Such large buffers
are challenging for router manufacturers, who must use large, slow,
off-chip DRAMs. And queueing delays can be long, have high variance, and
may destabilize the congestion control algorithms.

In this paper we argue that the rule-of-thumb B = RTT X C is now outdated
and incorrect for backbone routers. This is because of the large number of
flows (TCP connections) multiplexed together on a single backbone
link. Using theory, simulation and experiments on a network of real
routers, we show that a link with n flows requires no more than B =(RTT X
C) / sqrt(n), for long-lived or short-lived TCP flows. The consequences for
router design are enormous: A 2.5Gb/s link carrying 10,000 flows could
reduce its buffers by 99% with negligible difference in throughput; and a
10Gb/s link carrying 50,000 flows requires only 10Mbits of buffering,
which can easily be implemented using fast, on-chip SRAM.
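The arithmetic behind the abstract's examples is easy to check. Below is a minimal sketch of the two sizing rules (the helper `buffer_bits` is illustrative, not from the paper): with n = 1 it reproduces the rule-of-thumb B = RTT x C, and with n flows it applies the revised rule B = (RTT x C) / sqrt(n).

```python
import math

def buffer_bits(rtt_s, rate_bps, n_flows=1):
    """Buffer size in bits per the rule B = (RTT * C) / sqrt(n).

    rtt_s    -- average round-trip time in seconds
    rate_bps -- link data rate in bits per second
    n_flows  -- number of multiplexed TCP flows (1 gives the old rule-of-thumb)
    """
    return rtt_s * rate_bps / math.sqrt(n_flows)

# Rule-of-thumb: 250 ms x 10 Gb/s = 2.5 Gbits of buffering
old_rule = buffer_bits(0.250, 10e9)

# Revised rule: a 10 Gb/s link with 50,000 flows needs only ~11 Mbits
new_rule = buffer_bits(0.250, 10e9, 50_000)

# A link carrying 10,000 flows keeps only 1/sqrt(10000) = 1% of the
# old buffer, i.e. a 99% reduction
reduction = 1 - 1 / math.sqrt(10_000)
```

Running this gives old_rule = 2.5e9 bits (2.5 Gbits), new_rule of roughly 11 Mbits (the abstract rounds to 10 Mbits), and a 99% reduction, matching the figures quoted above.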
