Monday, September 22, 2008

More Congestion Avoidance (High-Bandwidth, High-Latency Networks) and Future Internet Design Issues (Real-Time Applications)

I've been absent for the past week (recovering from a concussion), so those of you reading in the blogosphere will find a rather condensed version of my thoughts on the following papers:
  • Random Early Detection Gateways for Congestion Avoidance
  • Congestion Control for High Bandwidth-Delay Product Networks
  • Fundamental Design Issues for the Future Internet
  • Supporting Real-Time Applications in an Integrated Services Packet Network: Architecture and Mechanism
As the title suggests, these papers address two distinct topics: congestion avoidance/control and future Internet design considerations (especially for latency-intolerant applications).

The former, congestion avoidance and control, has been discussed in great detail in some previous posts, and these authors found yet more issues with existing techniques. Random Early Detection (RED) gateways were found to be effective at congestion avoidance (if used with cooperating end-nodes) by doing something rather remarkable: they choose a connection to "throttle down" with probability equal to that connection's share of the bandwidth! (Note that this still avoids biasing against bursty traffic.) I find this to be a nice little result ...
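To make that result concrete, here is a minimal sketch of RED's marking logic (the class and parameter names are my own; the default thresholds are illustrative, not the paper's recommended settings). Because the marking decision is made per arriving packet, a connection sending a larger share of packets gets marked proportionally more often, and the moving average is what keeps short bursts from being penalized:

```python
import random

class REDQueue:
    """Illustrative RED gateway: mark packets probabilistically as the
    *average* queue length climbs between two thresholds."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th = min_th    # below this average queue size, never mark
        self.max_th = max_th    # above this average queue size, always mark
        self.max_p = max_p      # marking probability as avg approaches max_th
        self.weight = weight    # EWMA weight on the instantaneous queue size
        self.avg = 0.0
        self.count = 0          # packets since the last mark

    def on_packet_arrival(self, queue_len):
        """Return True if this arriving packet should be marked/dropped."""
        # The exponentially weighted moving average filters out short
        # bursts, which is why RED avoids biasing against bursty traffic.
        self.avg = (1 - self.weight) * self.avg + self.weight * queue_len
        if self.avg < self.min_th:
            self.count = 0
            return False
        if self.avg >= self.max_th:
            self.count = 0
            return True
        # Marking probability grows linearly between the two thresholds ...
        pb = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        # ... and is scaled up with the number of unmarked arrivals, which
        # spaces marks roughly uniformly over time.
        self.count += 1
        pa = pb / max(1 - self.count * pb, 1e-9)
        if random.random() < min(pa, 1.0):
            self.count = 0
            return True
        return False
```

An uncongested gateway (average queue below `min_th`) marks nothing, so well-behaved flows pay nothing until the queue actually starts to build.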

Of course, if Fair Queuing wasn't good enough and Core-Stateless Fair Queuing wasn't good enough, then there is no reason to believe Random Early Detection should be good enough. And, for the most part, it isn't. So Katabi, Handley, and Rohrs proposed XCP (the eXplicit Control Protocol). Briefly, they ignore backward compatibility and pursue a control-theoretic design. For one, this means they decouple fairness control from efficiency control and provide explicit congestion feedback (versus TCP, which has no explicit notification and instead treats a timeout as an implied lost packet, i.e. congestion, without knowing whether the problem is one of fairness or efficiency).
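The decoupling is easiest to see in XCP's efficiency controller, sketched below. The stability constants are the ones derived in the paper; the function and variable names are my own. The router computes one aggregate feedback value from spare bandwidth and queue size, and a separate fairness controller (not shown) then divides that aggregate among flows AIMD-style, increasing all flows equally when feedback is positive and decreasing each in proportion to its throughput when negative:

```python
# Stability constants from Katabi et al.'s control-theoretic analysis.
ALPHA, BETA = 0.4, 0.226

def aggregate_feedback(capacity, input_rate, queue_bytes, avg_rtt):
    """Total rate change (in bytes/s, given rates in bytes/s and RTT in
    seconds) the router asks for over the next control interval: enough to
    absorb the spare bandwidth S and drain the persistent queue Q."""
    spare = capacity - input_rate            # S: unused capacity
    return ALPHA * spare - BETA * queue_bytes / avg_rtt
```

When the link is underutilized the feedback is positive (senders may speed up); when the input rate exceeds capacity, or a standing queue has built up, it goes negative (senders must slow down). Who speeds up or slows down, and by how much, is entirely the fairness controller's business.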

The work of Katabi et al. is especially interesting because it shows how poorly TCP scales as bandwidth and latency increase. The authors use satellite links as the canonical example; however, I'm curious whether we have actually approached similar limitations with the optical networks of today ... (in other words, how bad is TCP really? how bad is slow-start really?).
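As a back-of-envelope answer to "how bad is TCP really?", the well-known square-root throughput approximation (rate ≈ (MSS/RTT) · C/√p, with C ≈ 1.22) can be inverted to ask what loss rate a path can tolerate while TCP still fills it. This calculation is my own illustration, not a result from the XCP paper:

```python
from math import sqrt

def loss_rate_needed(rate_bps, rtt_s, mss_bytes=1460, c=1.22):
    """Packet loss probability p that a single TCP flow can tolerate
    while still sustaining rate_bps over a path with RTT rtt_s, per the
    square-root throughput approximation."""
    mss_bits = mss_bytes * 8
    return (c * mss_bits / (rtt_s * rate_bps)) ** 2

# A 10 Gb/s path with a 100 ms RTT tolerates only p on the order of 1e-10,
# i.e. roughly one loss per several billion packets:
p = loss_rate_needed(10e9, 0.1)
```

So even without satellites, a long fat optical pipe demands an almost implausibly clean path before a single TCP flow can use it, which is exactly the scaling problem XCP is aimed at.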

The subsequent topic was especially motivated by real-time applications. Such latency-intolerant applications were not actually new (people had been doing speech processing over the Internet since the 1980s ... before I was born!), but they had never accounted for much of the traffic in networks.

In "Fundamental Design Issues ...", Shenker discusses the need to extend the current architecture to include a new "service model". That way, flows with special needs (e.g., real-time applications) can be treated differently from latency-tolerant flows (e.g., file transfers). He discusses a host of issues, but I found the discussion of how to "choose a service" most interesting. He expands on the problems with implicitly supplied services (the gateway chooses the service) and touches on the biggest problem (in my mind) with explicitly supplied services (the application chooses the service): abuse (applications might lie about their actual needs if doing so gets them better network service). Much to my enjoyment, he proceeds to discuss "incentives" and how they might quell the issue (which turns into a discussion of pricing, etc.).

I'm not sure that we have really done much of anything for real-time applications (at least not in hardware; we have clearly made improvements in software), so perhaps it is still a valid issue today.
