Tuesday, September 2, 2008

End-to-End Arguments In System Design

The paper argues that system designers should not be overly aggressive in pushing functionality into the lower layers of a system, since that functionality may later prove too restrictive, unneeded, or, even worse, redundant. The big takeaway message for me seems to be a recurring theme in computer science ... high-level semantics are hard to predict and reason about, but often are the key thing to exploit when you want good performance. In this case, the canonical example is a telephony service, where the high-level semantics of the application PERMIT lost packets because the clients (in this case, humans) can effectively handle them by, for example, asking for something to be repeated. Clearly, knowing this can help optimize how you actually write your network service.
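To make the telephony example concrete, here is a minimal sketch (names and frame format are my own invention, not from the paper) of a receiver that exploits those high-level semantics: instead of stalling until every packet is retransmitted, it plays whatever arrived on time and conceals the gaps with silence, leaving recovery to the humans on the call.

```python
# Hypothetical sketch of a loss-tolerant telephony receiver.
# Rather than demanding TCP-style in-order reliability from the network,
# the application substitutes silence for missing frames and keeps going.

def playout(received, total_frames, conceal=b"\x00" * 4):
    """Assemble an audio stream from whatever frames arrived.

    received: dict mapping sequence number -> 4-byte audio frame
    total_frames: how many frames the sender emitted
    conceal: filler (silence) substituted for any missing frame
    """
    stream = []
    lost = 0
    for seq in range(total_frames):
        frame = received.get(seq)
        if frame is None:
            lost += 1
            frame = conceal  # conceal the gap rather than stall for a retransmit
        stream.append(frame)
    return b"".join(stream), lost

# Frames 2 and 5 were dropped by the network; playback proceeds anyway.
audio, lost = playout({0: b"aaaa", 1: b"bbbb", 3: b"dddd", 4: b"eeee"}, 6)
```

The point of the sketch is that the "right" response to loss lives at the endpoint: only the application knows that a few milliseconds of silence is an acceptable substitute for a retransmission delay.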

Another aspect of this paper that I wanted to emphasize is the authors' statement that the "reliability measures within the data communication system is seen to be an engineering tradeoff based on performance, rather than a requirement for correctness". Typically, I think we have layered our applications on top of lower-level protocols (like TCP) because they provide a level of correctness we don't have to worry about. It is rather interesting that these authors believe the original intention of "reliability" was actually to benefit the performance of the application rather than its correctness ... in my experience, applications get better (best?) performance when they can fine-tune everything to their needs, including their own "reliability" semantics. 
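One way to picture application-chosen reliability semantics is a sender that decides per message how hard to try, instead of inheriting a blanket retransmit-everything policy from the transport. The sketch below is entirely my own illustration (the `send`/`channel` names are invented, not an API from the paper): critical messages get retries, best-effort ones get a single attempt.

```python
# Hypothetical sketch: reliability as an application-level tradeoff.
# Critical messages are retried; best-effort messages are sent once and
# allowed to be lost -- a policy only the application can choose sensibly.

def send(messages, channel, max_retries=3):
    """messages: list of (payload, critical) pairs.
    channel: callable taking a payload, returning True on successful delivery.
    Returns the payloads known to be delivered."""
    delivered = []
    for payload, critical in messages:
        tries = 1 + (max_retries if critical else 0)
        for _ in range(tries):
            if channel(payload):
                delivered.append(payload)
                break  # stop retrying once the endpoint acknowledges
    return delivered

# A lossy channel that drops every other delivery attempt.
attempts = {"n": 0}
def lossy(payload):
    attempts["n"] += 1
    return attempts["n"] % 2 == 0

# The critical "config" message survives its first drop; the best-effort
# audio frames are simply lost, which this application can tolerate.
got = send([("frame-1", False), ("config", True), ("frame-2", False)], lossy)
```

This is the fine-tuning the post alludes to: the retry budget, like the concealment strategy in telephony, is a performance knob the endpoint turns, not a correctness guarantee the network owes you.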
