This paper attempts to resolve the complexity issues involved in implementing Fair Queueing inexpensively. In retrospect, "Analysis and Simulation of a Fair Queueing Algorithm" didn't actually make it clear that FQ had such issues, so this paper is a nice continuation. Stoica et al. propose a form of decentralization: push certain complexities out to the edges of the network, which in turn yields better overall performance while still being fair. (This seems to be a design pattern in systems ... not sure exactly which one it would be, though.)
The authors call their system core-stateless fair queueing because per-flow state is maintained only at the edges of network "islands"; the internal (core) routers just use an augmented FIFO policy for forwarding. The augmentation inspects packet labels added by the edge routers, which help determine which packets should be dropped in order to achieve fairness.
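A minimal sketch of that core-router drop decision: in CSFQ, edge routers label each packet with an estimate of its flow's arrival rate, and a core router drops packets probabilistically so that flows labeled above the current fair-share estimate are thinned back toward it. The function names and parameters below are my own illustration, not the paper's code, and I'm eliding how the fair share itself is estimated.

```python
import random

def drop_probability(label_rate, fair_share):
    """CSFQ-style drop probability at a core router.

    label_rate: the flow's arrival-rate estimate, stamped on the
                packet by an edge router (no per-flow state here).
    fair_share: the core router's current fair-share rate estimate
                (its computation is elided in this sketch).
    Flows at or below the fair share are never dropped; a flow
    sending at rate r > fair_share is dropped with probability
    1 - fair_share / r, so its accepted rate approaches fair_share.
    """
    if label_rate <= fair_share:
        return 0.0
    return 1.0 - fair_share / label_rate

def forward(label_rate, fair_share, rng=random.random):
    """Return True to forward the packet, False to drop it."""
    return rng() >= drop_probability(label_rate, fair_share)
```

Note that this keeps the core router stateless with respect to flows: everything it needs is on the packet (the label) or is an aggregate local estimate (the fair share).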
Of the entire paper, I enjoyed Section 4 (Why Are Fair Allocations Important?) the most. As many of my other posts have mentioned, the efficacy of many of these algorithms has in some way relied on cooperation! As the authors go on to discuss, "it seems unsound to base the Internet's congestion control paradigm on the assumption of universal cooperation". YET, they go on to show how, at least for the allocation approach, the algorithms provide a strong incentive for users of the system to respond to congestion! This is a cool result to me ... of course, this holds for drop-intolerant applications. To provide incentives for drop-tolerant flows, it may also be necessary to use some sort of punishment scheme.