To attack these problems, we propose an experimental approach that is non-adaptive and non-conservative. Instead of negotiating an acceptable window size over time based on feedback from the network, our approach allows sources to transmit at their maximum rate whenever they have data to send, unbridled by a conservative control algorithm. A source may even transmit a whole file in a single large burst; when it is notified of a packet loss, it starts a new burst beginning at the lost packet. The idea is to feed the network with enough data and let the network digest as much as it can. To prevent large bursts from occupying gateway buffers and blocking other connections, the packet streams from the traffic sources are prioritized in decreasing order, and the gateways discriminate against low-priority packets similarly to . This scheme distributes bottleneck bandwidth fairly among contending sources and gives short, interactive-type traffic priority over long bursts.
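The source behavior described above can be sketched as a simple loop: transmit the remainder of the file as one burst, and on the first loss notification begin a new burst at the lost packet. The sketch below is illustrative only; the names (`send_burst`, `Channel`) and the toy single-drop channel are our assumptions, not part of the original scheme's implementation.

```python
def send_burst(file_packets, channel):
    """Send packets until all are delivered; restart a burst at each loss."""
    start = 0
    bursts = 0
    while start < len(file_packets):
        bursts += 1
        loss = None
        for i in range(start, len(file_packets)):
            # Later packets in a burst carry lower priority, so gateways
            # can discriminate against long bursts (assumed convention).
            priority = -(i - start)
            if not channel.deliver(file_packets[i], priority):
                loss = i          # first loss ends this burst
                break
        if loss is None:
            break                 # the whole remainder was delivered
        start = loss              # new burst starts from the lost packet
    return bursts

class Channel:
    """Toy channel that drops one fixed packet exactly once."""
    def __init__(self, drop_index):
        self.drop_index = drop_index
        self.dropped = False
    def deliver(self, packet, priority):
        if packet == self.drop_index and not self.dropped:
            self.dropped = True
            return False
        return True

bursts = send_burst(list(range(10)), Channel(drop_index=4))  # two bursts: 0..3 then 4..9
```

Note that, unlike TCP's window negotiation, nothing here throttles the sender; fairness is left entirely to the priority-aware gateways.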
In our experiments, we compared the performance of our scheme (``Blitz'') with that of TCP Reno. In a simplified and generalized network model similar to the one analyzed in , we measured network power (throughput divided by delay). We observed that for an NSFNET-sized network (1.6 ms internodal distance) and an AURORA-testbed-sized network (1.8 ms internodal distance), at both a relatively slow link speed (150 Mb/s) and a fast one (1 Gb/s), Blitz far outperforms TCP. For instance, in a 150 Mb/s network with a 16 ms round-trip time, Blitz yields nearly 10 times the network power of TCP; in a 1 Gb/s network, the factor increases to 17. These figures are for a data size of 100 KB, but more extensive simulations have been done.
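For concreteness, network power is the ratio of throughput to delay, so a scheme that raises throughput while lowering delay multiplies power on both axes. The calculation below uses made-up numbers chosen only to illustrate how a factor-of-10 power gap can arise; they are not the measurements reported above.

```python
def network_power(throughput_bps, delay_s):
    """Network power: throughput divided by delay (Kleinrock's metric)."""
    return throughput_bps / delay_s

# Illustrative values only (assumed, not from our simulations):
# suppose on the same 150 Mb/s link one scheme sustains 4x the
# throughput at 0.4x the delay of the other.
tcp_power   = network_power(30e6,  0.050)   # 30 Mb/s at 50 ms
blitz_power = network_power(120e6, 0.020)   # 120 Mb/s at 20 ms
ratio = blitz_power / tcp_power             # 10.0
```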
The lessons we have learned are these. First, aggressiveness is the key to efficient utilization of bandwidth in large bandwidth-delay-product networks. Second, our approach scales better as bandwidth continues to increase. Third, the performance improvement from the aggressive approach grows with the bandwidth-delay product and shrinks as the average data size carried in the network grows. Since data size grows slowly while bandwidth increases rapidly, we consider the results sound.
Finally, we conclude that the prevailing model of congestion control must change if the orders-of-magnitude larger bandwidth of the future is to be exploited efficiently. Traffic sources should be allowed to be aggressive rather than conservative, and network gateways should be able to enforce fairness on abusive users.