Friday, August 22, 2008

New Algorithm Offers Hope for Old Routers

A team of computer scientists has proposed a new algorithm that makes routers operate more efficiently by automatically limiting the number of route updates, known as link-state updates, that they receive.
The algorithm could be important in large heterogeneous corporate networks, where the oldest, slowest routers make all the others wait while they absorb updates and recalculate their path tables. The Approximate Link State (XL) algorithm suppresses the updates so that only those routers that are directly affected receive them, says Professor Stefan Savage, who developed the algorithm along with three other computer scientists at the University of California, San Diego. Savage presented a paper about XL this week at the conference of the Association for Computing Machinery's Special Interest Group on Data Communication (SIGCOMM).
Without XL, routers typically flood the network with route updates, with every router receiving every update. In very large networks, the sheer number of routers and the inevitable link-state changes can episodically grind routers to a halt.
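A rough Python sketch of that conventional flooding behavior, with names and structures that are purely illustrative rather than taken from the paper, looks like this:

```python
class FloodingRouter:
    """Minimal illustration of conventional link-state flooding:
    every update eventually reaches every router."""

    def __init__(self, name):
        self.name = name
        self.neighbors = []   # list of FloodingRouter objects
        self.seen = set()     # (origin, seq) pairs already processed

    def receive(self, update, came_from=None):
        key = (update["origin"], update["seq"])
        if key in self.seen:          # duplicate: the flood already got here
            return
        self.seen.add(key)
        # ...recompute the routing table here (e.g., run Dijkstra)...
        for nbr in self.neighbors:    # re-flood to everyone else
            if nbr is not came_from:
                nbr.receive(update, came_from=self)
```

Wire up a handful of these routers and inject a single update, and every one of them ends up processing it -- exactly the per-update cost XL is designed to avoid.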
"Updates may only be relevant to very localized areas," Savage says. Using a map analogy to illustrate the point, he says that a driver on the East Coast doesn't care if Interstate 5 is flooded out in Portland, Ore.. "But metaphorically, we tell everyone that information in networking."
To deal with that problem, large networks are manually engineered to create areas -- conceptually isolated groups of routers -- that limit the number of routers any flood reaches. Routers still receive floods, but only from the routers within their areas.
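Continuing the hypothetical sketch above, area scoping amounts to one extra check in the re-flood loop. The `area` label here stands in for an operator-assigned OSPF or IS-IS area; area border routers, which summarize routes between areas, are omitted:

```python
class AreaRouter(FloodingRouter):
    """Flooding confined to a manually configured area (illustrative)."""

    def __init__(self, name, area):
        super().__init__(name)
        self.area = area   # operator-assigned area label

    def receive(self, update, came_from=None):
        key = (update["origin"], update["seq"])
        if key in self.seen:
            return
        self.seen.add(key)
        for nbr in self.neighbors:
            # Re-flood only to neighbors inside our own area.
            if nbr is not came_from and nbr.area == self.area:
                nbr.receive(update, came_from=self)
```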
Savage says XL can eliminate manual configuration of areas. Instead, each router automatically figures out which other routers it needs to pass updates along to, so that all destinations can still be reached and no loops occur that effectively black-hole packets.
The XL algorithm selectively withholds some updates, creating a trade-off. If a new link becomes available after a failure, the algorithm decides whether forwarding the information beyond a router's immediate neighbors would improve enough paths, and by a large enough margin, to warrant passing it along.
If not, the router suppresses the update by not forwarding it. The result is that updates are sent only to the immediate areas where the topology has changed, making their distribution less disruptive.
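The paper's actual decision rules are more involved (cost increases, for instance, are always propagated, as are updates to links a router itself uses); the sketch below only illustrates the cost-improvement test described above, with a hypothetical `epsilon` knob controlling how much improvement justifies forwarding:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path costs from source over an adjacency dict
    {node: {neighbor: cost}}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue   # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def should_forward(graph, link, new_cost, neighbor, epsilon=0.1):
    """Forward 'good news' about a link only if some path from the
    neighbor improves by more than a factor of (1 + epsilon). A
    simplified stand-in for XL's rules, not the paper's exact algorithm."""
    before = dijkstra(graph, neighbor)
    updated = {n: dict(adj) for n, adj in graph.items()}
    a, b = link
    updated.setdefault(a, {})[b] = new_cost
    updated.setdefault(b, {})[a] = new_cost   # assume undirected links
    after = dijkstra(updated, neighbor)
    return any(before.get(dest, float("inf")) > (1 + epsilon) * cost
               for dest, cost in after.items())
```

With epsilon set to zero, every strict improvement gets forwarded and the behavior degenerates toward ordinary flooding; larger values suppress more updates at the price of slightly longer paths.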
The Trade-off
That benefit is balanced against the fact that employing the algorithm leaves each router with less precise information about the actual state of the network.
Each router running XL would maintain data about each neighbor's shortest-path tree -- how that neighbor views the network -- and use it to determine whether to forward path updates. That would increase the amount of data routers keep, but Savage says his team believes the additional overhead is very small.
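One conceivable way to keep that per-neighbor state, reusing the hypothetical dijkstra() helper from the earlier sketch (the data layout is assumed, not taken from the paper):

```python
class XLRouterState:
    """Alongside its own topology view, the router caches each
    neighbor's shortest-path costs -- a stand-in for the neighbor's
    shortest-path tree. Illustrative structure only."""

    def __init__(self, name, graph):
        self.name = name
        self.graph = graph   # this router's (possibly approximate) topology view
        self.neighbor_costs = {
            nbr: dijkstra(graph, nbr)   # what each neighbor "sees"
            for nbr in graph.get(name, {})
        }

    def update_changes_neighbor_view(self, neighbor, link, new_cost):
        """Recompute the neighbor's costs with the update applied; if
        nothing changes, the update can be withheld from that neighbor.
        (A real router would do this incrementally, not from scratch.)"""
        updated = {n: dict(adj) for n, adj in self.graph.items()}
        a, b = link
        updated.setdefault(a, {})[b] = new_cost
        updated.setdefault(b, {})[a] = new_cost
        return dijkstra(updated, neighbor) != self.neighbor_costs[neighbor]
```

The extra storage is one cost table per neighbor, which is consistent with the team's claim that the added data is small.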
In big networks, overall performance is limited by the slowest router. "That's the router you're waiting for so the new network configuration can converge," Savage says.
Because buying cycles for routers may vary within very large networks, older, slower routers can have a big impact, he says. "Scalability may be limited by stuff you bought 10 years ago that you can't afford to replace yet," he says.
The algorithm is compatible with Intermediate System-to-Intermediate System (IS-IS) and Open Shortest Path First (OSPF) link-state routing, he says, which means a software upgrade containing the algorithm could be deployed incrementally and would interoperate with existing routing protocols. The goal in these networks would be to optimize paths based on a given parameter such as latency or bandwidth, he says.
Getting XL into practical use would require router makers to incorporate it into their software, Savage says. "It would need to be embraced by vendors. If Cisco picked it up it would have impact," he says.
He has already briefed Cisco, which helped fund his research through UCSD's Center for Networked Systems.
