Digital Government Fellows


[This is the text of an article originally published in the September 2001 issue of Computing Research News.]

Evolution of Next-Generation Internet Services and Applications
By Kevin C. Almeroth

Kevin Almeroth (almeroth@cs.ucsb.edu) is an Assistant Professor of Computer Science at the University of California, Santa Barbara. Dr. Almeroth received his Ph.D. from the Georgia Institute of Technology. His research interests include computer networks and protocols, large-scale multimedia systems, performance evaluation, and distributed systems. A summary of his research is available at www.cs.ucsb.edu. The slides used in the presentation described below are available at www.cra.org/Activities/fellows/almeroth.pdf.


As the second Computing Research Association (CRA) Digital Government Fellow, I was given the opportunity to speak at the Government Technology Conference 2001 (GTC) for the Western Region.

Both the fellowship and my talk shared the same goal: finding ways to bridge the gap between academic research and the government's use of the Internet. As technology evolves ever more rapidly, government agencies are challenged to keep up with the changes. The goal of my talk, therefore, was to help those in attendance better understand 1) where the Internet was evolving next, and 2) how that evolution would happen.

Given the close ties between my research in one-to-many (multicast) communication and related applications, and my work with the Internet2 initiative, the logical choice was a talk entitled "The Evolution of Next-Generation Internet Services and Applications." I expected that the audience would be interested in hearing where the Internet was likely to evolve, what kinds of applications and services were already being deployed, and what kinds of technologies they would soon be expected to deploy.

For an academic, the conference represented a truly unusual environment. While my colleagues and I often interact in a conference setting, this conference was unusual in that its topics were a far cry from hardcore networking. Some of the sessions I looked in on covered topics like record preservation in the digital age, privacy, and launching government on the Web. There were some seemingly traditional sessions like "Designing High Availability Systems," but even these took an unfamiliar perspective. For example, one question in that session was about how important uninterruptible power supplies (UPSs) were, given the likelihood of rolling blackouts in California. The sessions presented before mine were a good reminder to be prepared for a very diverse audience.

In the first part of my talk, I polled the audience to determine the kinds of jobs and responsibilities they had. About half the audience dealt with and understood the network; the rest simply saw it as a black box. This second group held positions like "City Manager" and used the Internet as a service. They were less concerned with how the Internet worked than with what the future Internet would be capable of doing. Even the half of the audience that was aware of the details beyond the host was still very focused on the short term. These audience members were responsible for keeping their own networks running; hence, they were less interested in the theory behind next-generation Internet services and more interested in what they would have to deploy and how they would manage it.

My talk followed the theme that both applications and the network are in flux: applications adapt to the communication services the network provides, and the network evolves to offer new services that enable an even richer set of applications. My premise, however, was that in practice the Internet no longer evolves very much. For all the talk of evolution, the Internet still provides only best-effort delivery of IP packets. There is no quality of service to speak of, little IPv6 deployment, only a marginal amount of multicast deployment, and no active network-style services to speak of (except perhaps firewalls). The question I posed and sought to answer was why this is the case and what can be done about it.

The technical part of my talk focused on multicast. I showed the audience how multicast trees are dynamically built, gave an overview of tree construction protocols, and then turned to the issues of multicast traffic and group monitoring. My own research has focused on building both real-time diagnostic tools and long-term statistical collection mechanisms. An indirect result of this work has been an appreciation of the deployment challenges that go beyond the purely technical ones.
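
To give a flavor of how a source-based multicast tree can be constructed, the sketch below (in Python) builds a shortest-path tree rooted at the source with a breadth-first search over a small, hypothetical router topology, then prunes branches that reach no receivers. The topology, node names, and receiver set are illustrative assumptions, not the behavior of any particular tree construction protocol.

    from collections import deque

    # Hypothetical router topology as an adjacency list (assumption for illustration).
    topology = {
        "A": ["B", "C"],
        "B": ["A", "D"],
        "C": ["A", "D", "E"],
        "D": ["B", "C"],
        "E": ["C"],
    }

    def shortest_path_tree(source):
        """Compute parent pointers of a BFS shortest-path tree rooted at source."""
        parent = {source: None}
        queue = deque([source])
        while queue:
            node = queue.popleft()
            for neighbor in topology[node]:
                if neighbor not in parent:
                    parent[neighbor] = node
                    queue.append(neighbor)
        return parent

    def prune(parent, receivers):
        """Keep only the tree edges that lie on a path from the source to a receiver."""
        kept = set()
        for r in receivers:
            node = r
            while node is not None and node not in kept:
                kept.add(node)
                node = parent[node]
        return {n: p for n, p in parent.items() if n in kept}

    tree = prune(shortest_path_tree("A"), receivers={"D", "E"})
    print(tree)  # {'A': None, 'B': 'A', 'C': 'A', 'D': 'B', 'E': 'C'}

In a real protocol the tree is assembled hop by hop from join and prune messages rather than computed centrally; the sketch only illustrates the resulting structure.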

Monitoring multicast traffic is somewhat similar to monitoring unicast traffic, but there are important differences. The key difference derives from the simple fact that multicast traffic can be destined for multiple receivers, and this one-to-many abstraction brings added complexity to the task of delivering a packet. Instead of monitoring connectivity between pairs of users, multicast deals with potentially very large groups of users; and instead of monitoring the links along a single path, multicast deals with links organized into a tree.

The anonymity of group members and the use of the User Datagram Protocol (UDP) to carry multicast data make it difficult to monitor multicast groups. The current multicast model is an open service model: anyone can send data to a multicast group, and anyone can join the group and receive data from it. In this model, senders and receivers may not be known to one another. Support for dynamic groups makes multicast management more difficult still. In particular, reachability monitoring, the task of verifying whether multicast data from a session source can be received at a receiver site, requires additional mechanisms, because the current IP multicast service model provides no implicit group coordination or management and therefore no implicit way of knowing who the group members are.
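
To make the open service model concrete, the following sketch (Python, standard sockets) shows how a receiver joins a multicast group and how a sender transmits to it; neither side learns who else belongs to the group. The group address, port, and TTL are arbitrary values chosen for illustration.

    import socket
    import struct

    GROUP = "239.1.2.3"  # arbitrary administratively scoped group (assumption)
    PORT = 5000

    def receiver():
        """Join the group and wait for one datagram; no registration with any sender."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        # IP_ADD_MEMBERSHIP triggers an IGMP join on the local subnet; the sender
        # never sees this and cannot enumerate the group's members.
        mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                           socket.inet_aton("0.0.0.0"))
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        data, addr = sock.recvfrom(1500)
        print("received", data, "from", addr)

    def sender():
        """Transmit to the group address with no knowledge of the receivers."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
        sock.sendto(b"hello group", (GROUP, PORT))

Running receiver() in one process (or on one host) and sender() in another on the same subnet delivers the datagram; additional receivers can join the same group without the sender's code changing at all.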

As an example of why the specific characteristics of multicast pose more of a challenge than unicast, consider monitoring reachability. One mechanism for determining who the group members are, and whether reachability exists between source(s) and receiver(s), is the ping utility. In unicast, ping allows a source/receiver to test bi-directional reachability to a peer receiver/source. In multicast, because of the open service model and because ping requests are sent to a group instead of a single receiver, the source does not know from whom, or from how many group members, to expect responses.

This creates a number of problems. First, there is the problem of implosion, which can occur if a very large number of group members send a response within a small interval. Second, the responses that do arrive may come from only a subset of group members: receivers that never hear the ping request (because of a broken link on the forward path), and receivers that lack connectivity in the reverse direction, will not be heard at all. A multicast version of the ping tool that is truly analogous to the unicast ping, by contrast, should return reachability status for all the receivers in the group.
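
As a rough sketch of these issues, a naive multicast ping might look like the following, again with hypothetical group and port values. The source sends one probe to the group and collects unicast replies until a timeout; the randomized response delay on the member side is one common mitigation for implosion, and the collected set still reflects only the subset of members with working bi-directional connectivity.

    import random
    import socket
    import struct
    import time

    GROUP, PORT = "239.1.2.3", 5001  # hypothetical values for illustration

    def probe(timeout=3.0):
        """Send one probe to the group and gather unicast replies until timeout."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
        sock.sendto(b"mping-request", (GROUP, PORT))
        sock.settimeout(0.5)
        responders, deadline = set(), time.time() + timeout
        while time.time() < deadline:
            try:
                data, addr = sock.recvfrom(1500)
                responders.add(addr[0])
            except socket.timeout:
                pass
        # Only members with working two-way connectivity appear here; there is
        # no way to tell how many members never heard the probe or were unreachable.
        return responders

    def responder():
        """Group member: answer probes after a random delay to avoid implosion."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                           socket.inet_aton("0.0.0.0"))
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        while True:
            data, addr = sock.recvfrom(1500)
            time.sleep(random.uniform(0.0, 1.0))  # randomized delay against implosion
            sock.sendto(b"mping-reply", addr)     # unicast reply to the prober

Even with the randomized delay, the prober can only count the responders it happens to hear, which is exactly the reachability-monitoring gap described above.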

To finish the talk and round out the afternoon, I focused on Internet2 and its role in the development, refinement, and deployment of these kinds of advanced services. The Internet2 engineering working groups focus on doing in Internet2 what is extremely difficult in the commodity Internet (largely because of its size). Through its member academic institutions, as well as affiliated government and industry partners, Internet2 is working to build a small but highly advanced next-generation Internet infrastructure.

While the audience was certainly impressed, they were also a bit confused. The part I did not explain very well was how Internet2 integrates with the commodity Internet. Once one understands that Internet2 is already integrated into the commodity Internet, that participation in Internet2 does not require wholesale replacement of an existing enterprise's infrastructure, and that advanced services will be deployed incrementally, the usefulness of Internet2 is much easier to see.

What I have already learned from the CRA Digital Government Fellowship is that a wide gap exists between the kinds of research that academics do and the roadblocks that stand in the way of deploying Internet-based technology in the U.S. government. No doubt a better understanding of the importance and complexity of advanced Internet services will benefit everyone affected.
