The National LambdaRail

Cyberinfrastructure for Tomorrow's Research and Education

David Farber, National LambdaRail and Carnegie Mellon University
Tom West, National LambdaRail


The demands of leading-edge science increasingly require networking capabilities beyond those currently available from even the most advanced research and education networks. As network-enabled collaboration and access to remote resources become central to science and education, researchers often spend significant time and resources securing the specialized networking resources they need to conduct their research. As a result, there is less time and fewer resources available to conduct the research itself.

New technology holds the promise of more easily providing the networking capabilities researchers require. Increasingly, the best option for ensuring that this technology and these capabilities are available appears to be for the research and education community to own and manage the underlying network infrastructure. This facilities-owned approach is largely unprecedented in the history of research and education networking, yet it promises unique benefits. Ownership provides the control and flexibility, as well as the efficiency and effectiveness, needed to meet research and education's uniquely demanding networking requirements.

A new global network infrastructure owned and operated by the research and education community is being developed, deployed, and used. In the United States, a nationwide infrastructure is being built by the National LambdaRail (NLR) organization, in collaboration with scientists and network researchers, with leadership from the academic community, and in partnership with industry and the federal government. Furthermore, NLR both leverages, and provides leverage for, existing and new regional and local efforts to deploy academic-owned network infrastructure.

History of Research Networking

To understand how the most recent movement in research and education (R&E) networking differs from those of the past, and how unique the capabilities it provides are, let us look back at how R&E networking has developed in the United States over the past 35 years. In 1987, the initial NSFNET backbone provided just 56 kilobits per second of bandwidth. Even in 1991, only 1.5 megabits per second were available on the backbone, less than many current home broadband connections. Today, nationwide R&E networks have links of 10 gigabits per second (Gbps), nearly 7000 times their capacity just 15 years ago. Yet it is increasingly apparent that even this is not enough capacity to meet emerging demands.

It is also important to realize that, tracing the development of today’s Internet back to the ARPANET of 1969, pioneers from the university community, with the support of government and industry, have provided leadership for network development to meet the needs of research and education. While many share in the development and evolution of the Internet as we know it today, university-based researchers played a key role both in developing fundamental Internet technologies and in providing large-scale testbeds that put those technologies to work and drove their further development.

In the early 1990s, the research and education community realized that it had lost control of this critical resource to the telecommunications industry. In conjunction with funding from the National Science Foundation for universities' high-performance networking connections, and in partnership with industry and other government agencies, Internet2 was formed in 1996 and launched the nationwide Abilene network as a 2.4 Gbps backbone in 1998. Regional networks extended Abilene's capabilities to university campuses, and the Abilene backbone itself was upgraded to 10 Gbps in 2003.

Today, we have multiple networks for research and education running over multiple global, national, regional, and institutional infrastructures. Some of these networks, such as the Internet2 Abilene backbone, are high performance networks that are shared by researchers from numerous disciplines. Other, more specialized networks, such as ESnet, are designed to serve the needs of a subset of the entire research community.

An important common characteristic of all the R&E networks deployed over the past three decades is that most of the underlying infrastructure has not been owned by the research and education community. Rather, the networks have been built from circuits leased from traditional telecommunications companies. It has also been true that the capabilities required by leading-edge science and education have often not been available as off-the-shelf services from commercial providers. Therefore, to meet its requirements the R&E community has needed to cobble together circuits and services from multiple providers. As a consequence, research groups have historically spent significant amounts of time and energy developing and securing the networking capabilities they need before conducting the scientific research that is their end goal.

Sea Change

There are two significant forces that are fundamentally changing the nature of research and education networking and providing an opportunity to reduce the amount of effort needed to provide scientists with the networking capabilities they need.

First, there is a growing urgency to develop new network technologies that scale to the growing needs of the worldwide R&E community and, later, to commodity Internet users. Undertaking this development requires an experimental testbed where network researchers can experiment with new approaches at all levels of networking technology. The results of this research will enable networks capable of supporting scientific projects in fields such as high-energy nuclear physics and radio astronomy, which require real-time collaboration among scientists and manipulation of enormous data sets. Already, individual projects in these fields can usefully consume a majority of the largest network links available. Together, even a few of them could potentially overwhelm existing advanced research and education networks. And these kinds of bandwidth-hungry applications are spreading: applications in almost every discipline are now emerging with the same need for big, broadband networks.

The second big development is the fortuitous availability of dark fiber in the United States and elsewhere as a result of the downturn in the telecommunications industry. The last four years have provided the research and education community a historic opportunity to migrate from leasing circuits from traditional telecommunications carriers to owning fiber outright. This fiber, combined with dense wavelength-division multiplexing (DWDM) technology, enables multiple R&E networks to be built and run over the same fiber pair. Taken together, fiber ownership and DWDM change the dynamics of deploying and managing dedicated research networks to support demanding scientific applications and large-scale network research. In short, we now have the components for owning and controlling a robust optical network infrastructure that will support multiple, disparate networks.

U.S. Optical Network Infrastructure

Nationwide networks in several other countries and continents already have leveraged the combination of optical fiber and DWDM to deploy operational network infrastructures. Notable among these are CA*Net 4 in Canada through the CANARIE organization, SURFnet6 in the Netherlands through Stichting SURF, and AARNet in Australia. And others are emerging. For example, DANTE will soon be deploying the pan-European GÉANT2 network.

In the United States, campus and regional networks have led developments in this area since around 1995. A large number of institutions, especially research institutions such as the University of California Berkeley, have established on-campus, fiber-based network infrastructures that serve multiple networks. In many instances the institution's infrastructure extends far beyond the physical dimensions of the main campus, reaching out to university facilities in the surrounding community. Increasingly, these facilities are developed as part of the institutional infrastructure, such as that deployed by the University of California San Diego.

At the regional level in the United States, consortia of institutions within states like Texas and Florida have formed not-for-profit corporations to undertake regional infrastructure development. California pioneered this model beginning in 1997 with the formation of the Corporation for Education Network Initiatives in California (CENIC), which brought together public and private universities and community colleges. CENIC's CalREN optical fiber-based infrastructure provides multiple networks to serve this wide range of constituencies. Across the United States, roughly 15,000 miles of fiber-optic cable are controlled by regional network organizations. FiberCo, an organization created by Internet2, has been instrumental in facilitating the acquisition of much of this fiber.

Although regional optical network infrastructure development emerged only a few years ago, the formation of NLR has spurred a virtual explosion in the number of regional efforts. Less than three years ago, NLR was just a glimmer in the eyes of a few people in the United States. It started as a grassroots effort on the West Coast to link Seattle with San Diego, then evolved to include a redundant path from Seattle to Denver and from Denver to Los Angeles. NLR thus grew out of a regional effort, and it has in turn stimulated new regional developments.

National LambdaRail

The mission of the NLR is to build an advanced, nationwide network infrastructure that will support many types and levels of networks for research, clinical, and educational fields. This infrastructure consists of 11,500 miles of fiber and optical networking equipment, all of it owned by NLR. The infrastructure supports both experimental and production networks, fosters networking research, promotes next-generation applications, and facilitates interconnectivity among regional and international high-performance research and education networks.

The NLR infrastructure is composed of 30 segments, each of which can support at least 32 individual channels of light. On the northern routes, from Sunnyvale, California, to Jacksonville, Florida, an additional eight waves can be added. Each wave in each segment can support 10 Gbps, so there is the potential for 1072 channel-segments, each with 10 Gbps of capacity. Equally significant is that each channel-segment operates independently and can therefore support networks with different operational characteristics.
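The channel-segment figure above can be checked with simple arithmetic. The sketch below assumes (the article does not say) that 14 of the 30 segments lie on the northern routes and carry the extra eight waves, since 30 × 32 plus 14 × 8 yields the stated total:

```python
# Back-of-the-envelope capacity check for the NLR infrastructure.
SEGMENTS = 30              # fiber segments in the NLR footprint
BASE_WAVES = 32            # waves (light channels) per segment
NORTHERN_SEGMENTS = 14     # assumed count of northern-route segments
EXTRA_WAVES = 8            # additional waves on each northern segment
GBPS_PER_WAVE = 10         # capacity of each wave

channel_segments = SEGMENTS * BASE_WAVES + NORTHERN_SEGMENTS * EXTRA_WAVES
total_gbps = channel_segments * GBPS_PER_WAVE

print(channel_segments)  # 1072 channel-segments
print(total_gbps)        # 10720 Gbps (about 10.7 Tbps) of potential capacity
```

Because each channel-segment operates independently, this aggregate capacity can be partitioned among many networks with different operational characteristics rather than pooled into a single backbone.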

Deployment began in 2004, and phase one is complete, with the entire infrastructure on schedule to be finished by October 2005. Nearly 25 percent of the capacity is already in use, and it is anticipated that nearly 60 percent of the total capacity will be in use by late 2007. Planning for increased capacity and enhanced capabilities is already underway to ensure that NLR remains ready to meet the most demanding requirements of the research and education community.

NLR members and associates span a wide geographic and organizational range, including individual universities; boards of regents; consortia of institutions; not-for-profit corporations; a supercomputing center; a limited liability corporation; and Internet2, a not-for-profit organization that represents more than 300 organizations, including all the NLR members. In stark contrast to the government support provided to most R&E networking in the United States, NLR has been funded by the direct contributions of more than two dozen members and key corporate participants. The major strategic corporate participant has been Cisco Systems; NLR would not have happened without Cisco's commitment of major resources, including optical equipment, routers, and switches. Cisco also provided early and ongoing support for, and focus on, advancing network research. Level 3 Communications and WilTel Communications, NLR's predominant providers of fiber and related services, offered favorable consideration in the acquisition of fiber and the provision of related services.

There are two main audiences for the NLR: network researchers and researchers involved in big science applications, including supercomputing. The focus on network researchers is a distinguishing characteristic of NLR. Fifty percent of NLR capacity is being devoted to support network research projects under the auspices of a network research advisory council led by NLR Chief Scientist David Farber of Carnegie Mellon University.

The NLR Network Research Council gathers thought leaders to guide NLR's support of network research and provides a direct and enduring link to the community at the forefront of conceiving, developing, and testing revolutionary, not just evolutionary, networking technologies and capabilities. Directly engaging the network research community ensures that, as networking continues its fundamental shift toward optical technology, the NLR infrastructure will remain in the best possible position to support cutting-edge network research: work that is not possible in the laboratory or on any other national-scale network.

NLR already provides a unique, world-class nationwide testbed for network research. Dramatic experiments in new technologies such as dynamic wavelength provisioning and quantum encryption can be conducted without concerns about interrupting production services. Furthermore, the usage and performance of existing production services that use the NLR infrastructure can be examined in detail, providing the possibility for improving the capabilities of other networks that use those technologies.

The NLR infrastructure is already being used to support national-scale projects that require capabilities that today only NLR can provide:

  • The Extensible Terascale Facility (ETF), supported by the National Science Foundation, is a multimillion-dollar, multi-year effort that has built and deployed the TeraGrid, a world-class networking, computing, and storage infrastructure designed to engage the science and engineering community and catalyze new discoveries. The Pittsburgh Supercomputing Center, one of the original TeraGrid participants, was the first organization to use NLR to connect its facilities to the nationwide TeraGrid facility. Recently, the Texas Advanced Computing Center acquired a 10 Gbps wave from NLR to connect Austin to Chicago. Oak Ridge National Laboratory also is using NLR for back-up waves between Atlanta and Chicago as part of ETF.
  • The HOPI project of Internet2 is using NLR to explore the evolution of the Internet’s core. This project is engaging industry, regional, and international partners to examine a hybrid of packet switching and dynamically provisioned lambdas. It is using a wavelength on the entire NLR infrastructure footprint.
  • The CENIC organization and the Pacific Northwest GigaPOP are undertaking a joint project that uses NLR infrastructure to create, deploy, and operate Pacific Wave, an advanced, extensible peering facility along the entire Pacific Coast of the United States. Pacific Wave will create a new peering paradigm by removing the geographical barriers of traditional peering facilities. Pacific Wave will enable any U.S. or international network to connect at any location along the U.S. Pacific Coast facility, as well as the option to peer with any other Pacific Wave participant regardless of their physical connection.
  • The U.S. Department of Energy (DOE) UltraScience project is using NLR infrastructure to link Sunnyvale and Seattle with Chicago. The UltraScience Net is an experimental research testbed funded by DOE's Office of Science to develop networks with unprecedented capabilities to support distributed, large-scale science applications.
  • The OptIPuter is a powerful, distributed cyberinfrastructure supporting two major data-intensive scientific research and collaboration efforts in the Earth sciences and bioscience. OptIPuter is a five-year research program led by the University of California, San Diego and the University of Illinois at Chicago with several partners. NLR waves support the OptIPuter from University of California, San Diego and San Diego State University in the Southwest, to the University of Washington in the Northwest, to the University of Illinois at Chicago in the Midwest.
NLR and the Future of Research and Education

Cooperation and collaboration on common goals are the hallmark of the NLR. NLR provides a unique nationwide infrastructure that is able to provide the networking capabilities that are an increasingly critical part of the cyberinfrastructure required by the U.S. R&E community. This includes stable and reliable production networks at the regional, national, and international levels, as well as “breakable” experimental networks in support of network research. NLR also provides a locus for the symbiotic relationship between researchers using networking capabilities, and networking researchers looking to develop and test new network capabilities.

Large-scale scientific applications, many driven by supercomputing, are becoming routine. However, there is a looming collision between application requirements and network capacity. Ownership and control of the basic infrastructure can provide the most cost-effective way to meet the full range of networking needs, and it gives researchers a platform that minimizes the time they must spend connecting participants in large-scale research efforts.

An historic opportunity exists for the R&E community to leverage technology and achieve control over advanced network resources. This is an opportunity not only to meet today’s needs but also to lay the foundation for a new round of innovation. The R&E community has historically led the way in advanced networking and it can continue to do so.