
Editor's note: In a companion article, Next Wave's Anne Forde talks to two European scientists working with and developing grid computing technology.

Grid computing is an emerging technology that transforms a computer infrastructure into an integrated, pervasive virtual environment for dynamic collaboration and shared resources anywhere in the world--providing users, especially in science, with unprecedented computing power, services, and information. For scientists planning a research career, grid computing offers the prospect of far more computing power in a collaborative environment, at a small fraction of the previous cost. Learning about grid computing can also provide new insights and skills that benefit scientists' careers.

What is grid computing?

Grid computing is based on three simple concepts:

  • Virtualization, severing the hard-coded association of resources to systems

  • Resource allocation and management, dynamically allocating resources on demand, and managing them

  • Provisioning, configuring resources whenever and wherever needed.
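
As a rough illustration of how these three concepts fit together, here is a minimal Python sketch of a virtualized resource pool. The class, method, and node names are invented for this example; they do not come from any real Grid toolkit.

    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        cpus: int          # total capacity on this machine
        free_cpus: int     # capacity not yet allocated

    class ResourcePool:
        """Virtualization: callers see one pool, not individual machines."""
        def __init__(self):
            self.nodes = []

        def add_node(self, name, cpus):
            # Provisioning: new resources can join whenever and wherever needed.
            self.nodes.append(Node(name, cpus, cpus))

        def allocate(self, cpus_needed):
            # Resource allocation: find capacity on demand, wherever it lives.
            for node in self.nodes:
                if node.free_cpus >= cpus_needed:
                    node.free_cpus -= cpus_needed
                    return node
            return None  # no capacity right now; a real grid would queue the job

    pool = ResourcePool()
    pool.add_node("lab-pc-1", cpus=4)
    pool.add_node("cluster-07", cpus=16)
    job_node = pool.allocate(cpus_needed=8)       # lands on cluster-07
    print(job_node.name if job_node else "queued")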

Grid computing adapts to and dynamically aligns with research or business needs; it takes advantage of widespread spare computing capacity and the ubiquity of the Internet, which ties these resources together. But the advantages of grid computing don't stop with this dynamic alignment. Today, applications are independently constructed, custom configured, and sized for peak load. That peak load may occur only once a month, once a quarter, or even once a year, leaving resources underutilized most of the time. This underutilization is accepted as a tradeoff: buyers must choose between provisioning enough capacity for peak loads and avoiding a large investment in hardware that sits idle. This is the problem that grid computing solves.
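
To make that tradeoff concrete, consider a hypothetical sizing calculation; the figures below are invented purely for illustration.

    # A workload sized for a peak that occurs only one day a month.
    peak_demand_cpus = 400
    average_demand_cpus = 80

    # Buying for the peak means most of the hardware is idle most of the time.
    utilization = average_demand_cpus / peak_demand_cpus
    print(f"average utilization when sized for peak: {utilization:.0%}")  # 20%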

Dynamic scaling is another basic component of grid computing. Because a grid infrastructure can be built from small, standard, interchangeable components, users can start small and simply add components as their needs grow. This means that grid computing can happen incrementally.
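
A minimal sketch of such incremental growth, assuming interchangeable four-CPU nodes; the numbers and the function are hypothetical.

    import math

    STANDARD_NODE_CPUS = 4           # small, standard, interchangeable component

    def nodes_to_add(demand_cpus, current_nodes):
        """How many standard nodes must join the grid to cover demand."""
        capacity = current_nodes * STANDARD_NODE_CPUS
        shortfall = max(0, demand_cpus - capacity)
        return math.ceil(shortfall / STANDARD_NODE_CPUS)

    nodes = 2                        # start small
    for demand in (6, 14, 30):       # demand grows over time
        nodes += nodes_to_add(demand, nodes)
        print(f"demand={demand} CPUs -> grid now has {nodes} nodes")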

The Origins of Grid Computing

Grid computing, like the Internet, was born out of the research and academic communities, driven by a real need to collaborate and share computational resources. Early Grid computing enabled research institutions to share more efficiently not only vast amounts of data but also the computational resources used to create those data. This not only let institutions shrink huge computational cycle times but also refined and improved the quality of their collaboration.

The Grid has its roots in the batch schedulers and cycle scavengers of the late 1980s and early 1990s. A second wave of evolution in the late nineties saw decentralized technologies such as peer-to-peer intersect with a growing demand for federated standards and the ability to deliver and charge for computing as if it were electricity or water, like a public utility. Electric power distribution uses this same federated architecture for sharing resources among utilities--called electric power grids--which is where Grid computing takes its name.

While the Internet enables computers to communicate with each other, Grid computing enables computers to work together. Grid computing is the next evolution of the Internet integrated with the next generation of distributed computing, peer-to-peer computing, virtual collaboration technologies and Web services, resulting in new, more powerful ways to conduct business and research.

Grid Computing: Economic Drivers

During the past decade, enterprises and institutions invested heavily in their IT infrastructure in order to support the increasing demands of their internal users and customers. The result of these investments has been development of an infrastructure that tends to be highly distributed but, in many cases, not easily managed.

A sizeable portion of all enterprise computing has been estimated to run at only 20% to 40% of its total capacity. This costly condition--a paid-for yet idle computing resource--has often resulted from three factors:

  • Only one application was deployed per operating environment

  • Systems were sized with performance headroom in mind, forecast from the most demanding workloads

  • The duration of peak loads was often short.
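
A back-of-the-envelope calculation shows what that idle capacity can cost. The server count and prices below are invented purely for illustration.

    servers = 100
    annual_cost_per_server = 5_000   # dollars, hypothetical
    avg_utilization = 0.30           # midpoint of the 20%-40% range above

    total_cost = servers * annual_cost_per_server
    idle_cost = total_cost * (1 - avg_utilization)
    print(f"${idle_cost:,.0f} of ${total_cost:,.0f} pays for idle capacity")
    # -> $350,000 of $500,000 pays for idle capacity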

With the introduction of simpler technologies, greater bandwidth, automation, and more easily configurable infrastructures, Grid computing provides a more efficient means of provisioning IT services and managing them cost-effectively.

The impact of leveraging Grid computing will be dramatic. With the inherent ability of Grid computing to provide nearly 100% uptime at a fraction of the cost of managing today's more static, fixed environments, both enterprises and institutional service providers can reap tremendous benefits.

There is also particular appeal in being able to intelligently allocate finite resources to the appropriate business or research applications in a distributed enterprise. This technique offers companies and institutions flexibility, whether in redistributing resources to address new opportunities or in enabling applications to better serve current clients.

A Sampling of Grid Computing Projects in Science

UK e-Science Programme The UK e-Science Programme sponsors a number of projects across several disciplines that make use of Grid computing technologies:

  • AstroGrid, enabling the virtual observatory

  • Axiope, data management and data sharing

  • BioSimGrid, making large biomolecular simulations accessible

  • ClimatePrediction.Net, addressing climatic uncertainty

  • Comb-e-Chem, exploiting combinatorial chemistry

  • DAME, diagnostics and decision support across the Grid

  • Discovery Net, information from high-throughput devices

  • e-Diamond, state-of-the-art technology for breast screening

  • e-Family, access to protein sequences and structures

  • GENIE, integrating earth models to give a complete picture

  • GridPP, a Grid for particle physics

  • IXI, medical imaging on the Grid

Large Hadron Collider Computing Grid Project The world's largest and most powerful particle accelerator, the Large Hadron Collider (LHC), is being constructed at CERN, the European Organization for Nuclear Research, near Geneva. The LHC will have enormous computational needs, and CERN plans to meet them by deploying a worldwide computational Grid service, integrating the capacity of scientific computing centers spread across Europe, America, and Asia into a virtual computing organization.

myGrid myGrid is a project targeted at developing open-source software to support personalized in silico experiments in biology on a Grid. A number of BioGrid projects are under way, including the Asia Pacific BioGrid Initiative, the North Carolina BioGrid, the Canadian BioGrid, the EUROGRID project and the Biomedical Informatics Research Network.

DOE Science Grid The DOE Science Grid is being developed and deployed across the U.S. Department of Energy, to provide an infrastructure to service advanced scientific applications and problem-solving frameworks.

Globus Alliance The Globus Alliance is a research and development project focused on enabling the application of Grid concepts to scientific and engineering computing. The group is engaged in building Grid applications, conducting research on technical and management challenges facing Grid computing, developing software (such as its Globus Toolkit) to support Grid computing, and proposing standards for further Grid development.

Distributed computing network projects

These undertakings ask individuals to make spare capacity on their PCs available for research.

  • ClimatePrediction.net

  • Compute Against Cancer

  • FightAIDS@Home

  • Genome@home

  • SETI@home

Clusters vs. Grids

Many people confuse Grid computing with cluster-based computing, but cluster computing is not truly distributed computing. Cluster computing ties together similar types of resources--machines in a single data center running similar operating systems--through special-purpose connectors to deliver a specific application. Grid computing, in contrast, embraces heterogeneity, supporting different software without the need for special connectors.

Grids are dynamic in nature, while clusters typically contain a static number of processors; there is no way to dynamically add or remove resources in a cluster environment. Grids are distributed over local or wide area networks and can dynamically add and remove resources without interrupting application services. Grid workload management software from Optena (my company), Platform, United Devices, and Sun (Grid Engine) can distribute this complex workload across a multitude of hardware types and configurations.
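
The contrast can be sketched in a few lines. The toy dispatcher below is illustrative only; real workload managers such as those named above are far more sophisticated.

    import random

    # Heterogeneous grid nodes: different operating systems and sizes.
    grid = [
        {"name": "linux-box",   "os": "linux",   "free_cpus": 8},
        {"name": "win-desktop", "os": "windows", "free_cpus": 2},
    ]

    def dispatch(task):
        """Send a task to any node that satisfies its requirements."""
        candidates = [n for n in grid
                      if n["free_cpus"] >= task["cpus"]
                      and task["os"] in (n["os"], "any")]
        if not candidates:
            return None
        node = random.choice(candidates)
        node["free_cpus"] -= task["cpus"]
        return node["name"]

    # Nodes join or leave without stopping the dispatcher; the next
    # dispatch() call simply sees the updated membership.
    grid.append({"name": "solaris-lab", "os": "solaris", "free_cpus": 16})
    print(dispatch({"cpus": 4, "os": "any"}))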

Grid Computing Standards

As large-scale shared-resource complexes became more commonplace, efforts to standardize the infrastructure and services began. Academic institutions were at the forefront of developing first-generation computing platforms based on Grid computing and set the stage for its wide use in industry, just as they did with the Internet. Soon enterprises as well as research organizations recognized the emerging Grid as a viable computing technology that solves problems today and provides a clear evolutionary path to the future.

As adoption grew, industry leaders and research institutions became heavily involved in developing and refining standards. While the Global Grid Forum acts as a clearinghouse for Grid standards, industry leaders also formed the Enterprise Grid Alliance, a consortium founded by Oracle, Optena, HP, Sun, and others to define the standards and interfaces required to encourage Grid computing for enterprise applications. IBM has also played a significant role in developing these standards and has contributed to the open-source, standards-based Globus Toolkit. As a result, many technology vendors have started incorporating Grid technology across all lines of their business, including services, hardware, and software.

The Future

There are over 400 million PCs around the world, and thus almost every organization is sitting atop enormous, widely distributed, unused computing capacity. An innovative example of a project making use of this capacity is SETI@home, the most popular and best-known distributed computing project on the Internet. The initiative harnesses idle PC computing cycles to work on the search for extraterrestrial intelligence. SETI@home is now running on more than 2.5 million PCs in 226 countries.
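
The pattern behind SETI@home and its kin is a simple fetch-compute-report loop that runs while a PC would otherwise be idle. The sketch below simulates that loop locally; the function names and fake work units are invented for illustration.

    import random
    import time

    def fetch_work():
        """Stand-in for downloading a work unit from the project server."""
        return [random.random() for _ in range(1024)]  # fake telescope samples

    def analyze(samples):
        """Stand-in for the real signal analysis done on the volunteer's PC."""
        return max(samples)         # e.g., the strongest candidate signal

    def report(result):
        """Stand-in for uploading the result back to the project server."""
        print(f"reporting candidate signal strength {result:.4f}")

    for _ in range(3):              # a real client loops whenever the PC is idle
        report(analyze(fetch_work()))
        time.sleep(1)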

Virtualization, the driving force behind Grid computing, can help reach this unused capacity. Web services, the foundation of the dynamic service-delivery mechanism in Grid computing, enable enterprises to deliver software as a service more economically and dynamically, scaling in or out as demand shrinks or grows. Third-generation Grid computing platforms (such as GridSpaces, made by Optena) take advantage of these advances in distributed computing.

Although the Internet and Grid computing are both new technologies, they have already proven themselves useful, and their future looks promising. As technologies, networks, and business models mature, I expect that grids will become commonplace for small and large enterprises and research institutions, linking their various resources to support human communication, data access, and computation. I also expect to see Grid computing emerge as a common service delivery platform, much as the Internet did for Web pages.

Surendra Reddy is founder and CEO of Optena Corp. in San Jose, California. Before founding Optena, Reddy worked at Oracle Corp. Reddy and Prof. Miron Livny at the University of Wisconsin have been working together since 1998 to build an enterprise Grid computing platform.