The SETI@home project, launched in 1999, is a widely known example of a very simple grid computing project. Although it was not the first to use such techniques, and does not use all of the facilities of current grids, it has been followed by many others, covering tasks such as protein folding, research into drugs for cancer, mathematical problems and climate models. Most of these projects work by running as a screensaver on users' personal computers, which process small pieces of the overall data while the computer is either completely idle or lightly used.
The first general-purpose commercial grid (U.S. patent 6,463,457) was launched by Parabon Computation in 1999. A "general purpose" grid is a grid that is not "hardwired" to perform a specific task. For example, SETI@home's screensaver contains both code to process radio telescope data and code to handle retrieving work and returning results; the two bodies of code are intertwined into a single program. In a general-purpose grid, only the code required for retrieving work and returning results persists on the nodes, while the code required to perform the distributed work is sent to the nodes separately. In this way, the nodes of a general-purpose grid can easily be reprogrammed.
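The distinction can be sketched in a few lines of Python. This is a toy model, not Parabon's or SETI@home's actual protocol: a hardwired node has the processing code baked in, while a general-purpose node persists only a fetch/execute/return loop and loads the task code that arrives with each job.

```python
def hardwired_node(work_unit):
    """SETI@home-style node: the processing code is baked into the client."""
    return sum(work_unit)                  # stand-in for radio-telescope analysis

def general_purpose_node(job):
    """General-purpose node: receives the task code alongside the data."""
    namespace = {}
    exec(job["task_code"], namespace)      # load the task sent by the server
    # (a real grid would verify and sandbox code before executing it)
    return namespace["process"](job["work_unit"])

# The same node can be "reprogrammed" simply by sending different task code.
job = {
    "task_code": "def process(data):\n    return max(data) - min(data)",
    "work_unit": [3, 9, 4, 1],
}
print(general_purpose_node(job))           # -> 8
```

Reprogramming the grid then amounts to distributing a new `task_code` string, with no change to the software installed on the nodes.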
Grid computing offers a model for solving massive computational problems by making use of the unused resources (CPU cycles and/or disk storage) of large numbers of disparate, often desktop, computers treated as a virtual cluster embedded in a distributed telecommunications infrastructure. Its focus on supporting computation across administrative domains sets grid computing apart from traditional computer clusters and from traditional distributed computing.
Grids offer a way to solve Grand Challenge problems like protein folding, financial modelling, earthquake simulation, climate/weather modelling etc. Grids offer a way of using the information technology resources optimally inside an organisation. They also provide a means for offering information technology as a utility bureau for commercial and non-commercial clients, with those clients paying only for what they use, as with electricity or water.
Grid computing has the design goal of solving problems too big for any single supercomputer, whilst retaining the flexibility to work on multiple smaller problems. Thus grid computing provides a multi-user environment. Its secondary aims are: better exploitation of the available computing power, and catering for the intermittent demands of large computational exercises.
Grid computing involves sharing heterogeneous resources (based on different platforms, hardware/software architectures, and computer languages), located in different places belonging to different administrative domains over a network using open standards. In short, it involves virtualizing computing resources.
Grid computing is often confused with cluster computing. The key difference is that the resources comprising a grid are not required to be within the same administrative domain.
Functionally, one can classify grids into several types:
- Computational Grids (including CPU-scavenging grids), which focus primarily on computationally intensive operations.
- Data grids, or the controlled sharing and management of large amounts of distributed data.
- Equipment Grids, which have a primary piece of equipment, e.g. a telescope, where the surrounding grid is used to control the equipment remotely and to analyse the data produced.
Definitions of Grid Computing
The term Grid Computing originated in the early 1990s as a metaphor for making computer power as easy to access as an electric power Grid.
Today, there are many definitions of the term grid computing:
- Buyya: "A type of parallel and distributed system that enables the sharing, selection, and aggregation of geographically distributed autonomous resources dynamically at runtime depending on their availability, capability, performance, cost, and users' quality-of-service requirements".
- CERN, birthplace of the World Wide Web, talks of The Grid: "a service for sharing computer power and data storage capacity over the Internet."
- Pragmatically, grid computing is attractive to geographically-distributed non-profit collaborative research efforts like the NCSA Bioinformatics Grids such as BIRN: external grids.
- Grid computing is also attractive to large commercial enterprises with complex computation problems who aim to fully exploit their internal computing power: internal grids.
Platform Computing suggested a three-stage model of Departmental Grids, Enterprise Grids and Global Grids. These correspond to a firm initially utilising resources within a single group, e.g. an engineering department connecting desktop machines, clusters and equipment. This progresses to enterprise grids, where the computing resources of non-technical staff can be used for cycle-stealing and storage. A global grid is a connection of enterprise and departmental grids that can be used in a commercial or collaborative manner.
The Global Grid Forum
The Global Grid Forum (GGF) has the purpose of defining specifications for grid computing. GGF is a collaboration between industry and academia with significant support from both.
The Globus Alliance
The Globus Alliance implements some of the standards developed at the GGF through the Globus Toolkit, which has become the de facto standard for grid middleware. As a middleware component, it provides a standard platform for services to build upon, but grid computing needs other components as well, and many other tools operate to support a successful Grid environment. This situation resembles that of TCP/IP: the usefulness of the Internet emerged both from the success of TCP/IP and the establishment of applications such as newsgroups and webpages.
Globus has implementations of the GGF-defined protocols to provide:
- Resource management: Grid Resource Allocation & Management Protocol (GRAM)
- Information Services: Monitoring and Discovery Service (MDS)
- Security Services: Grid Security Infrastructure (GSI)
- Data Movement and Management: Global Access to Secondary Storage (GASS) and GridFTP
A number of tools function along with Globus to make grid computing a more robust platform, useful to high-performance computing communities. They include:
- Grid Portal Software such as GridPort and OGCE
- Grid Packaging Toolkit (GPT)
- MPICH-G2 (Grid Enabled MPI)
- Network Weather Service (NWS) (Quality-of-Service monitoring and statistics)
- Condor (CPU Cycle Scavenging) and Condor-G (Job Submission)
- Moab Grid Suite
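As a concrete illustration of how work reaches a Condor pool, jobs are described in a submit description file and handed to the scheduler with condor_submit. The sketch below assumes a hypothetical executable named analyse and input file work_unit_01.dat:

```
# Minimal Condor submit description file (vanilla universe)
universe   = vanilla
executable = analyse
arguments  = work_unit_01.dat
output     = job.out
error      = job.err
log        = job.log
queue
```

Submitting the file (e.g. condor_submit job.sub) queues one job; Condor then matches it to an idle machine in the pool, an example of the CPU-cycle scavenging described above.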
XML-based web services offer a way to access the diverse services/applications in a distributed environment. As of 2003 the worlds of grid computing and of web services have started to converge to offer Grid as a web service (Grid Service). The Open Grid Services Architecture (OGSA) has defined this environment, which will offer several functionalities adhering to the semantics of the Grid Service. The vision of OGSA is to describe and to build a well-defined suite of standard interfaces and behaviours that serve as a common framework for all Grid-enabled systems and applications.
Commercial grid computing offerings
Computing vendors began, in the 2000s, to offer grid solutions based either on the Globus Toolkit or on their own proprietary architectures. Confusion remains, in that vendors may badge their computing-on-demand or cluster offerings as grid computing.
Key vendors in grid computing:
- Cluster Resources, Inc.
- Moab Grid Suite
- Parabon Computation 
- IBM Grid Computing
- Sun Microsystems Grid Computing
- Oracle Corp. "Oracle Grid"
- HP Grid Computing
- United Devices 
- Platform Computing 
- 1st Port for Grid Computing (UK)
- Gigaspaces Enterprise Application Grid
- Mobile Agent Technologies - AgentOS
Grid computing reflects a conceptual framework rather than a physical resource. The Grid approach is used to provision a computational task with administratively distant resources. The focus of Grid technology is on the issues and requirements of flexible computational provisioning beyond the local (home) administrative domain.
Like the Internet, the Grid concept evolved from the computational needs of 'big science'. The Internet was developed to meet the need for a common communication medium between large, federally funded computing centers. These communication links led to resource and information sharing between the centers and eventually to the provision of access to them for additional users. Ad hoc resource-sharing 'procedures' among these original groups pointed the way toward standardisation of the protocols needed to communicate between any administrative domains. Current Grid technology can be viewed as an extension or application of this framework to create a more generic resource-sharing context.
The non-profit SETI@home project is one of the best-known scientific projects designed to make use of idle CPU cycles, even though it was not the first to pioneer the technique (other non-profit projects, such as distributed.net, preceded SETI@home). These programs generally run in the background or as a screensaver when the user is not using the full computing power of the PC. Many such projects have made progress in fields that would otherwise have required prohibitive investment or long delays in obtaining results.
A Grid environment is created to address resource needs; the use of those resources (i.e. CPU cycles, disk storage, data, software programs, peripherals, etc.) is usually characterized by their availability outside of the context of the local administrative domain. This 'external provisioning' approach entails creating a new administrative domain, referred to as a Virtual Organization (VO), with a distinct and separate set of administrative policies (the home administration's policies plus the external resources' administrative policies equal the VO's, i.e. the Grid's, administrative policies). The context for a Grid 'job execution' is distinguished by the requirements created when operating outside of the home administrative context. Grid technology (i.e. middleware) is employed to facilitate formalizing and complying with the Grid context associated with the application's execution.
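The "home policies plus external policies equals VO policies" arithmetic can be made concrete with a toy model (this is an illustration, not any real middleware's policy engine): treat each domain's policy as a set of permitted operations, and combine them by intersection, since an action is allowed in the VO only if every participating domain allows it.

```python
def vo_policy(home, *external):
    """Effective Virtual Organization policy: the operations permitted by the
    home domain AND by every external resource provider."""
    allowed = set(home)
    for domain in external:
        allowed &= set(domain)     # every domain must permit the action
    return allowed

home_policy     = {"run_job", "read_data", "write_data"}
provider_policy = {"run_job", "read_data"}     # external site forbids writes

print(sorted(vo_policy(home_policy, provider_policy)))
# -> ['read_data', 'run_job']
```

Real VO policies cover authentication, accounting and scheduling as well as permissions, but the principle is the same: the job's execution context is the combination of all participating domains' rules, which is exactly what the middleware must formalize.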
One characteristic that currently distinguishes Grid computing from distributed computing is the abstraction of a 'distributed resource' into a Grid resource. One result of abstraction is that it allows resource substitution to be more easily accomplished. Some of the overhead associated with this flexibility is reflected in the middleware layer and the temporal latency associated with the access of a Grid (or any distributed) resource. This overhead, especially the temporal latency, must be evaluated in terms of the impact on computational performance when a Grid resource is employed.
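The benefit and the cost of this abstraction can both be shown in a short sketch (the interfaces here are hypothetical, not a real middleware API): the application programs against an abstract resource, so a local CPU can be substituted for a remote one without changing application code, while the remote variant models the middleware and network latency the text describes.

```python
import time
from abc import ABC, abstractmethod

class GridResource(ABC):
    """Abstract resource: callers need not know where execution happens."""
    @abstractmethod
    def execute(self, task, data):
        ...

class LocalResource(GridResource):
    def execute(self, task, data):
        return task(data)

class RemoteResource(GridResource):
    def __init__(self, latency=0.05):
        self.latency = latency       # stand-in for middleware + network overhead
    def execute(self, task, data):
        time.sleep(self.latency)     # temporal latency of remote access
        return task(data)

def run(resource, data):
    return resource.execute(sum, data)   # same call, substitutable resource

print(run(LocalResource(), [1, 2, 3]))       # -> 6
print(run(RemoteResource(0.01), [1, 2, 3]))  # -> 6, but slower
```

The substitution costs nothing in application code, but the latency term is why a Grid resource must be evaluated against the job's performance requirements before it is used.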
Web-based resources, or Web-based resource access, are an appealing approach to Grid resource provisioning. A recent GGF evolutionary development of Grid middleware 're-factored' the architecture of the Grid resource concept to use the W3C WSDL (Web Services Description Language) to implement the concept of a WS-Resource. The stateless nature of the Web, while enhancing the ability to scale, can be a concern for applications that migrate from a stateful resource-access context to a Web-based stateless one. The GGF WS-Resource concept includes discussions on accommodating the statelessness associated with Web resource access.
The conceptual framework and ancillary infrastructure are evolving at a fast pace and include international participation. The business sector is actively involved in commercialization of the Grid framework, while the 'big science' sector is actively addressing the development environment and resource (i.e. performance) monitoring aspects. Activity is also observed in providing grid-enabled versions of HPC (High Performance Computing) tools. Activity in the domains of 'little science' appears to be scant at this time. The treatment in the GGF documentation series reflects the HPC roots of the Grid conceptual framework; this bias should not be interpreted as a restriction on applying the framework in other research domains or other computational contexts.
- List of distributed computing projects
- Web Services Resource Framework (WSRF)
- Object Management Group
- Parallel Virtual Machine (PVM)
- Message Passing Interface (MPI)
- Distributed computing
- Render farm
- Semantic Grid
- computer cluster
- Sun GridEngine
- virtual organization
- SDSC Storage resource broker (data grid)
- The Java Commodity Grid Toolkit (CoG) Kit
- information grid
- Grid and cluster computing using Java (LGPL)
- Media Grid™
- Open Grid Services Architecture (OGSA)
- Open Grid Services Infrastructure (OGSI)
- Enabling Grids for E-sciencE (EGEE)
- Antony Davies: Computational Intermediation and the Evolution of Computation as a Commodity, Applied Economics, June 2004, Online version
- Ian Foster, Carl Kesselman: The Grid: Blueprint for a New Computing Infrastructure, Morgan Kaufmann Publishers, ISBN 1558604758, Website
- Pawel Plaszczak, Rich Wellner, Jr.: Grid Computing: The Savvy Manager's Guide, Morgan Kaufmann Publishers, ISBN 0127425039, Online book companion
- Fran Berman, Anthony J. G. Hey, Geoffrey Fox: Grid Computing: Making The Global Infrastructure a Reality, Wiley, ISBN 0470853190, Online version
- Maozhen Li, Mark A. Baker: The Grid: Core Technologies, Wiley, ISBN 0470094176, Website
- CERN: The Grid Café - What is Grid?, viewed 04 Feb 2005.
- Roger Smith: Grid Computing: A Brief Technology Analysis, CTO Network Library, 2005.
- Tools, Frameworks, Middleware
- Globus Toolkit
- Java Commodity Grid Toolkit (CoG) Kit
- ProActive is a Java library for parallel, distributed, and concurrent computing with mobility and security
- Grid Engine, open source grid engine sponsored by Sun Microsystems, runs on many platforms
- Apple Xgrid, an easy-to-set-up grid solution for Mac OS X
- BioSimGrid: Grid database for biomolecular simulations
- The Condor project is a grid computing engine by the University of Wisconsin, and runs on many platforms
- Mobius Data Grid middleware
- The OGSA-DAI Data virtualisation project
- Moab Grid Suite
- Berkeley Open Infrastructure for Network Computing
- Projects for end-user participation (see also the List of distributed computing projects for more)
- Einstein@Home Search data from the Laser Interferometer Gravitational wave Observatory (LIGO) in the US and from the GEO 600 gravitational wave observatory in Germany for signals coming from rapidly rotating neutron stars, known as pulsars.
- LHC@home Improve the design of the CERN LHC particle accelerator.
- Climateprediction.net Improve the accuracy of long-term climate prediction.
- Predictor@home Solve biomedical questions and investigate protein-related diseases.
- WorldCommunityGrid.org A more recently created grid with the aim of running multiple projects on a single grid. From the home page "World Community Grid's mission is to create the largest public computing grid benefiting humanity. Our work is built on the belief that technological innovation combined with visionary scientific research and large-scale volunteerism can change our world for the better. "