What is utility computing? How is it different from grid computing and cloud computing? Utility computing, or compute utility, is a service provisioning model in which resources and infrastructure are made available to the customer only when the customer actually needs them.
Like cloud computing, utility computing works on an on-demand service model; the difference is that the utility model specifically seeks to maximize resource efficiency and minimize the overall associated cost.
Definition of Utility Computing
In simple words, utility computing is the renting of computer resources such as hardware, software, and network capacity whenever the client requires them. This idea was first proposed by American computer scientist John McCarthy in 1961. He said, “If computers of the kind I have advocated become the computers of the future, then computing may someday be organized as a public utility just as the telephone system is a public utility… The computer utility could become the basis of a new and important industry.”
Utility computing is pay-per-use processing in which customers access the computers in a data center, usually over a private network, and are charged for the resources they actually use. This type of computing is also called on-demand service because you can scale up and scale down according to your requirements.
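To make the pay-per-use idea concrete, here is a minimal Python sketch of metered billing. The resource names and hourly rates are made up for illustration; real providers publish their own pricing.

```python
from dataclasses import dataclass

# Hypothetical hourly rates, for illustration only.
RATES_PER_HOUR = {
    "vcpu": 0.05,        # dollars per vCPU-hour
    "storage_gb": 0.01,  # dollars per GB-hour
}

@dataclass
class UsageRecord:
    resource: str  # key into RATES_PER_HOUR
    hours: float   # metered time the resource was actually in use

def monthly_charge(records: list) -> float:
    """Sum charges for metered usage only: idle capacity costs nothing."""
    return sum(RATES_PER_HOUR[r.resource] * r.hours for r in records)

# A customer who ran 4 vCPUs for 100 hours and kept 50 GB stored for 720 hours
usage = [UsageRecord("vcpu", 4 * 100), UsageRecord("storage_gb", 50 * 720)]
print(f"${monthly_charge(usage):.2f}")  # $380.00
```

The key point is in `monthly_charge`: the customer pays only for metered hours, which is how the utility model keeps idle resources off the customer's bill.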
If the service provider also supplies the application programs, it is called an application service provider, which falls under the umbrella of “Software as a Service”. A service provider can offer both software as a service and hardware as a service to its customers. You may have read about Amazon EC2 (Elastic Compute Cloud), one of the best-known service models Amazon provides to its customers.
Utility computing solutions provide resources on demand, including virtual servers, virtual storage, backups, software, and other IT services.
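As an illustration of renting a virtual server on demand, the sketch below uses the boto3 library (the AWS SDK for Python) to start and later terminate an EC2 instance. The AMI ID is a placeholder, and running this for real assumes configured AWS credentials and will incur charges.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Rent a small virtual server on demand.
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",   # placeholder; real AMI IDs are region-specific
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# Release the server when the work is done, so metered billing stops.
ec2.terminate_instances(InstanceIds=[instance_id])
```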
Utility Computing Advantages
The advantages of utility computing are:
- On-demand service
- Pay per use model
- Resources are not kept idle
- You can upgrade or downgrade according to your IT requirements
- Elasticity of systems (see the scaling sketch after this list)
- High availability
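Elasticity simply means capacity follows demand. Here is a toy Python sketch of a scale-up/scale-down rule; the CPU thresholds and instance limits are invented for illustration.

```python
def desired_instances(current: int, avg_cpu_percent: float,
                      minimum: int = 1, maximum: int = 10) -> int:
    """Toy autoscaling rule: add capacity under load, shed it when idle."""
    if avg_cpu_percent > 80:          # overloaded: scale up
        return min(current + 1, maximum)
    if avg_cpu_percent < 20:          # underused: scale down, freeing resources
        return max(current - 1, minimum)
    return current                    # within the target band: no change

print(desired_instances(current=3, avg_cpu_percent=92))  # -> 4
print(desired_instances(current=3, avg_cpu_percent=10))  # -> 2
```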
Difference Between Grid Computing and Utility Computing
In the case of grid computing, all the computers (nodes) are connected to each other to perform a specific task, and each computer contributes its own resources and memory. Grid computing is implemented in places where heavy computation is required and there is no strict time deadline for solving the problem.
In the case of utility computing, the resources are shared, and if one user is not using a resource, another user can access the same resource. It should be noted that utility computing can be implemented without implementing cloud computing.
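A toy sketch of that sharing, assuming a simple pool of named servers: when one user releases a resource, the next user can acquire the very same one.

```python
class ResourcePool:
    """Minimal shared pool: servers sit in `available` until someone needs them."""

    def __init__(self, servers):
        self.available = list(servers)
        self.in_use = {}  # user -> server

    def acquire(self, user):
        if not self.available:
            raise RuntimeError("no free servers")
        server = self.available.pop()
        self.in_use[user] = server
        return server

    def release(self, user):
        self.available.append(self.in_use.pop(user))

pool = ResourcePool(["server-1"])
pool.acquire("alice")        # alice takes server-1
pool.release("alice")        # alice stops using it
print(pool.acquire("bob"))   # bob now gets the same server-1
```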
Grid computing can be considered a weaker form of cloud computing in which virtualization of resources can be seen. If any node fails in grid computing, the instances of the program running on the other nodes are not affected; there is only a slight decrease in the program's performance.
[Figure: architecture framework of cloud computing, grid computing, utility computing, and centralized computing.]
Example of Utility Computing
One of the simplest examples of utility computing is a supercomputer that rents out its computing power: anyone can use the supercomputer's processing power and pays according to the time he or she uses it.
At the end of the discussion, we can say that utility computing is more of a business model than simply a technology. Cloud computing supports utility computing, but the reverse is not true.