Why and Why Not Cloud

Let us try to understand the benefits of cloud computing through its fundamentals.

To understand cloud computing, it is wise to first understand what forced its invention. Let us see how things worked before the cloud, so that you understand the real need for it and the key benefits it brings, in plain, simple language, free of jargon.

Fig. 1 shows a user accessing an application over the internet. The application could be anything, but for this example let us say it is Amazon.com, the world's largest online retailer. In this case it looks very simple: a user accesses Amazon.com, browses, and buys some products. Remember, this is the era before cloud computing was invented.

What we are interested in is how the application is hosted, i.e., the details of the server, the resources in that server in terms of memory, storage, CPU, etc., and how the application uses those resources. Again, remember we are talking about the era before cloud computing was born.

Figure 2

As fig. 2 shows, to run this application we need a hardware server with some amount of CPU, memory and storage, which will be utilized by an operating system like Windows or Linux. But how does somebody come up with the right amount of those resources? Of course, the developer knows the resources required for the application to handle a fixed number of requests.

Figure 3

But since this is Amazon, the number of requests varies drastically across seasons, and sometimes even across different hours of the same day. As in fig. 3, what happens when there are thousands or even millions of requests? The only way to accommodate that in the pre-cloud era was to forecast the highest number of requests that would come in, and buy or upgrade the server with enough resources. For example, do capacity planning and say that to handle 1 million requests we need a high-end server with a huge amount of CPU, memory and storage.
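To make that capacity-planning step concrete, here is a minimal sketch in Python. Every number in it (the request rate, CPU time and memory per request, the safety margin) is an assumption made up for illustration, not a real Amazon.com figure:

```python
# Capacity planning for the forecast peak, with made-up numbers.
PEAK_REQUESTS_PER_SEC = 1_000_000 / 3600  # assume 1M requests in the peak hour
CPU_SEC_PER_REQUEST = 0.05                # assumed CPU time per request
MEM_MB_PER_REQUEST = 2.0                  # assumed memory per in-flight request
AVG_REQUEST_DURATION_SEC = 0.2            # assumed time a request stays in flight
HEADROOM = 1.3                            # 30% safety margin

# Cores needed = arrival rate x CPU time per request, plus headroom.
cores_needed = PEAK_REQUESTS_PER_SEC * CPU_SEC_PER_REQUEST * HEADROOM

# Concurrent requests (Little's law) = arrival rate x time in the system.
concurrent_requests = PEAK_REQUESTS_PER_SEC * AVG_REQUEST_DURATION_SEC
memory_needed_gb = concurrent_requests * MEM_MB_PER_REQUEST / 1024

print(f"Buy a server with ~{cores_needed:.0f} cores "
      f"and ~{memory_needed_gb:.2f} GB RAM, sized for the peak")
```

Whatever the exact numbers, the point is the same: the hardware is bought for the peak, and the next paragraph shows what that means for the rest of the year.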

Maybe you have already guessed the problem with this approach. Apart from the times when there is a huge amount of traffic, these resources are underutilized: businesses pay for these servers, yet they are used to their fullest capacity only in those holiday seasons when traffic peaks. There are more problems still. Let us say the architect finds a new or better database for the application. He has to refactor the complete application and repackage it to install it again, and when he does the reinstallation he has to make sure the server the application currently runs on supports the new database. If it does not, the business needs to buy a new server, or at least upgrade some hardware components, which costs yet more money.

So let us list the problems with this approach:

  • Cost: As we just saw, the business pays up front for a server sized for peak traffic, and outside those peaks the resources sit underutilized. Let us call this the cost issue.
  • Scaling Issue: As we saw, we need a high-end system with loads of memory/CPU/storage to handle high traffic, and companies or their network/application architects do capacity planning for it. But what if, for some reason, traffic turns out higher than anticipated at planning time? Then the company has to procure new hardware all over again. In technical terms, we say this model is not easily scalable.
  • Flexibility Issue: We are not able to adjust resources, and therefore expense, to the need. If traffic is low I should be able to use fewer resources, and if traffic is high I should be able to use more; that is not possible with this model, because increasing or decreasing resources means making hardware changes. In technical terms, we call this not flexible.
  • Security: Remember, these servers are physically accessed by humans in the datacenter; when a hardware change is needed, someone touches the machine. Also, in this model there is no granular restriction on where the server and its contents can be accessed from; all the data on the server has almost the same access restrictions. Let us call these security concerns.
  • Visibility: As we saw under security, almost all data residing on the server has the same access restrictions. Who accessed it, which part of the data was accessed, and at what time? Those details are not provided by this model. Let us call this visibility.

While there are many more benefits to the cloud, let us stick to these five items for the time being.

While companies were running their businesses with these kinds of issues, there already existed a technology that had gone without real use for a very long time: virtualization. What virtualization does is allow multiple operating systems to run on the same hardware by providing virtual instances of our limited resources: CPU, memory and storage. Applications can then be installed on these operating systems.

That was a great shift and eased a few of the issues to an extent. The architecture changed from the previous one to the new one shown in figure 4 below. I can now run multiple applications, or multiple instances of the same application, and load-balance the incoming requests across them. But remember, there is still a limit on the total CPU, RAM and storage, because you can only virtualize up to however much hardware RAM/CPU/storage you actually have. That means the problems we listed above are still not solved.

Figure 4
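As a toy illustration of load-balancing requests across those virtual instances, here is a minimal round-robin sketch in Python; the instance names are invented for the example, and a real load balancer would of course forward network traffic rather than print:

```python
from itertools import cycle

# Hypothetical app instances running as VMs on one physical host.
instances = ["app-vm-1", "app-vm-2", "app-vm-3"]
next_instance = cycle(instances)  # endless round-robin iterator

def route(request_id: int) -> str:
    """Hand each incoming request to the next VM in turn."""
    target = next(next_instance)
    print(f"request {request_id} -> {target}")
    return target

for req in range(6):
    route(req)
```

Each VM gets every third request, so the load is spread evenly; but all three VMs still share one physical machine's CPU, RAM and storage.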

Then how about this: what if I could pool these resources and run some automation that creates the virtual resources and deploys the application as and when needed? Is it possible? If it is, then companies could rent that out as a service to host applications and store data.

It is possible, and that is what gave birth to today's cloud-based data center.

It will look like figure 5: pools of resources are shared, and applications can be created using whatever resources they require.

Users should be able to create virtual CPU, virtual storage and virtual memory in a few clicks or programmatically. If they want to increase any of these virtual resources, they can do it in a few clicks or by running a small program. On top of those virtual resources, an operating system and applications can be installed.
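To give a feel for what "programmatically" means here, below is a self-contained toy model of carving virtual resources out of a shared pool. The class and method names are invented for illustration; real providers such as AWS, Azure and GCP expose the same idea through their own SDKs:

```python
from dataclasses import dataclass

@dataclass
class VirtualMachine:
    """A virtual slice of CPU, memory and storage."""
    name: str
    vcpus: int
    memory_gb: int
    disk_gb: int

class ResourcePool:
    """A shared pool of hardware that hands out virtual slices on demand."""

    def __init__(self, total_vcpus: int, total_memory_gb: int, total_disk_gb: int):
        self.free_vcpus = total_vcpus
        self.free_memory_gb = total_memory_gb
        self.free_disk_gb = total_disk_gb

    def create_vm(self, name: str, vcpus: int, memory_gb: int, disk_gb: int) -> VirtualMachine:
        # Carve the requested virtual resources out of the shared pool.
        if (vcpus > self.free_vcpus or memory_gb > self.free_memory_gb
                or disk_gb > self.free_disk_gb):
            raise RuntimeError("pool exhausted")
        self.free_vcpus -= vcpus
        self.free_memory_gb -= memory_gb
        self.free_disk_gb -= disk_gb
        return VirtualMachine(name, vcpus, memory_gb, disk_gb)

pool = ResourcePool(total_vcpus=128, total_memory_gb=512, total_disk_gb=10_000)
vm = pool.create_vm("shop-frontend-1", vcpus=4, memory_gb=16, disk_gb=100)
print(vm, "| pool has", pool.free_vcpus, "vCPUs left")
```

The "few clicks" in a cloud console ultimately drive the same kind of call; the operating system and application are then installed on top of the slice the pool hands back.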

Let us look back at those five issues and see whether this new approach solves them.

  • Cost: With a few clicks or by running a program, we can increase or reduce resources as needed, and the new model can bill the customer based on the resources actually consumed. Taking the earlier example of users buying items on Amazon.com, we can write a program that watches the incoming traffic and requests more resources as traffic grows. Companies no longer need to buy a large server just because traffic spikes in certain seasons. Pay as you use.
  • Scaling Issue: Whenever more resources are needed, they can be added instantly, as we saw earlier. We can even have a program do it, with no human needed to watch over it; see the sketch after this list. This is auto scaling.
  • Flexibility Issue: Resources are added or removed as needed, based on demand. This means we now have flexibility and elasticity.
  • Security: The components the application needs, such as the database and the front-end infrastructure, can be hosted on different hosts, each with very specific access rights. Also, expanding resources takes just a few clicks; no physical human intervention is needed. This provides granular security.
  • Visibility: With granular access control comes deeper visibility. Who accessed each application, and at what time, is logged. Sometimes it is not a user accessing the application but one application using another; all of that is monitored and logged too, which provides deeper visibility.
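Here is the auto-scaling sketch promised in the scaling item above: a minimal control loop with made-up thresholds and a simulated traffic feed. A real system would query the provider's metrics and call its scaling API instead of printing:

```python
import random

REQUESTS_PER_INSTANCE = 500   # assumed capacity of one app instance (req/s)
MIN_INSTANCES = 1
MAX_INSTANCES = 20

def current_traffic() -> int:
    """Stand-in for a real metrics query; returns requests per second."""
    return random.randint(100, 8000)

def desired_instances(traffic: int) -> int:
    needed = -(-traffic // REQUESTS_PER_INSTANCE)  # ceiling division
    return max(MIN_INSTANCES, min(MAX_INSTANCES, needed))

instances = MIN_INSTANCES
for tick in range(5):  # a real loop would run forever on a schedule
    traffic = current_traffic()
    target = desired_instances(traffic)
    if target != instances:
        print(f"tick {tick}: {traffic} req/s -> scaling {instances} -> {target} instances")
        instances = target  # a real system would call the cloud provider's API here
    else:
        print(f"tick {tick}: {traffic} req/s -> holding at {instances} instances")
```

No human watches this loop; the program itself grows and shrinks the fleet, and with pay-as-you-use billing the cost follows the traffic.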

That is the world of cloud.
