Meet Paul. He’s a software developer and, like most people in his trade, he works in the cloud. This gives him access to powerful computing capabilities without having to invest in a supercomputer. It also lets him collaborate with team members across the globe – they no longer have to be in the same office.
Microsoft Azure is a prime example of a public cloud platform used by people like Paul.
But just how does this work?
A traditional computer has a tight connection between the hardware and the operating system – the hardware and its accompanying functionality are permanently or semi-permanently encoded in silicon. Azure breaks this connection by introducing an abstraction layer called the hypervisor.
The hypervisor is software that emulates a real computer, with all its functions, on top of the physical hardware. Each of these emulated computers is what we call a “virtual machine”. Multiple virtual machines can run at the same time, each with any compatible operating system, such as Windows or Linux. In effect, this gives people like Paul access to an incredibly powerful computer without having to purchase one.
Azure’s computing capacity is enormous because this virtualization process is repeated over and over again in datacenters across the globe. Each datacenter contains physical hardware that simulates the specialist hardware customers like Paul need, as and when they need it.
A datacenter is organized into server racks. Each rack holds many server blades, a network switch and a PDU (power distribution unit). The blades are the “brains” where computation is executed with the help of hypervisors, the network switch connects the blades to the network, and the PDU supplies power. Racks are often grouped together in clusters for ease of management.
Most servers in a cluster are used to run virtualized hardware instances, while a portion is set aside to function as the fabric controller – more on this later.
Overseeing all of this, and connected to the fabric controller, is the orchestrator. Just like the conductor of an orchestra, the orchestrator manages the workloads that result from user requests. Users like Paul and his team connect to the orchestrator via a designated API (application programming interface), using one of several available tools, such as the Azure portal.
When Paul requests a virtual machine, the orchestrator determines what’s needed, packages everything and sends it to the best-suited server, where the package is handed to the fabric controller. The fabric controller creates the virtual machine and gives Paul access to it.
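To make this concrete, here is a minimal sketch of what such a request can look like from Paul’s side, using the Azure SDK for Python (azure-identity and azure-mgmt-compute). The subscription ID, resource group, VM name, password and network interface are placeholders, and the chosen image and VM size are assumptions; the same request could equally be made through the Azure portal or the Azure CLI.

```python
# A minimal sketch of requesting a virtual machine through Azure's management
# API with the Python SDK (azure-identity + azure-mgmt-compute). All names,
# IDs and credentials below are placeholders, not real resources.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()  # authenticates the caller against Azure
compute = ComputeManagementClient(credential, subscription_id="<subscription-id>")

# Ask for a new VM. Provisioning is asynchronous: the SDK returns a poller
# that completes once the fabric has built the machine.
poller = compute.virtual_machines.begin_create_or_update(
    resource_group_name="paul-dev-rg",      # hypothetical resource group
    vm_name="paul-build-vm",
    parameters={
        "location": "westeurope",
        "hardware_profile": {"vm_size": "Standard_B2s"},
        "storage_profile": {
            "image_reference": {
                "publisher": "Canonical",
                "offer": "0001-com-ubuntu-server-jammy",
                "sku": "22_04-lts",
                "version": "latest",
            }
        },
        "os_profile": {
            "computer_name": "paul-build-vm",
            "admin_username": "paul",
            "admin_password": "<a-strong-password>",
        },
        "network_profile": {
            # a network interface created beforehand, referenced by its resource ID
            "network_interfaces": [{"id": "<existing-nic-resource-id>"}]
        },
    },
)
vm = poller.result()  # blocks until the VM exists
print(f"Provisioned {vm.name} ({vm.hardware_profile.vm_size})")
```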
Using virtual machines in this manner creates agility for Paul and his team in every step of their workflow – development, deployment and management of applications and services. Unfortunately, this has a serious, unintended downside: often, unauthorized resources are created or resources are left running when they’re not in use. Cloud resources tend to be “on” by default.
Since cloud service providers, like Microsoft Azure, incur costs to run these resources, clients – like Paul’s company – are charged for the time a resource is “on”. This means Paul’s company pays for resources even when they are not in use, much like leaving your lights on at home when you’re not there. Resources wasted in this manner can account for as much as 65% of Paul’s employer’s cloud expenditure. Given how many resources medium to large companies run in the cloud, this adds up to a hefty sum.
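As an illustration, the sketch below shows one way such waste could be trimmed by hand with the Azure SDK for Python: list every VM in a subscription, check its power state, and deallocate the ones that are still running (a deallocated VM stops accruing compute charges). The subscription ID and the “keep-alive” tag rule are assumptions made purely for illustration; a real tool would apply far more sophisticated criteria.

```python
# A rough sketch of finding VMs that were left running and deallocating them
# so they stop accruing compute charges. The subscription ID and the
# "keep-alive" tag rule are assumptions made for illustration only.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, subscription_id="<subscription-id>")

for vm in compute.virtual_machines.list_all():
    # The resource group sits at a fixed position in the VM's resource ID:
    # /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/...
    resource_group = vm.id.split("/")[4]
    view = compute.virtual_machines.instance_view(resource_group, vm.name)
    power_states = [s.code for s in view.statuses if s.code.startswith("PowerState/")]

    # Placeholder policy: anything still running and not tagged "keep-alive"
    # gets deallocated. Deallocated VMs no longer bill for compute time.
    if "PowerState/running" in power_states and (vm.tags or {}).get("keep-alive") != "true":
        print(f"Deallocating {vm.name} in {resource_group}")
        compute.virtual_machines.begin_deallocate(resource_group, vm.name).result()
```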
CloudSnooze is here to help. We are able to automatically identify and eliminate your wasted cloud resources. The funds Paul’s company – and you, if you work with us – save here can now be applied where it matters: growing your business. To find out more on how we can help you Snooze and Save, visit our website.