Why virtualisation isn’t enough in cloud computing

by admin on August 26, 2016

While it is generally recognised that virtualisation is an important step in the move to cloud computing, as it enables efficient use of the underlying hardware and allows for true scalability, for virtualisation to be truly valuable it needs to understand the workloads that run on it and offer clear visibility of both the virtual and physical worlds.

On its own, virtualisation does not provide sufficient visibility into the multiple applications and services running at any one time. For this reason a primitive automation system can make errors, such as spinning up another virtual machine to offset the load on enterprise applications that are merely presumed to be overloaded.

That, at least, is the argument presented by Karthikeyan Subramaniam in his InfoWorld article last year, and his viewpoint is supported by experts at converged cloud vendor VCE.

“I agree absolutely because server virtualisation has created an unprecedented shift and transformation in the way datacentres are provisioned and managed”, affirms Archie Hendryx – VCE’s Principal vArchitect. He adds: “Server virtualisation has brought with it a new layer of abstraction and consequently a new challenge to monitor and optimise applications.”

Hendryx has also seen first-hand how customers address this challenge, “as a converged architecture enables customers to quickly embark on a virtualisation journey that mitigates risks and ensures that they increase their P to V ratio compared to standard deployments.”

In his view there is a need for new ways of monitoring that give end users more visibility into the complexities of their applications, their interdependencies, and how they correlate with the virtualised infrastructure. “Our customers are now looking at how they can bring an end-to-end monitoring solution to their virtualised infrastructure and applications to their environments”, he says. In his experience this is because customers want their applications to have the same benefits of orchestration, automation, resource distribution and reclamation that they obtained with their hypervisor.

Make abstraction work

Clive Longbottom, Client Services Director of analyst firm Quocirca, agrees, because virtualisation merely applies a layer between the hardware and the software. “Far more is required to make that abstraction do something to add value to the overall platform, because elasticity is required to allow the resources to be shared”, he explains. This entails a degree of virtual machine mobility, so that resource constraints can be dealt with by starting up new images or by moving existing ones to other areas of the virtual estate. He adds that predictive intelligence is necessary to ensure that virtual machines aren’t started up in a way that would cause the total collapse of the virtual environment.

“All of this needs tying into the capabilities of the facility, such as power distribution, cooling etc. which are just as important to a virtual platform as the server, storage and networking kit – and this requires some form of DCIM tooling to be present”, suggests Longbottom.

He adds that a hypervisor gives no visibility into the services running inside a virtual machine, because those services sit in a stack on top of the virtual machine, which in turn sits on top of the hypervisor. Proper systems and application management tools are therefore required, ones that understand both the virtual and physical worlds, to ensure that everything works smoothly together.
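To make this concrete, here is a minimal sketch in Python of the kind of predictive, physical-plus-virtual check Longbottom describes: before another image is started, the automation confirms that the request fits within the physical host’s headroom, including the facility-level figures a DCIM tool would supply. The data structures, field names and thresholds are illustrative assumptions rather than any vendor’s actual API.

```python
"""Sketch of a predictive placement check that consults both the virtual
request and the physical/facility headroom before starting a new VM image.
All names and numbers are illustrative assumptions, not a real product API."""

from dataclasses import dataclass


@dataclass
class HostCapacity:
    cpu_free_ghz: float        # unreserved CPU on the physical host
    mem_free_gb: float         # unreserved RAM on the physical host
    power_headroom_kw: float   # facility figure a DCIM tool would supply
    cooling_ok: bool           # simplistic stand-in for thermal status


@dataclass
class VmRequest:
    cpu_ghz: float
    mem_gb: float
    est_power_kw: float


def can_place(vm: VmRequest, host: HostCapacity, safety_margin: float = 0.2) -> bool:
    """Return True only if the host can absorb the VM and still keep a safety
    margin, so that starting the image cannot tip the platform over the edge."""
    cpu_ok = vm.cpu_ghz <= host.cpu_free_ghz * (1 - safety_margin)
    mem_ok = vm.mem_gb <= host.mem_free_gb * (1 - safety_margin)
    power_ok = vm.est_power_kw <= host.power_headroom_kw
    return cpu_ok and mem_ok and power_ok and host.cooling_ok


if __name__ == "__main__":
    host = HostCapacity(cpu_free_ghz=12.0, mem_free_gb=48.0,
                        power_headroom_kw=1.5, cooling_ok=True)
    vm = VmRequest(cpu_ghz=4.0, mem_gb=16.0, est_power_kw=0.3)
    print("start VM" if can_place(vm, host) else "defer or migrate elsewhere")
```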

Virtual and physical correlations

Hendryx adds: “By having a hypervisor you would have several operating system (OS) instances and applications. So for visibility you would need to correlate what is occurring on the virtual machine and the underlying physical server, with what is happening with the numerous applications.” He therefore believes the challenge is to understand the behaviour of an underlying hypervisor that has several applications running simultaneously on it. For example, if a memory issue arises within a virtual machine’s operating system, the application may have run out of memory or be memory-constrained, yet the hypervisor’s metrics may still show that sufficient memory is available.

Hendryx says these situations are quite common: “This is because the memory metrics – from a hypervisor perspective – are not reflective of the application as the hypervisor has no visibility into how its virtual machines are using their allocated memory.” The problem is that the hypervisor has no knowledge of whether the memory it has allocated to a virtual machine is being used for cache, paging or pooled memory. All it actually knows is that it has provisioned the memory, and this is why errors often occur.
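The blind spot can be illustrated with a short, hypothetical sketch: the hypervisor’s view records only what it has provisioned to a VM, while the guest records how that memory is actually being used, and correlating the two exposes cases where “plenty of memory” at the hypervisor hides paging or cache pressure inside the virtual machine. The metric names and figures below are assumptions for illustration, not real product counters.

```python
"""Illustration of the memory blind spot: compare the hypervisor's provisioned
view with the guest's usage view to spot hidden memory pressure. All metric
names and values are invented for the example."""

from typing import NamedTuple


class MemoryView(NamedTuple):
    hypervisor_granted_mb: int   # what the hypervisor has allocated to the VM
    guest_used_mb: int           # in-guest used memory (app + OS working set)
    guest_cache_mb: int          # memory the guest spends on file cache
    guest_swapped_mb: int        # paging activity the hypervisor cannot see


def diagnose(view: MemoryView) -> str:
    """Report whether 'sufficient memory' at the hypervisor hides pressure
    inside the guest."""
    headroom = view.hypervisor_granted_mb - view.guest_used_mb
    if view.guest_swapped_mb > 0:
        return (f"guest is paging ({view.guest_swapped_mb} MB) even though the "
                f"hypervisor still shows {headroom} MB of granted headroom")
    if view.guest_cache_mb > headroom:
        return "most 'free' memory is really guest cache; treat headroom with caution"
    return "hypervisor and guest views agree; no memory pressure detected"


if __name__ == "__main__":
    # Illustrative numbers: the hypervisor alone would call this VM healthy.
    sample = MemoryView(hypervisor_granted_mb=8192, guest_used_mb=7900,
                        guest_cache_mb=2048, guest_swapped_mb=512)
    print(diagnose(sample))
```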

Complexities

This lack of inherent visibility and correlation between the hypervisor, the operating system and the applications that run on them could cause another virtual machine to be spun up unnecessarily. “This disparity occurs because setting up a complex group of applications is far more complicated than setting up a virtual machine”, says Hendryx. Nor is cloning an encapsulated virtual machine the answer; that approach simply won’t work, because it fails to address what he describes as “the complexity of multi-tiered applications and their dynamically changing workloads.”

It is therefore essential to have application monitoring in place that correlates application metrics and interdependencies with the metrics the hypervisor is constantly collecting.
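As a rough illustration of that correlation, the sketch below gates a scale-out decision on both an application-level signal and a hypervisor-level signal, so another virtual machine is only spun up when the two views agree that the infrastructure is genuinely saturated. The metric names and thresholds are hypothetical.

```python
"""Hedged sketch of a correlated scale-out rule: act only when application
symptoms and hypervisor metrics both indicate real contention. Thresholds
and metric names are assumptions for illustration."""

def should_scale_out(app_p95_latency_ms: float,
                     app_queue_depth: int,
                     host_cpu_util: float,
                     vm_cpu_ready_pct: float) -> bool:
    """Scale out only when the app is struggling AND the hypervisor confirms
    contention; otherwise adding a VM just masks the real issue."""
    app_struggling = app_p95_latency_ms > 500 or app_queue_depth > 100
    infra_saturated = host_cpu_util > 0.85 or vm_cpu_ready_pct > 10.0
    return app_struggling and infra_saturated


if __name__ == "__main__":
    # The app looks slow but the infrastructure is idle: a naive automation
    # system would clone a VM here; the correlated check declines (prints False).
    print(should_scale_out(app_p95_latency_ms=900, app_queue_depth=150,
                           host_cpu_util=0.35, vm_cpu_ready_pct=1.0))
```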

“The other error that commonly occurs is caused when the process associated with provisioning is flawed and not addressed”, he comments. When this happens, automating the process simply automates the flaw, and further issues may arise. He adds that automation at the virtual machine level will fail to allocate resources adequately to the key applications, which will hurt response times and throughput, leading to poor performance.

Error prevention

Software-defined networking (SDN) can help prevent these situations from occurring. Nigel Moulton, VCE’s chief technology officer for the EMEA region, explains: “SDN, when combined with a software-defined datacentre (SDDC), provides a vision for a very high degree of automation within a datacentre and the networks that connect applications running on it to the users. Well-written applications that adhere to the standards being refined for these environments will go a long way to ensuring an error-free environment.” Even so, errors will still occur in some situations, so, he says, it is imperative to be able to troubleshoot and fix them quickly.

“In this sense a wholly virtualised environment, where the hypervisor provides most if not all of the functions such as a firewall, load-balancing, routing and switching, does not lend itself to the optimum resolution of a problem, as the hypervisor may provide too much abstraction from the hardware for low latency and highly complex functions”, he comments. For this reason he advises a blend of hardware control and visibility, tightly woven into the virtualised environment, as this is more effective from a troubleshooting perspective.

Longbottom agrees that SDN can help, adding that it needs some “software-defined computing and storage to bring it all together under the same SDDC.” Yet these remain abstractions removed from the physical layer, so something is still needed to deal with the physical side. This shouldn’t be a separate system, because the physical and logical aspects have to work, and be managed, together. “Only through this can a virtualised platform deliver on its promise”, he says.

Possible solutions

According to Hendryx, VCE has given customers visibility within a virtualised and converged cloud environment by deploying VMware’s vCenter Operations Manager to monitor the Vblock’s resource utilisation. He adds that “VMware’s Hyperic and Infrastructure Navigator has provided them with the visibility of virtual machine to application mapping as well as application performance monitoring, to give them the necessary correlation between applications, operating system, virtual machine and server…” This gives them the visibility that had previously been so lacking.

Nigel Moulton and Archie Hendryx concluded with their top five best-practice tips for virtualisation within a converged infrastructure:

1. If it’s successful and repeatable, it’s worth standardising and automating, because automation is what makes successful processes reliably repeatable.

2. Orchestrate: even when a converged infrastructure is deployed there will still be changes to roll out, such as operating system updates, capacity changes, security events, load-balancing or application completions. These all need to happen in a particular order, and the orchestration process itself can be automated (see the sketch after this list).

3. Simplify the front end: recognise that virtualisation has transformed your environment into a resource pool that end users should be able to request, provision for themselves and be charged for accordingly. This may mean eliminating manual processes in favour of automated workflows; simplification is what allows a business to realise the benefits of virtualisation.

4. Manage and monitor: you can’t manage and monitor what you can’t see. For this reason VCE customers have an API that provides visibility and context for all of the individual components within a Vblock. They benefit from integration with VMware’s vCenter and vCenter Operations Manager, and from VCE’s Vision IOS API. From these, VCE’s customers gain visibility and the ability to immediately discover, identify and validate all of the components and firmware levels within the converged infrastructure, as well as monitor its end-to-end health. This helps to eliminate bottlenecks by allowing over-provisioned resources to be reclaimed.

5. Create a software-defined datacentre: use virtualisation to take a software-defined approach to your infrastructure that responds automatically to the needs of the application, since the infrastructure is ultimately managed and configured to support an application, which in turn spans compute, storage and networking as well as databases and their interdependencies.
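The ordering referred to in tip 2 can be sketched in a few lines of Python using the standard library’s topological sorter: each routine change declares what must complete before it, and the orchestrator derives a safe execution order. The tasks and dependencies are invented for illustration; real orchestration tooling would supply its own catalogue and call out to the relevant APIs at each step.

```python
"""Minimal, generic sketch of orchestration ordering: routine changes declare
their prerequisites and are executed in dependency order. The task names are
invented for illustration only."""

from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each task maps to the set of tasks that must complete before it runs.
change_plan = {
    "snapshot_vms": set(),
    "apply_os_updates": {"snapshot_vms"},
    "resize_capacity": {"apply_os_updates"},
    "update_load_balancer": {"resize_capacity"},
    "run_smoke_tests": {"update_load_balancer"},
}

if __name__ == "__main__":
    for step in TopologicalSorter(change_plan).static_order():
        # In a real orchestrator each step would call out to the relevant API.
        print(f"executing: {step}")
```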

By following these tips, and by talking to vendors such as VCE, you can gain visibility that was previously unavailable to your organisation and prevent disastrous errors from occurring.

So it’s clear – virtualisation is not all that you need. Other tools, such as the ones mentioned in this article, are required to ensure smooth server and datacentre operation and performance. A hypervisor on its own will keep you in the dark, so to mitigate risk it’s worth considering an SDC and SDDC approach to catch errors before they run riot.

 

Source: http://www.cloudcomputing-news.net
