This post is the first in a series of articles focusing on a great piece of hardware you may have seen in action at VMworld 2012 in Barcelona, in the Solutions Exchange hall.
By the end of 2012, over 50% of the applications running on x86 platforms will be virtualized. However, only about 20% of mission-critical applications have been virtualized so far.
Is it because IT departments do not trust virtualization platforms? Do they consider those platforms not stable enough to host mission-critical applications?
Over the last decade, VMware has shown that virtualization is a reality, and virtualized applications are often actually more stable when running on a trustworthy VMware platform.
So if it is not a stability or trust issue, what’s the reason IT departments haven’t yet virtualized the last bit?
Scaling out, also known as scaling horizontally, means adding more nodes to the infrastructure, such as adding a new host to a VMware cluster.
As computer prices drop and performance continues to increase, low-cost 'commodity' systems are a perfect fit for the scale-out approach and can be configured in large clusters to aggregate compute power.
For the last seven years, architects designing VMware virtual environments have been preaching a scale-out approach. One could argue with that approach, and as always, it depends. The pros are the low price of commodity hardware and the fact that usually only a few virtual machines per host are impacted whenever that commodity hardware fails. On the other side, the cons are that such a design requires more VMware licensing and a larger datacenter footprint, and those low-cost 'commodity' systems usually have only a small reservoir of resources.
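To make that trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. All figures (aggregate requirement, per-host core counts) are hypothetical assumptions for illustration, not VMware sizing guidance; the only real fact used is that vSphere is licensed per physical CPU socket.

```python
# Back-of-the-envelope comparison of scale-out vs scale-up.
# All capacity numbers below are hypothetical assumptions.

def hosts_needed(total_cores_required, cores_per_host):
    """Number of hosts needed to supply the required core count (ceiling division)."""
    return -(-total_cores_required // cores_per_host)

TOTAL_CORES = 320  # assumed aggregate compute requirement

# Scale-out: 2-socket 'commodity' hosts with 16 cores each (assumption)
commodity_hosts = hosts_needed(TOTAL_CORES, 16)

# Scale-up: 8-socket hosts with 80 cores each (assumption)
scaleup_hosts = hosts_needed(TOTAL_CORES, 80)

# vSphere is licensed per physical CPU socket, so licenses track socket count
commodity_licenses = commodity_hosts * 2
scaleup_licenses = scaleup_hosts * 8

print(commodity_hosts, scaleup_hosts)        # 20 vs 4 hosts
print(commodity_licenses, scaleup_licenses)  # 40 vs 32 socket licenses
```

Under these assumptions the scale-out design needs five times as many hosts to manage and more socket licenses, which is exactly the licensing and footprint downside described above; the flip side is that losing one of the 20 small hosts impacts far fewer VMs than losing one of the 4 big ones.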
Scaling up, also known as scaling vertically, means adding resources to a single host, typically CPUs and memory.
Hosts of that kind are usually beefier. They support 4 processor sockets with up to 512GB of memory. You can even find beefier systems supporting up to 8 sockets and 1TB of memory, and some of us have been lucky enough to witness systems supporting up to 16 sockets and 4TB of memory. No, these are not mainframes or the like, but x86-based systems.
Moving to the so-called second wave of virtualization, that is, bringing the agility of virtualization to business-critical applications, is placing today's enterprise VMware clusters under enormous stress. The challenges are:
- Inadequate scaling of compute capabilities. Supporting highly demanding workloads is an issue on resource-limited, low-cost 'commodity' systems.
- Insufficient reliability. Commodity hardware, or hardware built from 'commodity' components, can be seen as less reliable. Reliability can be addressed with features I will talk about in the next articles.
- Increased management complexity and operating cost. It is easier to manage 100 hosts than 1,000, and by the same token, managing 10 hosts is even easier than managing 100. The same goes for OPEX: 10 hosts cost much less to operate than 1,000.
A scale-up approach is a perfect fit for business-critical applications requiring huge resources. Monster VM, hellooo! Power-hungry business-critical applications such as large databases, huge ERP systems, big data analytics, Java-based applications, etc. will directly benefit from a scale-up approach.
With the introduction of VMware vSphere 5, the amount of resources available to a single VM increased fourfold compared to the previous version (from 8 vCPUs to 32 vCPUs), as shown in the picture below.
And lately, with the release of VMware vSphere 5.1, the monster VM beefed up one more time, to 64 vCPUs.
For a vSphere 5.1 monster VM to do any work, the hypervisor has to find and schedule 64 physical CPU cores. There are very few systems out there able to offer 64 cores, and even fewer capable of running 16 processor sockets, 160 cores… Here is a hint: it starts with a B…
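As a quick sanity check of that arithmetic, here is a small Python sketch. The 10-cores-per-socket figure is an assumption taken from the 16-socket, 160-core example above, and the check deliberately ignores hyper-threading and the relaxed co-scheduling the ESXi scheduler actually applies.

```python
# Can a given host physically back a 64-vCPU monster VM?
# Assumes 10 cores per socket (matching the 16-socket / 160-core example)
# and ignores hyper-threading and CPU-scheduler relaxations for simplicity.

CORES_PER_SOCKET = 10  # assumption for illustration

def can_run_monster_vm(sockets, vcpus=64, cores_per_socket=CORES_PER_SOCKET):
    """True if the host has at least as many physical cores as the VM has vCPUs."""
    return sockets * cores_per_socket >= vcpus

for sockets in (2, 4, 8, 16):
    cores = sockets * CORES_PER_SOCKET
    print(sockets, cores, can_run_monster_vm(sockets))
# 2 sockets, 20 cores  -> False
# 4 sockets, 40 cores  -> False
# 8 sockets, 80 cores  -> True
# 16 sockets, 160 cores -> True
```

Under these assumptions, only the 8- and 16-socket classes of systems can give a 64-vCPU monster VM a full set of physical cores, which is why this series focuses on that class of hardware.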
…To be continued!