There are plenty of articles, reviews, blogs and lab reports available that provide various comparisons of different software, hardware and architectural options for leveraging the benefits of server and storage virtualization.
I’m going to try to tackle the subject through the eyes of a “typical” IT director or manager at a small to mid-size business (SMB) … the kind of user we see a lot of here at Scale Computing, particularly since the launch of HC3, our completely integrated virtualization system that combines high-availability virtualization and storage technologies into a single, easy-to-manage system.
We find these users, or they find us, at various stages along their path to virtualization, from a room full of aging physical servers to a few VMs running on isolated hypervisors or beyond. Many are sorting through expensive quotes and considering whether they are ready to jump “all in” for the very first time with a few beefy, expensive servers and a shared storage SAN. This is where many of them stumble.
Most then find themselves worrying about how much they will have to pay for someone to set all of this up, not to mention learning to manage all the new components and systems, keeping things running, and expanding it down the line.
But let’s start with the simplest option. If you aren’t that worried about availability or uptime, or you trust that new servers and storage generally don’t fail, just buy the biggest server that covers your immediate needs and throw everything you can into VMs running on it, because that will usually be the lowest cost on a cost-per-VM basis. (There are exceptions: quite often multiple single-socket servers will be cheaper than, and outperform, a single larger machine, and as a bonus they lend themselves better to availability.) One thing will certainly be easier to manage: you will definitely know when it goes down, and you have a single point of failure rather than several. If the whole thing goes down, you fix or replace whatever broke, restore data from backups, and you are hopefully back close to where you started, or at least to your last good backup.
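To make the cost-per-VM comparison concrete, here is a minimal sketch. The prices and VM counts are entirely hypothetical placeholders, not real quotes; plug in the numbers from your own vendor proposals.

```python
def cost_per_vm(server_price, vms_supported):
    """Simple cost-per-VM metric for comparing server configurations."""
    return server_price / vms_supported

# Hypothetical example: one large dual-socket box vs. two single-socket boxes,
# each configuration sized to host the same 20 VMs in total.
big_server = cost_per_vm(20000, 20)       # one $20,000 server, 20 VMs
two_small = cost_per_vm(2 * 7000, 20)     # two $7,000 servers, 10 VMs each
```

In this made-up scenario the pair of smaller servers comes out cheaper per VM, which illustrates the exception noted above; with different hardware prices the big box can just as easily win.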
I can’t decide for you whether this is “good enough” for your company; you have to weigh the probabilities of various failures against your complete cost of downtime, including lost productivity, angry customers, lost revenue, and so on. It’s also legitimate to consider the impact on you and the IT staff, whether that’s being woken up and working sleepless nights to get things back up and running, cutting a vacation short, or hunting for a new job if things go sideways. But with more workloads becoming critical around the clock, and virtualization consolidating many applications and components onto a single system that becomes one large single point of failure, this “all your eggs in one basket” approach quickly becomes unacceptable to most organizations.
My point in starting with that example is that before you even begin comparing hypervisors and their differences, the number one thing you need to do is evaluate your requirements for availability and uptime, your needs and plans for growth, and the skills and resources at your disposal to build, manage and maintain the system.
As far as hypervisors go, low-level hypervisors (as distinct from their management tools) are for the most part very similar, and you can get free or low-cost single-server versions of VMware, Hyper-V, or open-source solutions such as KVM. All of them leverage capabilities built into modern 64-bit CPUs that offload most of the work of virtualizing the CPU and memory among multiple VMs.
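On a Linux host you can check for those CPU capabilities yourself: Intel’s extensions (VT-x) appear as the `vmx` flag in `/proc/cpuinfo`, and AMD’s (AMD-V) as `svm`. A minimal sketch:

```python
def has_hw_virtualization(cpuinfo_text):
    """Return which hardware virtualization extension, if any, the CPU
    flags advertise: Intel VT-x appears as 'vmx', AMD-V as 'svm'."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return None

# On a real Linux host, feed it the actual /proc/cpuinfo:
# with open("/proc/cpuinfo") as f:
#     print(has_hw_virtualization(f.read()))
```

If neither flag shows up (or it is disabled in the BIOS), hypervisors fall back to slower software techniques or refuse to run 64-bit guests at all, which is why these extensions matter so much to the “similarity” of modern hypervisors.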
The big differences between VMware, Hyper-V, KVM, and other solutions like Scale Computing’s HC3 come down to the rest of the solution built around the hypervisor, such as integrated storage management and the built-in or add-on management tools they offer. In some cases, honestly, it comes down to things as simple as whether you love or hate Microsoft, EMC, open source or apple pie. Do you want to purchase pieces of a solution from different companies and integrate them together yourself, or do you prefer a single company to call about an integrated solution?
Some companies primarily run Windows and prefer to get their hypervisor from Microsoft instead of introducing another vendor into the mix along with their application, OS and server vendor(s). Some hear that VMware “created” virtualization and is the 600 lb. gorilla, and find that a comforting thought. Others use and prefer open source where available: they like the assurance of a large community of support, and that they can get access to all features without worrying about being surprised some day when the free version drops a feature or imposes some new limitation to “cripple” it and push people toward paid software and support.
So what if, like most companies, you don’t want all your eggs in one basket and are interested in higher levels of availability or even just reducing recovery time after some type of failure, what options exist there? That will be covered in Part 2.