Hypervisor Converged – Definitions Diverged

I read a recent blog post from VMware’s Chuck Hollis that attempted to define and compare the terms Converged, Hyper-Converged, and Hypervisor-Converged Infrastructure.

As is usually the case, I thought Chuck did a great job cutting through vendor marketing to the key points and differences, beginning with his quote that “Simple is the new killer app in enterprise IT infrastructure.” And we would echo that it is even more true in the mid-sized and smaller IT shops that we focus on.

But eventually vendor biases do surface, especially in an organization like VMware, with its many kinds of partners and a parent company with a hardware-centric legacy. That’s not a bad thing, and I’m sure I will show my vendor bias as well. At Scale Computing we focus exclusively on small to mid-size IT shops and are therefore biased to view the world of technology through their eyes.

As background, we would consider our HC3 products to be “hypervisor-converged infrastructure,” as our storage functionality is built into the hypervisor, with both running in a single OS kernel. I would consider hypervisor-converged to be a specific case, or subset, of hyper-converged, which simply means you have removed an external storage device and moved the physical disks into the compute hosts that also run the hypervisor code. (How the hypervisor gets to those disks is the difference in manageability, performance, and complexity – see Craig Theriac’s blog series, The Infrastructure Convergence Continuum.)

The first statement where I detected this vendor bias from Chuck is the following, made while discussing Hyper-Converged Infrastructure. As background, many (if not most) alternative products take SAN or NAS software and run it as a storage VM, or VSA (virtual storage appliance), on top of the hypervisor. The hypervisor then connects to that VSA using traditional SAN or NAS network storage protocols like iSCSI or NFS.
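To make the architectural difference concrete, here is a rough sketch of the two write paths. All layer names are hypothetical illustrations, not any vendor’s actual API; the point is simply to count the hops a guest write traverses in a VSA design versus a hypervisor-native one:

```python
# Conceptual sketch of the write path in a VSA-based design versus a
# hypervisor-converged one. Layer names are illustrative, not vendor APIs.

def vsa_write_path():
    """Layers a guest VM's write passes through when storage runs in a VSA."""
    return [
        "guest VM virtual disk",
        "hypervisor storage stack",
        "virtual storage appliance (guest VM running SAN/NAS software)",
        "iSCSI/NFS protocol hop back into the hypervisor",
        "hypervisor network and storage stack",
        "physical disk",
    ]

def hypervisor_converged_write_path():
    """Storage is native to the hypervisor kernel: no VSA, no protocol hop."""
    return [
        "guest VM virtual disk",
        "hypervisor-native storage layer",
        "physical disk",
    ]

extra = len(vsa_write_path()) - len(hypervisor_converged_write_path())
print(f"VSA design adds {extra} extra layers to every I/O")  # prints 3
```

Every extra layer in the left-hand path is something to configure, update, and troubleshoot, which is the complexity argument the rest of this post turns on.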

Chuck said:

<<For each, there’s only one primary storage choice: the software stack that is provided by the vendor — no external storage as you’d find using a hypervisor-based approach.>>

Back to eliminating complexity – aren’t both of those things HOW you go about eliminating complexity? The way you simplify is by eliminating separate components and choices, like external storage or even which “software stack” you want to use for storage. Those are unnecessary choices; eliminating them is a positive thing.

However, maybe what Chuck is highlighting in this case is that the very fact you could consider making a choice is evidence that the core storage functionality really isn’t converged into the hypervisor. It’s a bolt-on, replaceable “software stack” with its own management, its use of storage protocols to plug in to the hypervisor, and UI components to plug in to the virtualization management system (even its own operating system to update). To which Chuck points out, <<And there’s only so much you can do with plug-ins :)>> So true! So I think on that we do agree: true storage convergence into the hypervisor IS the ultimate goal, AND most systems claiming to be hyper-converged simply because they move their storage stack into a virtual machine vs. a separate box fall short.

This leads us to hypervisor-converged infrastructure, where the hypervisor actually has full control over storage directly and can determine things like optimal data placement across systems and disks for redundancy and performance, track shared/redundant blocks across multiple VMs and snapshots, etc. The hypervisor is aware of the real spinning disks, not just some “logical disk” (LUN) where those details are hidden behind another abstraction handled by some other system.

When hypervisor-convergence is accomplished, there is no “storage stack” per se to choose; storage functionality becomes native, fully integrated into the hypervisor, and made available directly to virtual machines. There is just one stack to update; there are no plug-ins, storage protocols, or independent management of storage “systems,” and therefore no storage stack decisions to make.
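As a minimal sketch of what that disk-level visibility enables (hypothetical names and logic, not Scale Computing’s actual implementation), consider a placement routine that puts the redundant copies of each block on disks in different nodes – a decision a layer that only sees an opaque LUN cannot make:

```python
# Minimal sketch of redundancy-aware block placement. A storage layer that can
# see the real disks and the nodes they live in can guarantee the copies of a
# block never share a node. Hypothetical illustration only.

def place_block(block_id, disks, copies=2):
    """Choose `copies` disks on distinct nodes for one block."""
    by_node = {}
    for disk in disks:
        by_node.setdefault(disk["node"], []).append(disk)
    nodes = sorted(by_node)
    if len(nodes) < copies:
        raise ValueError("need at least as many nodes as copies")
    # Rotate the starting node with the block id so load spreads evenly.
    chosen = [nodes[(block_id + i) % len(nodes)] for i in range(copies)]
    return [by_node[n][block_id % len(by_node[n])]["name"] for n in chosen]

disks = [
    {"name": "node1-disk0", "node": "node1"},
    {"name": "node1-disk1", "node": "node1"},
    {"name": "node2-disk0", "node": "node2"},
    {"name": "node3-disk0", "node": "node3"},
]

# Every block lands on two disks in two different nodes.
for block_id in range(6):
    print(block_id, place_block(block_id, disks))
```

A VSA bolted on top of the hypervisor has to express redundancy through RAID levels and LUNs configured elsewhere; a hypervisor that owns the disks can make this placement decision per block, per VM.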

On that topic, at first it appears this is another area where we agree:

<<because it can abstract all the underlying resources, the hypervisor can converge the operational management experience in a way that’s almost impossible in other ways … As a result, you’re now operationally managing the hypervisor’s abstractions, and not the underlying subsystems.>>

That sounds great… let the hypervisor access and pool all infrastructure resources together, and users should only need to provision VMs with the resources they need. No underlying subsystems to worry about or manage!

But wait… then Chuck says the following:

<<To be clear, “hypervisor-converged” doesn’t mean all the functionality is provided by the hypervisor alone — just that functionality is abstracted — although software-based functionality will certainly be an option for some.>>

And then he uses the vSphere VVols feature (currently in beta) as an example. VVols does create an additional abstraction for connecting external storage subsystems to a vSphere hypervisor, and it can simplify some aspects of storage provisioning in a VMware environment. However, not only do storage protocols like iSCSI and NFS not go away, VVols does not, to use Chuck’s exact words, relieve you from managing the underlying subsystem. You are still going to go to the storage array management tools when you need to add a new shelf of disks and make them available for VM use, replace a failed disk, change RAID levels or cache settings, or otherwise “manage the subsystem.” You will still have storage networking between your virtualization hosts and your “storage subsystem.” This is not hyper-convergence or hypervisor-convergence. This is the “old way” with a new abstraction layered above the old abstractions… basically a more integrated “plug-in” for external storage subsystems.

If anything, VVols is another example of a loose, VCE-style “management convergence”… to a certain degree, it tries to make the independent storage subsystem more virtualization-aware while allowing the virtualization layer to request basic external storage services in a more automated manner. That is certainly not a bad thing if you want to own and manage an external storage subsystem and storage network, but it is no more converged than putting multiple vendors’ systems in the same rack with some automation and orchestration tools.

Trying to call connecting physical servers running vSphere to physical external storage systems from 3rd-party vendors “converged” because you may have simplified some provisioning tasks is pure marketing, and dangerously close to the kind of marketing designed to create market confusion rather than help customers understand relevant differences.

And about those 3rd party external storage subsystems:

<<all those choices can create complexity, especially with regards to sizing, procurement, integration, support etc. We’ve already seen some “appliance-ization” of hypervisor-converged infrastructure … and I think it’s safe to say we’ll see a lot more before too long.>>

We of course agree! And I’m sure those who witnessed the recent serious issues with queue depth on controllers that were VMware HCL certified and available in many supposedly well-tested “VSAN ready nodes” can relate to the complexity that comes with selecting your own combination of hardware from multiple vendors and adequately testing it under real-world load conditions.

Then the software-company bias seems to come out, and he goes on to say that many might want appliances combined with “roll-your-own” hardware. But let’s think about that: other than as a short-term stop-gap or transition strategy, why would anyone WANT to do both? Given all the great arguments he put forth, it seems everything should ultimately move to highly standardized building-block infrastructure, whether in the form of fully supported single-vendor appliances or, in some large enterprise / web-scale cases, a company’s own “appliance”-like stack that it builds and tests itself. Other than as a short-term transition, there seems no good reason to do both as a strategy. How many companies really want to be integrators, using their own dollars to assemble, test, update, support, and troubleshoot the various hardware and software pieces they “roll on their own”? How many would prefer that someone else do that for them, with a single vendor to call for help?

The answer seems obvious.

“Simple is the new killer app in enterprise IT infrastructure”
