The Origin of Modern Hyperconvergence

Several years ago (in the waning days of the last decade and the early days of this one), we here at Scale decided to revolutionize how datacenters for the SMB and mid-market should function. In the spirit of "perfection is attained not when there is nothing left to add, but when there is nothing left to take away," we set out to take a clean-sheet-of-paper approach to how highly available virtualization SHOULD work. We started by asking a simple question: if you were to design a virtual infrastructure from the ground up, would it look even remotely like the servers-plus-switches-plus-SAN-plus-hypervisor-plus-management beast known as the inverted pyramid of doom? The answer, of course, was no, it would not. In that legacy approach, each piece exists as an answer/band-aid/patch for the problems inherent in the previous iteration of virtualization, resulting in a Rube Goldbergian machine of cost and complexity that took inefficiency to an entirely new level.

There had to be a better way. What if we were to eliminate the SAN entirely, but maintain the flexibility it provided in the first place (enabling high availability)? What if we were to eliminate the management servers entirely by making the servers (or nodes) talk directly to each other? What if we were to base the entire concept around a self-aware, self-healing, self-load-balancing cluster of commodity x64 server nodes? What if we were to take the resource and efficiency gains made in this approach and put them directly into running workloads instead of overhead, thereby significantly improving density while dramatically lowering cost? We sharpened our pencils and got to work. The end result was our HC3 platform.

Now, at this same time, a few other companies were working on things that were superficially similar, but designed to tackle an entirely different problem. These other companies set out to be a "better delivery mechanism for VMware in the large enterprise environment." They did this by taking the SAN component of the legacy solution and virtualizing an instance of the SAN (storage protocols, CPU and RAM resource consumption, and all) as a virtual machine running on each and every server in their environment. The name used for this across the industry was "Server SAN."

Server SAN, while an improvement in some ways over the legacy approach to virtualization, was hardly what we here at Scale had created. What we had done was eliminate all those pieces of overhead. We had actually converged the entire environment by collapsing those old legacy stacks (not virtualizing them and replicating them over and over). "Server SAN" just didn't describe what we do. In an effort to create a proper name for what we had built, we took some of our early HC3 clusters to Arun Taneja and the Taneja Group back in 2011 and walked them through our technology. After many hours in that meeting with their team and ours, the old networking term "hyperconverged" was resurrected specifically to describe Scale's HC3 platform: the actual convergence of all of the stacks (storage, compute, virtualization, orchestration, self-healing, management, et al.) and the elimination of everything in the legacy approach to virtualization that didn't need to be there, rather than the semi-converged approach that the Server SAN vendors had taken.

Like everything else in this business, the term caught fire, and its actual meaning became obscured as a multiplicity of other vendors co-opted it and stretched it to fit their products. I am fairly sure I saw a "hyperconverged" coffee maker the other week. But now you know where the term actually came from, and what it really means, from the people who coined its modern use in the first place.
