All posts by Alan Conboy

5 Things to Think about with Hyperconverged Infrastructure

1. Simplicity

A hyperconverged infrastructure (or HCI) should take no more than 30 minutes to go from out of the box to creating VMs. Likewise, an HCI should not require that the systems admin be a VCP, a CCIE, and a SNIA-certified storage administrator to manage it effectively. Any properly designed HCI should be manageable by an average Windows admin with nearly no additional training. It should be so easy that even a four-year-old could use it…

2. VSA vs. HES

In many cases, rather than handing disk subsystems with SAN flexibility built in at a block level directly to production VMs, you see HCI vendors choosing to simply virtualize a SAN controller into each node in their architectures and pull the legacy SAN and storage protocols up into the servers as a separate VM. This causes several I/O path loops, with IOs having to pass multiple times through VMs in the system and in adjacent systems. This approach of using Storage Controller VMs (sometimes called VSAs or Virtual Storage Appliances) consumes so much CPU and RAM that it redefines inefficient – especially in the mid-market. In one case I can think of, a VSA running on each server (or node) in a vendor’s architecture BEGINS its RAM consumption at 16GB and 8 vCores per node, then grows that based on how much additional feature implementation, IO loading, and maintenance it is having to do. With a different vendor, the VSA reserves around 50GB of RAM per node on their entry-point offering, and over 100GB of RAM per node on their most common platform – a 3-node cluster reserving over 300GB of RAM just for IO path overhead. An average SMB to mid-market customer could run their entire operation in just the CPU and RAM resources these VSAs consume.
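To put those per-node figures in perspective, here is a back-of-the-envelope sketch. The VSA numbers are the ones quoted above; the node sizes (128GB/16 cores and 256GB/16 cores) are assumptions chosen purely for illustration, not any vendor’s actual spec.

```python
# Back-of-the-envelope sketch of the VSA overhead described above.
# The VSA figures come from the post; the node sizes are assumptions
# picked purely for illustration, not any specific vendor's spec sheet.

def vsa_overhead_share(nodes, vsa_ram_gb, vsa_vcores, node_ram_gb, node_cores):
    """Return the fraction of cluster RAM and CPU reserved by the VSAs."""
    ram_share = (nodes * vsa_ram_gb) / (nodes * node_ram_gb)
    cpu_share = (nodes * vsa_vcores) / (nodes * node_cores)
    return ram_share, cpu_share

# Vendor A entry point: 16GB RAM + 8 vCores per node, on an assumed
# 3-node cluster of 128GB / 16-core nodes.
ram, cpu = vsa_overhead_share(3, 16, 8, 128, 16)
print(f"Vendor A entry point: {ram:.0%} of RAM, {cpu:.0%} of cores")

# Vendor B common platform: ~100GB RAM per node on assumed 256GB nodes --
# roughly 300GB of a 768GB cluster gone before the first workload boots.
ram, cpu = vsa_overhead_share(3, 100, 8, 256, 16)
print(f"Vendor B common platform: {ram:.0%} of RAM, {cpu:.0%} of cores")
```

Adjust the assumed node sizes to match your own hardware; the point is simply that the overhead is paid per node, before a single production VM is powered on.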

There is a better alternative: the HES approach. It eliminates the dedicated servers, storage protocol overhead, resource consumption, multi-layer object files, filesystem nesting, and associated gear by moving the hypervisor directly into the OS of a clustered platform as a set of kernel modules, with the block-level storage function residing alongside the kernel in userspace. This completely eliminates the SAN and storage protocols (rather than just virtualizing them and replicating copies of them over and over on each node in the platform), simplifying the architecture dramatically while regaining the efficiency originally promised by virtualization.

3. Stack Owners vs. Stack Dependents

Any proper HCI should not be stack dependent on another company for its code. To be efficient, self-aware, self-healing, and self-load-balancing, the architecture needs to be implemented holistically rather than pieced together from different bits from different vendors. By being a stack owner, an HCI vendor is able to do things that weren’t feasible or realistic with legacy virtualization approaches: hot and rolling firmware updates at every level, 100% tested rates on firmware versus customer configurations, 100% backwards and forwards compatibility between different hardware platforms – the list goes on for quite a while.

4. Using Flash Properly Instead of as a Buffer

Several HCI vendors are using SSD and flash only (or almost only) as a cache buffer to hide the very slow IO paths they have chosen to build based on VSAs and erasure coding (formerly known as software RAID 5/6/X) between virtual machines and their underlying disks – creating what amounts to a Rube Goldberg machine for an IO path, one that consumes 4 to 10 disk IOs or more for every IO the VM needs done. The alternative is to use flash and SSD as proper tiers, with an AI-based heat mapping and QoS-like mechanism in place to automatically put the right workloads in the right place at the right time, with the flexibility to move those workloads fluidly between tiers and dynamically allocate flash on the fly to workloads that demand it (up to putting the entire workload in flash). Any architecture that REQUIRES the use of flash to function at an acceptable speed has clearly not been architected efficiently. If turning off the flash layer results in IO speeds best described as glacial, then the vendor is hardly being efficient in their use of flash or solid state. Flash is not meant to be the curtain that hides the efficiency issues of the solution.
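For contrast, here is a minimal, hedged sketch of what tier placement driven by a heat map can look like in principle. It is a deliberately simplified stand-in for the AI-based heat mapping described above: the decay factor, threshold, and class names are invented for illustration and do not represent any vendor’s actual algorithm.

```python
# Illustrative sketch of heat-based tiering (as opposed to using flash
# purely as a cache). Block "heat" grows with access and decays over time;
# the hottest blocks live on flash, the rest stay on spinning disk.
# The decay factor and promotion threshold are invented for illustration.

DECAY = 0.9          # per-interval decay so stale blocks cool off
PROMOTE_AT = 5.0     # heat score above which a block is placed on flash

class TieredBlock:
    def __init__(self, block_id):
        self.block_id = block_id
        self.heat = 0.0
        self.tier = "hdd"

    def record_access(self, weight=1.0):
        # Reads and writes both warm the block; heavier IO warms it faster.
        self.heat += weight

    def rebalance(self):
        # Called periodically: cool everything, then place by heat.
        self.heat *= DECAY
        self.tier = "flash" if self.heat >= PROMOTE_AT else "hdd"

blocks = [TieredBlock(i) for i in range(4)]
for _ in range(10):
    blocks[0].record_access()      # hot block: touched every interval
    for b in blocks:
        b.rebalance()

print([(b.block_id, b.tier, round(b.heat, 1)) for b in blocks])
# Only block 0 earns a spot on flash; cold blocks never leave spinning disk.
```

The design point is that flash holds data because it is demonstrably hot, not because the underlying IO path would collapse without a cache in front of it.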

5. Future Proofing Against the “Refresh All, Every 5 Years” Spiral

Proper HCI implements self-aware, bi-directional live migration across dissimilar hardware. This means the administrator is not boat-anchored to the technology of a single “point in time of acquisition”; rather, they can avoid overbuying on the front end and take full advantage of Moore’s law and technical advances as they arrive and as the need arises. As lower-latency, higher-performance technology comes to the masses, attaching it to an efficient software stack is crucial to eliminating the need for the “throw away and start over” refresh cycle every few years.

*Bonus* 6. Price 

Hyperconvergence shouldn’t come at a 1600+% price premium over the cost of the hardware it runs on. Hyperconvergence should be affordable – more affordable than the legacy approach was, and far more affordable than the VSA-based approach is.

These are just a few points to keep in mind as you investigate which hyperconverged platform is right for your needs.

The Origin of Modern Hyperconvergence

Several years ago (in the waning days of the last decade and early days of this one), we here at Scale decided to revolutionize how datacenters for the SMB and Mid Market should function. In the spirit of “perfection is not attained when there is nothing left to add, but rather when there is nothing left to remove”, we set out to take a clean sheet of paper approach to how highly available virtualization SHOULD work. We started by asking a simple question – If you were to design, from the ground up, a virtual infrastructure, would it look even remotely like the servers plus switches plus SAN plus hypervisor plus management beast known as the inverted pyramid of doom? The answer, of course, was no, it would not. In that legacy approach, each piece exists as an answer/band-aid/patch to the problems inherent in the previous iteration of virtualization, resulting in a Rube-Goldbergian machine of cost and complexity that took inefficiency to an entirely new level.

There had to be a better way. What if we were to eliminate the SAN entirely, but maintain the flexibility it provided in the first place (enabling high availability)? What if we were to eliminate the management servers entirely by making the servers (or nodes) talk directly to each other? What if we were to base the entire concept around a self-aware, self-healing, self-load-balancing cluster of commodity x64 server nodes? What if we were to take the resource and efficiency gains made in this approach and put them directly into running workloads instead of overhead, thereby significantly improving density while lowering cost dramatically? We sharpened our pencils and got to work. The end result was our HC3 platform.

Now, at this same time, a few other companies were working on things that were superficially similar, but designed to tackle an entirely different problem. These other companies set out to be a “better delivery mechanism for VMWare in the large enterprise environment”. They did this by taking the legacy solution SAN component and virtualizing an instance of SAN (storage protocols, CPU and RAM resource consumption and all) as a virtual machine running on each and every server in their environment. The name they used for this across the industry was “Server SAN”.

Server SAN, while an improvement in some ways over the legacy approach to virtualization, was hardly what we here at Scale had created. What we had done was eliminate all of those pieces of overhead. We had actually converged the entire environment by collapsing those old legacy stacks (not virtualizing them and replicating them over and over). Server SAN just didn’t describe what we do. In an effort to create a proper name for what we had created, we took some of our early HC3 clusters to Arun Taneja and the Taneja Group back in 2011 and walked them through our technology. After many hours in that meeting with their team and ours, the old networking term “Hyperconverged” was resurrected specifically to describe Scale’s HC3 platform – the actual convergence of all of the stacks (storage, compute, virtualization, orchestration, self-healing, management, et al.) and the elimination of everything that didn’t need to be there in the legacy approach to virtualization, rather than the semi-converged approach that the Server SAN vendors had taken.

Like everything else in this business, the term caught fire, and its actual meaning became obscured through its being co-opted by a multiplicity of other vendors stretching it to fit their products – I am fairly sure I saw a “hyperconverged” coffee maker the other week. But now you know where the term actually came from and what it really means, from the people that coined its modern use in the first place.

The VSA is the Ugly Result of Legacy Vendor Lock-Out

VMWare and Hyper-V with the traditional Servers+Switches+SAN architecture – widely adopted by the enterprise and the large mid-market – works. It works relatively well, but it is complex (many moving parts, usually from different vendors), necessitates multiple layers of management (server, switch, SAN, hypervisor), and requires the use of storage protocols to be functional at all. Historically speaking, this has led either to the requirement of many people from several different IT disciplines to effectively virtualize and manage a VMWare/Hyper-V based environment, or to smaller companies taking a pass on virtualization as the soft and hard costs associated with it put HA virtualization out of reach.

[Figure: the legacy Servers+Switches+SAN architecture]

With the advent of hyperconvergence in the modern datacenter, HCI vendors had a limited set of options when it came to the shared storage part of the equation. Lacking access to the VMkernel and NTOS kernel, they could either virtualize the entire SAN and run instances of it as a VM on each node in the HCI architecture (horribly inefficient), or move to hypervisors that aren’t from VMWare or Microsoft. The first choice is what most took, even though it carries a very high cost in resource efficiency and IO path complexity, and nearly doubles the hardware requirements of the architecture that runs it. They did this for the sole reason that it was the only way to keep building their solutions on the legacy vendors, given that lock-out and lack of access. Likewise, they found this approach (known as VSA, or Virtual SAN Appliance) easier than tackling the truly difficult job of building an entire architecture from the ground up, clean-sheet style.

The VSA approach – virtualizing the SAN and its controllers – is also known as pulling the SAN into the servers. The VSA, or Virtual SAN Appliance, approach was developed to move the SAN up into the host servers through the use of a virtual machine on each box. This did in fact simplify things like implementation and management by eliminating the separate physical SAN (but not its resource requirements, storage protocols, or overhead – in actuality, it reduplicates those bits of overhead on every node, turning one SAN into 3 or 4 or more). However, it didn’t do much to simplify the data path. In fact, quite the opposite. It complicated the path to disk by turning the IO path from:

application->RAM->disk

into:

application->RAM->hypervisor->RAM->SAN controller VM->RAM-> hypervisor->RAM->write-cache SSD->erasure code(SW R5/6)->disk->network to next node->RAM->hypervisor->RAM->SAN controller VM->RAM->hypervisor->RAM->write-cache SSD->erasure code(SW R5/6)->disk.

This approach uses so much resource that one could run an entire SMB to mid-market datacenter on just the CPU and RAM being allocated to these VSAs.
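One quick way to make that difference concrete is simply to tally the stages in the two paths written out above. The sketch below is just that tally – not a performance model – with the stage names taken from the paths as quoted.

```python
# A simple way to see the difference: count the stages each guest write
# traverses in the two paths described above. This is only a tally of the
# hops listed in the post, not a performance model.

vsa_write_path = [
    "application", "RAM", "hypervisor", "RAM", "SAN controller VM", "RAM",
    "hypervisor", "RAM", "write-cache SSD", "erasure code (SW R5/6)", "disk",
    "network to next node", "RAM", "hypervisor", "RAM", "SAN controller VM",
    "RAM", "hypervisor", "RAM", "write-cache SSD", "erasure code (SW R5/6)",
    "disk",
]

hes_write_path = [
    "application", "RAM", "disk", "backplane", "disk",
]

print(f"VSA path: {len(vsa_write_path)} stages, "
      f"{vsa_write_path.count('SAN controller VM')} passes through a storage VM")
print(f"HES path: {len(hes_write_path)} stages, 0 passes through a storage VM")
```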

[Figure: the VSA architecture]

This “stack dependent” approach did, in fact, speed up the time-to-market equation for the HCI vendors that implement it, but due to the extra hardware requirements, extra burden of the IO path, and use of SSD/flash primarily as a caching mechanism for the now tortured IO path in use, this approach still brought a solution in at a price point and complexity level out of reach of the modern SMB.

HCI done the right way – HES

The right way to do an HCI architecture is to take the exact opposite path from all of the VSA-based vendors. From a design perspective, the goal of eliminating the dedicated servers, storage protocol overhead, resources consumed, and associated gear is met by moving the hypervisor directly into the OS of a clustered platform that runs storage directly in userspace adjacent to the kernel (known as HES, or in-kernel). This leverages direct I/O, thereby simplifying the architecture dramatically while regaining the efficiency originally promised by virtualization.

[Figure: the HES storage architecture]

This approach turns the IO path back into:

application -> RAM -> disk -> backplane -> disk

This complete stack-owner approach, in addition to regaining the efficiency promised by HCI, allows features and functionality that historically had to be provided by third parties in the legacy and VSA approaches to be built directly into the platform. That allows true single-vendor solutions to be implemented, radically simplifying the SMB/SME data center at all levels – lower cost of acquisition, lower TCO. It makes HCI affordable and approachable for the SMB and mid-market. It eliminates the extra hardware requirements, the overhead of the SAN, and the overhead of storage protocols and re-serialization of IO. It returns efficiency to the datacenter.

When the IO paths are compared side by side, the differences in overhead and efficiency become obvious, and the penalties and pain caused by legacy vendor lock-in really start to stand out: VSA-based approaches (in a basic 3-node implementation) use as much as 24 vCores and up to 300GB of RAM (depending on the vendor) just to power the VSAs and boot themselves, versus HES using a fraction of a core per node and 6GB of RAM total. Efficiency matters.
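As a rough way to translate those figures into workload terms, here is a hedged back-of-the-envelope sketch; the 2 vCPU / 8GB “typical SMB VM” profile is an assumption picked for illustration, not a measured average.

```python
# Rough arithmetic on the 3-node figures cited above, expressed as "how many
# typical VMs' worth of resources the overhead consumes". The 2 vCPU / 8GB
# VM profile is an assumption chosen purely for illustration.

def vms_displaced(overhead_vcores, overhead_ram_gb, vm_vcpu=2, vm_ram_gb=8):
    """Number of typical VMs that would fit in the cited overhead."""
    return min(overhead_vcores // vm_vcpu, overhead_ram_gb // vm_ram_gb)

print("VSA overhead (24 vCores, 300GB RAM):", vms_displaced(24, 300), "typical VMs' worth")
print("HES overhead (~1 core, 6GB RAM):    ", vms_displaced(1, 6), "typical VMs' worth")
```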

[Figure: IO path comparison]

VMWare has passed “Peak vSphere” – especially for the SMB

VMWare’s 2015 earnings call from late January of 2016 was enlightening, to say the least. The main takeaway from it for me was VMWare’s admission that the traditional server virtualization platform – vSphere based on servers plus switches plus SAN – has passed its peak and is significantly in decline from a new-sales and license-renewal perspective. Quoting Pat Gelsinger (CEO, VMWare): “it’s been a big strategy of ours to sell no more naked vSphere.”

Here are a few quick quotes from that investor call that provide significant insight:

“we are seeing strong growth across our full portfolio of emerging products. We’ve recognized that our blockbuster compute products are reaching maturity and will represent a decreasing portion of our business going forward”

Translation – We have passed the peak of the legacy vSphere-based approach, and our newest bits – vSAN, NSX, and AirWatch – are what is providing our growth.

“all of the top ten deals contained EUC, nine of the top ten deals contained NSX and six of our top ten deals contained vSAN”

Translation – The hyperconverged approach of eliminating the SAN by moving its functions into the kernel, along with network functions, etc., is our path forward.

“Compute license bookings declined in the low-double digits year-over-year and management license bookings grew in the low-teens year-over-year in Q4. Our newer high growth SDDC product, which includes NSX and vSAN grew license bookings robustly. “

Translation – Due to companies discovering alternate paths to modern compute virtualization, our vSphere license bookings are down by double digits. Only those offerings that are closer to a hyperconverged approach are showing significant growth.

“for a number of years we’ve been expanding our product portfolio outside of compute, and it’s been a big strategy of ours to sell no more naked vSphere as we got into these new products and we’ve talked about a transition that would happen. It wasn’t a matter of, it was a matter of it, it was really about when we’d start to see our newer products offset some of the decline we saw in our compute business.”

“the ongoing deceleration of standalone vSphere was a little bit more pronounced in Q4”

Translation – The servers+switches+SAN+hypervisor market approach is dying. The standalone vSphere market is dying; we have to move to the new hyperconverged future and leave the past behind.

It is rare to see such a frank set of comments in an earnings call. It is also rare for a company to come right out and say that it is actively moving away from what has been its flagship product in such an abrupt manner. For the SMB, the lesson is this: the old infrastructure approach will not survive moving forward, and it is time to modernize into a hyperconverged infrastructure.

http://seekingalpha.com/article/3836736-vmware-vmw-ceo-pat-gelsinger-q4-2015-results-earnings-call-transcript?page=3

 

Regaining Efficiency in the modern datacenter

At the start of the virtual revolution, efficiency was the essential driver that made admins revisit how datacenters were architected. The ability to make a business run with two or three servers instead of ten or twenty was something of a sea change. The ability to spin up new workloads in minutes instead of days was both profound and immediately impactful, showing how business could be done moving into the 21st century. The efficiency gained by running many workloads in parallel on the same resources (rather than wasting CPU, disk, and RAM that sat idle on single-application server boxes) brought a fundamental change to the data center. Suddenly, CapEx and OpEx could be tamed in the client/server x86 world without resorting to old-school big iron (mainframes). It was an organic change to how x86 servers could be implemented.

Enter the Virtual Server

This was all very good stuff, but it brought with it its own suite of new problems. One of them (but a biggie): when a physical server died, it no longer took out just a database or just a file server, but several workloads at once. Exchange, SQL, file and print, and an app server or two likely went with it (and stayed gone until repairs to the server could be effected). All of this was caused by using the disks internal to the server to house all of the VMs’ virtual hard drives. A solution to this need for shareable block-level storage had to be found before the next major steps in the virtual revolution could take place.

Enter the SAN

The SAN brought flexibility and portability to the virtual infrastructure. By moving the virtual hard drives off of the internal disks and out to a network-shared, RAID-based enclosure, workloads could be quickly restarted in the face of hardware faults. However, it did this at a cost – a cost in complexity and overhead. Introducing the network as a carrier for block-level disk IO to external disk enclosures brought with it the overhead of TCP (or FCP), the overhead of new storage protocols (iSCSI and FC) or new uses for old storage protocols (NFS), the addition of new moving parts such as cabling and dedicated switching, and the complexity of networking and security needed to support running your entire IO path over a network, all layered on top of levels of RAID penalty. It was an organic solution, and one that lost efficiency, but it covered the basics of restoring availability when a box died.

Enter the Killer App

This series of organic changes and solutions to the problems presented also enabled what could arguably be called the virtualization “killer app” – live migration of a running virtual server between physical boxes, a capability historically only provided by mainframes. Killer app indeed – it drove the virtual revolution out of the shadows and to the forefront of datacenter implementations for the enterprise.

The problem with this servers plus switches plus SAN plus storage protocols plus virtualization software approach lay in the tortured, organically grown (rather than purpose-built) architecture used to get there. While it worked, the cost and complexity of the solution left it unapproachable for a majority of the SMB market. The benefits of live migration and fault tolerance were huge, but the “Rube Goldberg-ian” machine used to get there redefined complexity and cost in the data center.

Enter the Clean Sheet Approach

Clearly, it had become time to rethink the approach to virtualization – time to keep the goals of the approach but eliminate the inefficiencies introduced through organic problem solving, by taking a “Clean Sheet” approach to how high availability could be obtained without the losses to complexity, cost, and advanced training that made the now “legacy” virtualization approach unreachable for many in the marketplace.

Hyperconvergence

Two different schools of thought emerged on how best to simplify the architecture while maintaining the benefits of virtualization.

The VSA/Controller VM approach – simply virtualize the SAN and its controllers, also known as pulling the SAN into the servers. The VSA, or Virtual SAN Appliance, approach was developed to move the SAN up into the host servers through the use of a virtual machine. This did in fact simplify things like implementation and management by eliminating the separate SAN. However, it didn’t do much to simplify the data path or regain efficiency. The VSA consumed significant host resources (CPU and RAM), still used storage protocols, and complicated the path to disk by turning the IO path from application->RAM->disk into application->RAM->hypervisor->RAM->SAN controller VM->RAM->hypervisor->RAM->write-cache SSD->disk. Often, this approach uses so much resource that one could run an entire SMB datacenter on just the CPU and RAM being allocated to these VSAs. For the SMB, this approach tends to lack the efficiency that the sub-100-VM data center really needs.

[Figure: the VSA data path]

The HES (Hypervisor Embedded Storage) clustered approach – eliminate the dedicated servers, storage protocol overhead, resource consumption, multi-layer object files and filesystem nesting, and associated gear by moving the hypervisor directly into the OS of a clustered storage platform as a set of kernel modules, thereby simplifying the architecture dramatically while regaining the efficiency originally promised by virtualization.

[Figure: the native (HES) data path]

It is up to you to decide which approach is the most efficient for your individual needs based on the requirements of your data center. While the first approach can bring some features that might make sense to the admin used to dealing with an enterprise-sized fleet (read: tons) of servers, switches, and multiple big-iron SAN implementations, it does so at a resource cost that just doesn’t make sense to the efficiency-minded SMB and mid-market systems administrator, who has far too many other things to do to worry about complexity in the architecture.

Welcome to the In(kernel) Club

Back in 2008, we here at Scale started cutting the path to build a better virtualization and storage infrastructure for the SMB and mid-market. We decided almost immediately that taking the VSA style approach of virtualizing a SAN as a VM was a shortcut at best, and a hack at worst – certainly  not the most efficient use of compute resources, so we decided to build the HC3 foundation the hard but right way – by moving everything in-kernel.

As the years have rolled forward, and other HCI vendors have come and gone, we now find VMWare with VxRail taking the same path that we blazed with in-kernel storage for hyperconverged infrastructure. While ours is different in execution (HES in userspace), I would like to congratulate the VMWare team on their choice to join us in affirming that in-kernel is the right way to do storage for the HCI space and, with a combined customer base of around 5,000 in-kernel customers, welcome them to the In(kernel) Club.

 

Defining Efficiency in the modern datacenter

The business dictionary defines efficiency as the comparison of what is actually produced or performed with what can be achieved with the same consumption of resources (money, time, labor, design, etc.). Example: “The designers needed to revise the product specifications as the complexity of its parts reduced the efficiency of the product.”

In technology today we constantly hear efficiency used as a marketing term by folks who have never actually looked under the hood of the technology in question at how the architecture is actually designed and how it actually works. Efficiency is constantly used as a buzzword without real evaluation of its meaning in the context of how the product in question really does what it was intended to do, compared to the alternatives in the market – way too many vendors saying “trust us, ours is the most efficient…”

Sadly, a quote from Paul Graham all too often comes to mind when dealing with vendors and their architectural choices in new and rapidly growing market segments such as Hyperconvergence:

“In a rapidly growing market, you don’t worry too much about efficiency. It’s more important to grow fast. If there’s some mundane problem getting in your way, and there’s a simple solution that’s somewhat expensive, just take it and get on with more important things.”

Understand that when the technology at hand is a Hyperconverged infrastructure (HCI), the importance of this term “Efficiency” cannot be overstated. Bear in mind that what a hyperconverged vendor (or cloud vendor, or converged vendor) is actually doing is taking responsibility for all of the architectural decisions and choices that their customers would have made for themselves in the past.  All of the decisions around which parts to use in the solution, how many parts there should be, how best to utilize them, what capabilities the end product should have, and what the end user should be able to do (and what – in their opinion- they don’t need to be able to do) with the solution.

Likewise, these choices made in the different approaches you see in the HCI market today can have profound effects on the efficiency (and resulting cost) of the end product. All too often, shortcuts are taken that radically decrease efficiency in the name of expediency (See the Paul Graham quote above). Like cloud and flash before it, the HCI space is seen as a ‘land grab’ by several vendors, and getting to market trumps how they get there. Those fundamental decisions do not have priority (with some vendors) over getting their sales and marketing machines moving.

The IO Path

One great example of technology moving forward is SSD and flash technologies. Used properly, they can radically improve performance and reduce power consumption. However, several HCI vendors are using SSD and flash as an essential buffer to hide very slow IO paths used between virtual machines, VSAs (just another VM), and their underlying disks – creating what amounts to a Rube Goldberg machine for an IO path, one that consumes 4 to 10 disk IOs or more for every IO the VM needs done. The alternative is to use flash and SSD as proper tiers, with a QoS-like mechanism in place to automatically put the right workloads in the right place at the right time, with the flexibility to move those workloads fluidly between tiers. Any architecture that REQUIRES the use of flash to function at an acceptable speed has clearly not been architected efficiently. If turning off the flash layer results in IO speeds best described as glacial, then the vendor is hardly being efficient in their use of flash or solid state. Flash is not meant to be the curtain that hides the efficiency issues of the solution.

Disk Controller Approaches

In other cases, rather than handing disk subsystems with SAN flexibility built in at a block level directly to production VMs, you see vendors choosing to simply virtualize a SAN controller and pull the legacy SAN and storage protocols up into the servers as a separate VM, causing several I/O path loops, with IOs having to pass multiple times through VMs in the system and in adjacent systems, maintaining and magnifying the overhead of storage protocols (and their foibles). Likewise, this approach of using Storage Controller VMs (sometimes called VSAs or Virtual Storage Appliances) often consumes significant CPU and RAM that could otherwise power additional virtual machines in the architecture. Many vendors have done this due to the forced lock-in and lack of flexibility caused by legacy virtualization approaches; essentially, the VSA is a shortcut solution to the ‘legacy of lock-in’ problem. In one case I can think of, a VSA running on each server (or node) in a vendor’s architecture BEGINS its RAM consumption at 16GB and 4 vCores per node, then grows that based on how much additional feature implementation, IO loading, and maintenance it is having to do (see above on the IO path). With a different vendor, the VSA reserves over 43GB per node on their entry-point offering, and over 100GB of RAM per node on their most common platform – a 3-node cluster reserving 300GB of RAM just for IO path overhead. An average SMB customer could run their entire operation in just the CPU and RAM resources these VSAs consume. While this approach may offer benefits to the large enterprise, which may not miss the consumed resources because the features offered in the VSA outweigh the loss, that is very much not the case for the average SMB customer. Again, not a paragon of the efficiency required in today’s SMB and mid-market IT environments.

This discussion on efficiency can be extended through every single part of a vendor’s architectural and business choices – from hardware, hypervisor, management, disk, to the way they run their business. All of these choices result in dramatic impacts on the resultant capabilities, performance, cost and usability of the final product. As technology consumers in the modern SMB datacenter, we need to look beyond the marketeering to truly vet the choices being made for us.

The really disheartening part is when the vendors in question choose to either hide their choices, bury them in overly technical manuals, or claim that a given ‘new’ technical approach is a panacea for every use case – without mentioning that the ‘new’ approach is really just a marketing-renamed (read: buzzword), slightly modified version of what has been around for decades and is already widely known to be appropriate only for very specific uses. Even worse, they simply come out of the gate with the arrogance of “Trust us, our choices are best.”

At the end of the day, it is always best to think of efficiency first, in all the areas it touches within your environment and the proposed architecture. Think of it for what it really is, how it impacts use and ask your vendor the specifics. You may well be surprised by the answers you get to the real questions….

 

The Rush to KVM in Hyperconvergence

a.k.a. AMC and building your car with your competition’s parts.

As an observation of the current move to KVM in the hyperconvergence space by some vendors, I would put forward an analogy to the American Motors Corporation (AMC) of the 1960s and 1970s. AMC started life as an innovative independent company, but by the mid-1960s had morphed into essentially a wrapper – an improved delivery mechanism – for Chevrolet and Chrysler drivetrain parts: better-looking and better-executed cars than the competition. As the market for their products began to prove itself, their main drivetrain suppliers (Chevy and Chrysler) began to take notice, and they started slowing down and eventually closing the “spigot” for those core pieces around which AMC built their product, while putting several of the ideas AMC had created into their own products. This left AMC in the unenviable (and analogously familiar) position of having empty shells of cars and needing, in a hurry, to re-engineer, design, and build their own. While they were able to come up with some fairly decent pieces, the damage was done and it did not end well for AMC – you haven’t seen many Javelins or AMXs lately. This should start sounding very familiar…

Long story made short, it is always a really bad idea to build your entire business around what can, and inevitably will, become a competing product. The rush to KVM, when viewed through this lens, becomes all too clear. It recasts many vSphere-centric hyperconvergence companies as essentially reboots with now weeks-old version 1.5 products.

WATCH: Easy VM Creation on HC3

Ending the insanity with “The Pencil”

A while back, I was asked to have a discussion with someone who was looking into virtualization and SAN storage to build out a traditional virtual environment. The customer – an EqualLogic fan – had already settled on a VMWare and EqualLogic solution, but was still willing to talk. He had sent us the following excerpt from an email:

“I will be looking feature-wise at snapshot features and integration with Windows and VMWare. I’m sure Jack filled you in so there isn’t a need to belabor the point but I was extremely impressed with features from other vendors and honestly disappointed by the snapshot features on the Scale platform. But I will happily give you another chance to show me something I may have missed.”

This customer’s take was based on demonstrations from LeftHand and his own experience on his EqualLogic gear, which had shown him outstanding integration with Windows and VSS for snapshots as well as tight ESX storage API integration – creating snaps that are application-aware and consistent, creating and managing iSCSI LUNs through vSphere, those sorts of things. He had already mentally accepted that the complexity of such a solution was a foregone conclusion and a necessary evil to realize the benefits of virtualization.

It was time to show him that there is a better way…
