
The Rush to KVM in Hyperconvergence – a.k.a. AMC and building your car with your competition’s parts

As an observation on the current move to KVM in the hyperconvergence space by some vendors, I would put forward an analogy to the American Motors Corporation (AMC) of the 1960s and 1970s. AMC started life as an innovative independent company, but by the mid-1960s had morphed into essentially a wrapper – a better-looking, better-executed delivery mechanism – for Chevrolet and Chrysler drivetrain parts. As the market for their products began to prove itself, their main drivetrain suppliers (Chevy and Chrysler) took notice. They began slowing down, and eventually closed, the “spigot” for the core pieces around which AMC built their product, while folding several of the ideas AMC had created into their own lines. This left AMC in the unenviable (and analogously familiar) position of having empty shells of cars and needing, in a hurry, to re-engineer, design, and build their own. While they came up with some fairly decent pieces, the damage was done and it did not end well for AMC – you haven’t seen many Javelins or AMXes lately. This should start sounding very familiar…

Long story made short: it is always a really bad idea to build your entire business around what can, and inevitably will, become a competing product. The rush to KVM, when viewed through this lens, becomes all too clear. It recasts many vSphere-centric hyperconvergence companies as essentially reboots, with now weeks-old version 1.5 products.

WATCH: Easy VM Creation on HC3

The King is Dead. Long Live the King!

With a title like Death by 1,000 cuts: Mainstream storage array suppliers are bleeding, I couldn’t help but read Chris Mellor’s article on the decline of traditional storage arrays.  It starts off just as strongly:

Great beasts can be killed by a 1,000 cuts, bleeding to death from the myriad slashes in their bodies – none of which, on their own, is a killer. And this, it seems, is the way things are going for big-brand storage arrays, as upstarts slice away at the market…

And his reasons why are spot-on with what we have seen in our target customer segment for HC3.

the classic storage array was under attack because it was becoming too limiting, complex and expensive for more and more use-cases.

Looking at our own use-case for HC3, storage array adoption in our target segment (the SMB) rose with the demand for virtualization, providing shared storage for things like live migration and failover of VMs.  The arrays were a necessary evil: the way to ensure that critical workloads weren’t going to go down for days or even weeks in the event of a hardware failure. Continue reading

The VMware alternative – HC3

Earlier this year, Alex Barrett at TechTarget wrote an article about hyperconvergence, Hyperconvergence tackles storage and server strain. In it, she noted that hyperconvergence was a natural step beyond the converged infrastructure offerings that existed previously. Hyperconverged offerings are more than just a bundle of prepackaged components, bringing in additional technology to integrate the system.

But Alex never discusses just how deep this technology and integration have to go to truly benefit IT admins. Having now deployed more than 900 HC3 clusters at customer sites, we’ve learned a number of things over the last two years.

Most importantly: Owning the stack matters. Continue reading

Hypervisor Converged – Definitions Diverged

I read a recent blog post from VMware’s Chuck Hollis that attempted to define and compare the terms Converged Infrastructure vs. Hyper-Converged and Hypervisor-Converged Infrastructure.

As is usually the case, I thought Chuck did a great job cutting through vendor marketing to the key points and differences, beginning with his quote that “Simple is the new killer app in enterprise IT infrastructure.”  And we would echo that it is even more true in the mid-sized and smaller IT shops that we focus on.

But eventually vendor biases do surface, especially in an organization like VMware, with many kinds of partners and a parent company with a hardware-centric legacy. That’s not a bad thing, and I’m sure I will show my vendor bias as well.  At Scale Computing we focus exclusively on small to mid-size IT shops and are therefore biased to view the world of technology through their eyes.

As background, we would consider our HC3 products to be “hypervisor-converged infrastructure,” as our storage functionality is built into the hypervisor, with both running in a single OS kernel. I would consider hypervisor-converged to be a specific case, or subset, of hyper-converged, which simply means you have removed the external storage device and moved the physical disks into the compute hosts that also run the hypervisor code. (How the hypervisor gets to those disks is the difference in manageability, performance, and complexity – see Craig Theriac’s blog series, The Infrastructure Convergence Continuum.) Continue reading

What is Hypervisor Convergence: The Infrastructure Convergence Continuum Blog Series – Reference Architecture (Part 2 of 4)

Infrastructure Convergence Continuum

In our last post in the Infrastructure Convergence Continuum series, we focused on the Build Your Own / DIY architecture for virtualization infrastructure.  That implementation has architectural limitations, which we addressed in the first post (“the inverted pyramid of doom”), and they are worth reviewing as a baseline for today’s topic.  Why? Spoiler alert: the Reference Architecture and Converged Architecture we’ll be covering today share that same underlying architecture. Continue reading

The Next-Generation Server Room

There was a recent article on Network Computing regarding the Next Generation Data Center that got me thinking about our SMB target customer and the next-generation server room.  Both the enterprise and the SMB face the influx of traffic growth described in the article (clearly at different levels, but an influx nonetheless).  So how will the SMB cope?  How will an IT organization with limited time and money react?  By focusing on simplicity in the infrastructure.

Elimination of Legacy Storage Protocols through Hypervisor Convergence

There is an ongoing trend to virtualize workloads in the SMB, which traditionally meant adding a SAN or a NAS to provide shared storage for high availability.  With the introduction of hypervisor-converged architectures through products like Scale’s HC3, that requirement no longer exists.  In this model, end users get the benefits of high availability without the complexity that comes with legacy storage protocols like iSCSI or NFS.  Not only does this reduce the management overhead of shared storage, it also simplifies the vendor support model dramatically: in the event of an issue, a single vendor can be called for support, with no ability to place the blame on another component in the stack.
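To put that protocol complexity in concrete terms, here is roughly what presenting a single iSCSI LUN to one Linux host involves. The portal address and IQN below are made-up placeholders, and the exact steps vary by distribution and array vendor; every one of them (and its NFS or Fibre Channel equivalent) disappears in the hypervisor-converged model:

```shell
# Hypothetical portal address and IQN, for illustration only.
# 1. Discover targets exposed by the SAN
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# 2. Log in to the discovered target
iscsiadm -m node -T iqn.2014-01.com.example:vol1 -p 192.168.1.50 --login

# 3. Enable multipathing so redundant paths to the array are used
mpathconf --enable
systemctl restart multipathd

# 4. Put a filesystem on the LUN (or hand the raw device to the hypervisor)
mkfs.ext4 /dev/mapper/mpatha
```

And that is one host and one LUN, before any tuning, zoning, or the matching configuration on the array side.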

Simplicity in Scaling

Moore’s Law continues to hold, as better, faster, and cheaper equipment becomes available year after year.  By implementing a scale-out architecture, IT organizations can take advantage of this: buy what is needed today, knowing that resources can be added at tomorrow’s prices when the need arises.  The ability to mix and match hardware types in a hypervisor-converged model also gives users granularity in their scaling, matching resources to the workloads of the moment (such as adding a storage-only HC3 node to a compute cluster to scale out only the storage resources). Continue reading

The vCenter Paradox

A few days ago, while preparing to update an HC3 cluster, I had a combination epiphany and flashback. I was thinking about a previous life as a VMware administrator, and the memories of the long nights, complexities, and gray hair that came about whenever an ESX update was required. The epiphany I had about updating a Scale cluster was simple: I don’t have to worry about ensuring that my vCenter host is updated and able to cope with the changes as well. Let’s face it, it’s not impossible to manage an ESX cluster without vCenter, but without it the principal beauty of ESX, or any clustered virtualization solution, is greatly diminished.

If you think about it, any OS, whether it’s on your phone, your DVR, or your infrastructure servers, needs to be updated. Maybe it’s a security vulnerability, maybe it’s a new feature, or maybe it’s just a new UI, but updates are a fact of life in IT. Windows updates, prior to my tenure managing an ESX environment, were the bane of my existence. Part of coping with those updates is dealing with the likelihood that a reboot, restart, or power cycle is required; some type of outage is usually unavoidable.

When you think about vCenter, the requirements for managing your manager of managers (yes, I think I said that right) become even more paramount. Not just because you have to update your virtualization host in the proper order, should you be virtualizing your vCenter Server VM, so that your vCenter instance stays safe, but because you have to deal with everything else vCenter depends on. Some type of database is required, at a fairly specific patch level or version. Beyond that, other basic necessities, including DNS, time synchronization, and AD (if you’re using SSO), need to be cared for and properly handled during an update.

Just as much planning, care, and feeding goes into managing your vCenter Server(s) as into managing your ESX cluster. And you haven’t even touched the rest of the environment, the storage, or the network.

That’s just to update it or manage it; I didn’t even reach back into my memory to think about the installation…

How is this making my life easy again?

So, after shaking off the cold sweat of that nightmare, I was staring at a browser: a single interface without a complicated set of APIs and a backing database that I had to keep up and running.

Scale’s UI: nothing to install or set up, no database to configure, no client software. It is built-in, and accessible anywhere.

As I prepared to update the cluster, I didn’t have to take a maintenance window or notify my users; these updates are rolling. Even if a node in the cluster needs a reboot, HyperCore moves the running VMs elsewhere in the cluster first. Zero-downtime updates.
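The rolling pattern itself is simple to reason about: drain one node at a time, update it, then move on. Here is a minimal sketch in Python using in-memory stand-ins for nodes and VMs; it illustrates the general approach, not HyperCore's actual implementation:

```python
# Simplified sketch of a rolling cluster update. Nodes and VMs are
# plain dicts/strings here; this is an illustration of the concept,
# not Scale's HyperCore code.

def rolling_update(cluster):
    """Update every node while keeping every VM running somewhere."""
    for node in cluster:
        # Live-migrate this node's VMs to the least-loaded other node
        others = [n for n in cluster if n is not node]
        for vm in list(node["vms"]):
            target = min(others, key=lambda n: len(n["vms"]))
            node["vms"].remove(vm)
            target["vms"].append(vm)
        # Node is now empty: apply the update (and reboot if needed)
        node["version"] = "new"
    return cluster

cluster = [
    {"name": "node1", "version": "old", "vms": ["vm-a", "vm-b"]},
    {"name": "node2", "version": "old", "vms": ["vm-c"]},
    {"name": "node3", "version": "old", "vms": []},
]
rolling_update(cluster)
```

At no point is a VM stopped; each one simply runs on a different node while its original host is being updated.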

That’s making my life easy.

Who is Scale Computing? Storage Field Day 5 – SFD5

A couple of weeks ago, Scale Computing was honored to take part in Storage Field Day 5, hosting 12 delegates at our offices in San Mateo for a two-hour session covering both our company and our HC3 product. We always enjoy hearing the opinions of passionate technologists, and this event exceeded our expectations for engagement both in the room and online.

In the video below, our very own Jason Collier (Twitter: @bocanuts) walked through a brief overview of who we are as a company and why we focus on the SMB market with our HC3 product. If you have questions that weren’t covered in the video, feel free to reach out to us.

Follow up blog from Justin Warren: Scale Computing in the Goldilocks Zone

 

What is Hypervisor Convergence: The Infrastructure Convergence Continuum Blog Series – DIY Architecture (Part 1 of 4)

Converging the hardware and software components needed in an SMB virtualization deployment is a hot trend in the industry.  Terms like “converged infrastructure”, “hyper-convergence”, “hypervisor convergence” and “software-defined (fill in the blank)” have all emerged alongside the trend and just as quickly as they were defined, most have lost their meaning from both overuse and misuse.

In this series of blog posts, we will attempt to re-establish these definitions within the framework of the Convergence Continuum below:

Infrastructure Convergence Continuum

Before we address convergence, though, let’s set the stage by describing the traditional model of creating a virtualization environment with high availability.

Build Your Own / DIY

This is typically made up of VMware or Hyper-V, plus brand-name servers (Dell, HP, IBM, etc.) acting as hosts, and a SAN or NAS (EMC VNXe, Dell EqualLogic, HP LeftHand, NetApp, etc.) networked together to provide redundancy.  The DIY architecture is tried and true and, when architected correctly, offers all of the benefits of single-server virtualization – partitioning, isolation, encapsulation, and hardware independence – along with high availability of VMs.  An example architecture might look like:

DIY Virtualization Architecture

The downside to this approach is that it is complex to implement and manage.  Each layer in the stack adds a management requirement (virtualization management, SAN/NAS management, and network management) as well as an additional vendor in the support environment, which often leads to finger-pointing unless the hardware compatibility list of each company is strictly followed.  This complexity is a burden for those who implement a DIY environment, as it often requires specialized training in one or more of the layers involved.  The IT generalist in the mid-market targeted by Scale Computing often relies on a Value Added Reseller to implement and help manage such a solution, which adds to the overall cost of implementation and maintenance.

Monolithic Storage – Single Point of Failure

The architecture above relies on multiple servers and hypervisors sharing a common storage system, which makes that system a critical single point of failure for the entire infrastructure. This is commonly referred to in the industry as a 3-2-1 architecture, with the 1 representing the single shared storage system that all servers and VMs depend on (also called the inverted pyramid of doom).  While “scale-out” storage systems have been available to distribute storage processing and redundancy across multiple independent nodes, the hardware cost and additional networking they require originally restricted these solutions to very select applications.
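The single-point-of-failure argument can be put in rough numbers. The availability figures below are illustrative assumptions, not measurements from any real deployment:

```python
# With redundant servers but one shared storage array, the stack can be
# no more available than the array, because every VM depends on it.
# The 0.99 / 0.999 figures are illustrative assumptions.

server = 0.99    # assumed availability of one server
storage = 0.999  # assumed availability of the shared array

# Two redundant servers: the compute tier is up if at least one is up
servers_redundant = 1 - (1 - server) ** 2   # ≈ 0.9999

# The shared array is a serial dependency: its availability multiplies in
infrastructure = servers_redundant * storage  # ≈ 0.9989

print(round(servers_redundant, 6))
print(round(infrastructure, 6))
```

Adding more servers pushes the compute tier’s availability up, but the whole stack remains capped just below the array’s own figure; that cap is exactly what distributing storage across the nodes removes.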

Down the Path of Convergence

Now that we have the basics of the DIY architecture down, we can continue along the path of convergence to Reference Architectures and Converged Solutions, which we will define in our next post.  Stay tuned for more!

 

Scale’s HC3 through the lens of a VMware Administrator with David Davis

Recently, I sat down with @davidmdavis of www.virtualizationsoftware.com to discuss Scale’s HC3 and the general trend of Hypervisor Convergence.  David kept the perspective of a VMware administrator coming to HC3 for the first time, which allowed me to highlight the simplicity of HC3 compared to a traditional VMware virtualization deployment.  Hope you enjoy!
