Tag Archives: VMware

Hypervisor Converged – Definitions Diverged

I read a recent blog post from VMware’s Chuck Hollis that attempted to define and compare the terms Converged Infrastructure, Hyper-Converged Infrastructure and Hypervisor-Converged Infrastructure.

As is usually the case, I thought Chuck did a great job cutting through vendor marketing to the key points and differences, beginning with his quote that “Simple is the new killer app in enterprise IT infrastructure.” We would echo that this is even more true in the mid-sized and smaller IT shops that we focus on.

But eventually vendor biases do surface, especially in an organization like VMware, with its many kinds of partners and a parent company with a hardware-centric legacy. That’s not a bad thing, and I’m sure I will show my vendor bias as well. At Scale Computing we focus exclusively on small to mid-size IT shops and are therefore biased to view the world of technology through their eyes.

As background, we would consider our HC3 products to be “hypervisor-converged infrastructure,” as our storage functionality is built into the hypervisor, with both running in a single OS kernel. I would consider hypervisor-converged to be a specific case, or subset, of hyper-converged, which simply means you have removed an external storage device and moved the physical disks into the compute hosts that also run the hypervisor code. (How the hypervisor gets to those disks is the difference in manageability, performance and complexity – see Craig Theriac’s blog series, The Infrastructure Convergence Continuum.)

What is Hypervisor Convergence: The Infrastructure Convergence Continuum Blog Series – DIY Architecture (Part 1 of 4)

Converging the hardware and software components needed in an SMB virtualization deployment is a hot trend in the industry. Terms like “converged infrastructure”, “hyper-convergence”, “hypervisor convergence” and “software-defined (fill in the blank)” have all emerged alongside the trend, and just as quickly as they were defined, most have lost their meaning through overuse and misuse.

In this series of blog posts, we will attempt to re-establish these definitions within the framework of the Convergence Continuum below:

Infrastructure Convergence Continuum

Before we address convergence, though, let’s set the stage by describing the traditional model of creating a virtualization environment with high availability.

Build Your Own / DIY

This is typically made up of VMware or Hyper-V plus brand-name servers (Dell, HP, IBM, etc.) acting as hosts, and a SAN or NAS (EMC VNXe, Dell EqualLogic, HP LeftHand, NetApp, etc.) networked together to provide redundancy. The DIY architecture is tried and true and, when architected correctly, offers all of the benefits of single-server virtualization (partitioning, isolation, encapsulation and hardware independence) along with high availability of VMs. An example architecture might look like:

DIY Virtualization Architecture

The downside to this approach is that it is complex to implement and manage. Each layer in the stack brings an added management requirement (virtualization management, SAN/NAS management and network management) as well as an additional vendor in the support environment, which often leads to finger-pointing unless the hardware compatibility list of each company is strictly followed. This complexity is a burden for those who implement a DIY environment, as it often requires specialized training in one or more of the layers involved. The IT generalist in the mid-market targeted by Scale Computing often relies on a Value Added Reseller to implement and help manage such a solution, which adds to the overall cost of implementation and maintenance.

Monolithic Storage – Single Point of Failure

The architecture above relies on multiple servers and hypervisors sharing a common storage system, which makes that system a critical single point of failure for the entire infrastructure. This is commonly referred to in the industry as the 3-2-1 architecture, with the 1 representing the single shared storage system that all servers and VMs depend on (also called the inverted pyramid of doom). While “scale-out” storage systems have been available to distribute storage processing and redundancy across multiple independent “nodes”, the hardware cost and additional networking required for scale-out storage architectures originally restricted these solutions to a select few applications.

Down the Path of Convergence

Now that we have the basics of the DIY architecture down, we can continue down the path of convergence to Reference Architectures and Converged Solutions, which we will define in our next post. Stay tuned for more!

 

HC3 for existing VMware users

This past week I was at the MidMarket CIO Forum in beautiful Ponte Vedra Beach, FL. It is a fun event with intimate boardroom-style sessions that give vendors a chance to sit down with CIOs to discuss industry trends, their current problems and our potential solutions. In each of our sessions there was at least one CIO saying something along the lines of, “I see the value of HC3, but I have already invested in VMware. How can you work in my environment?” This usually comes in a lamenting tone after we have described the added cost they likely paid for the licensing, implementation, hardware (SAN/NAS) and training associated with a traditional do-it-yourself virtualization deployment. (Side story: one potential customer told us his woes of sending a sysadmin off to a week-long VMware training, only to have him return and leave for another job weeks later. Ouch!)

Since it came up so often, I thought a quick blog post was warranted in case there are others out there asking the same question.

VMware customers coming to HC3 for their Primary Infrastructure

Many customers come to us having used VMware in the past. Most have implemented VMware in the traditional “inverted pyramid of doom” style (to steal a Spiceworks-ism), with a handful of host servers connected down to shared storage through redundant switches. Often they come to us when it is time to refresh a SAN or NAS, or when looking to add a new host to their environment (which can push their VMware licensing cost up significantly as they jump from Essentials Plus or another three-host package to a full enterprise license). When we talk to potential customers in this situation, it is not uncommon to hear things like “For the price of replacing my SAN, I could have an entire HC3 cluster?” or “For the price of just the licensing, I can put in a new HC3 system?” There are several examples of this in our customer success stories, which I recommend reading through if you are interested.

VMware customers purchasing HC3 as a Disaster Recovery Site

Customers who have already made a heavy investment in VMware for their primary site but still want to take advantage of the simplicity and affordability of HC3 have an option. Instead of purchasing and implementing the same VMware environment they have in place at their primary site, these users can implement an HC3 system alongside HC3 Availability to replicate data from their primary site to the HC3 system. In the event of a failure at the primary site, HC3 Availability will detect the failure and can automatically (or manually, if you’d rather) bring up those VMs on HC3. Here is a video of Dave Demlow walking through the HC3 Availability product, which demonstrates the failover process from VMware to HC3:

We have admittedly seen this approach act as a “Trojan horse,” where users begin with HC3 as a DR target but fall in love with the simplicity of adding new highly available VMs. When the next server/SAN refresh cycle arrives, those customers often replace their primary site with HC3 as well.

If you have any questions on making the jump from VMware to HC3, please feel free to reach out to us for more information.

Scale’s HC3 through the lens of a VMware Administrator with David Davis

Recently, I sat down with @davidmdavis of www.virtualizationsoftware.com to discuss Scale’s HC3 and the general trend of Hypervisor Convergence.  David kept the perspective of a VMware administrator coming to HC3 for the first time, which allowed me to highlight the simplicity of HC3 compared to a traditional VMware virtualization deployment.  Hope you enjoy!

A Move from VMware to HC3

Many of Scale’s HC3 customers come to us from a traditional do-it-yourself virtualization environment, where they pieced together components, including VMware’s hypervisor, to create a complex solution providing the high availability expected of their infrastructure. Fed up with the complexity (or, more often, the vTax on a licensing renewal) associated with that setup, they eventually find HC3 as a solution that provides the simplicity, scalability and high availability they need at an affordable price.

I just returned from the Midmarket CIO Forum last week, where 98% of the CIOs I spoke to had implemented some form of the VMware environment described above (the other 2% were on Hyper-V, but the licensing-tax story still rang true!). We met with seven boardrooms full of CIOs who all reacted the same way to the demo of HC3: “This sounds too good to be true!” To which I like to reply, “Yeah, we get that a lot.” 🙂

After the initial shock of seeing HC3 for the first time, pragmatism inevitably takes over. The questions then become, “How do I migrate from VMware to HC3?” or “How can I use HC3 alongside my existing VMware environment?” I spent the majority of my week talking through the transition strategies we have seen from some of our 600+ HC3 customers when migrating their VMware VMs to HC3 (the V2V process).

Five Business Reasons Why Developers and Software Ecosystems Benefit from KVM

By: Peter Fuller, Vice President of Business Development and Alliances, Scale Computing

As the VP of Business Development and Alliances for Open Virtualization Alliance member Scale Computing, I work with a diverse group of top players in the software ecosystem. While many have KVM-compatible products as full virtual appliances, others are building business cases to justify the minor engineering expense required to develop KVM-compatible versions of their VMware, Citrix or Hyper-V solutions.

The KVM question has repeatedly emerged as a discussion point with my business development peers this year. It is not a hard case to make, since KVM is: 1) widely adopted, 2) supported and crowd-sourced, 3) independent, 4) a quickly profitable engineering exercise and 5) freely available.

Let’s take a quick look at the benefits:

(1) KVM is Adopted & Mature

KVM (Kernel-based Virtual Machine) is a free, open source virtualization component built into the Linux kernel for x86 hardware with Intel VT or AMD-V extensions. With KVM, multiple unmodified Linux or Windows images can run as virtual machines on a single host.
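As a quick illustration of how little is required on the host side, a host’s readiness for KVM can be sanity-checked from userspace: hardware support shows up as the vmx (Intel VT) or svm (AMD-V) flag in /proc/cpuinfo, and the /dev/kvm device appears once the kvm kernel module is loaded. A minimal Python sketch of that check (the function name is our own):

```python
import os

def kvm_ready() -> bool:
    """Report whether this Linux host appears ready to run KVM guests."""
    # Hardware support: /proc/cpuinfo lists 'vmx' for Intel VT
    # and 'svm' for AMD-V among the CPU flags.
    with open("/proc/cpuinfo") as f:
        cpu_flags = f.read()
    has_hw_virt = "vmx" in cpu_flags or "svm" in cpu_flags

    # Kernel support: /dev/kvm exists once the kvm module is loaded.
    has_kvm_device = os.path.exists("/dev/kvm")

    return has_hw_virt and has_kvm_device

if __name__ == "__main__":
    print("KVM ready" if kvm_ready() else "KVM not available on this host")
```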

KVM is growing at 60% year over year in terms of new server shipments virtualized, with over 100,000 shipments and nonpaid deployments worldwide over the past 12 quarters.[1] The worldwide virtual-machine software market was on track to grow to over $3.6 billion in 2012, up from $3.0 billion the year before, a 19.3% year-over-year growth.[2]

KVM is also the standard for OpenStack. In fact, 71% of OpenStack deployments use KVM.

The technology is also very mature. According to CloudPro, KVM held the top seven SPECvirt benchmarks, outperforming VMware across 2-, 4- and 8-socket servers. As CloudPro mentions, it is very rare that an open source solution meets so many commercial specifications.[3]

(2) KVM is Supported & Crowd Sourced

Both IBM and Red Hat have announced significant investments in KVM. Unlike with VMware, the results of those investments won’t be locked behind intellectual property laws: both companies are contributing much of their KVM development back to the open source community.

This investment was important for Scale, not because we use Red Hat’s branch of KVM, but because it will undoubtedly attract publishers to the technology, and it has legitimized KVM as an enterprise-class hypervisor.

The growing ecosystem of KVM supporters is proof. The OVA has over 300 software and hardware vendor members, and continues to add to its ranks daily. This collective pool of companies contributes code back to the community, giving each company indirect access to the others’ open development initiatives. Hundreds of thousands of non-member Linux developers also add to the crowd-sourced technologies that companies like Scale can use. Additionally, the Linux Foundation recently announced that the OVA would become an official collaborative project.

Ecosystem developers benefit from this crowd-sourced adoption of KVM in ways they can’t with commercial solutions like VMware, whose development happens behind closed doors and on a single vendor’s schedule.

(3) KVM is Independent & Adaptive

The independence of KVM contributes to the richness of its code. Hundreds of thousands of Linux developers around the world build technologies for Linux and KVM without the restrictions associated with corporate IP protection.

While the longevity of any single company is always uncertain, corporations are far more changeable than unowned open source code. KVM isn’t going away; there’s little risk in supporting it.

The biggest challenge to the viability of some hypervisor providers is the open source headwind wreaking havoc on their financial models. Specialized vendors like VMware don’t have the product diversity outside of their hypervisor that cushions companies like Microsoft and Citrix. As the hypervisor becomes a commodity, revenues shift to the management tools, licensed annually. This stress has already pushed VMware to compete with its partners: just this year, the company released its Virtual SAN (VSAN) product in direct competition with Nutanix and SimpliVity.

(4) KVM is Easily Convertible & Supporting it is Profitable

I like to use a basic supply-and-demand argument to support KVM development: while there’s an infinite supply of a vendor’s code, there will always be a finite supply of a customer’s cash.

To conserve that finite cash pool, roughly 70 percent of corporations use KVM as a secondary hypervisor to avoid licensing costs for non-production virtual machines. This install base represents a huge market that is quickly moving KVM into the primary position in order to reduce recurring licensing costs.

Converting is Easy

In most cases, converting from a mainstream hypervisor to KVM is relatively simple. In fact, one of our alliance partners added KVM support to its robust backup software in just a week. The conversion from VMDK to QCOW2 (KVM) is fairly straightforward.
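For context, the qemu-img utility that ships with QEMU (the userspace tooling KVM environments typically use) performs this conversion in a single command: qemu-img convert -f vmdk -O qcow2 source.vmdk dest.qcow2. A minimal sketch wrapping it in Python, with hypothetical file names:

```python
import subprocess

def vmdk_to_qcow2(src: str, dst: str) -> None:
    """Convert a VMware VMDK disk image to QCOW2 using qemu-img."""
    subprocess.run(
        ["qemu-img", "convert",
         "-f", "vmdk",   # input format: VMware disk image
         "-O", "qcow2",  # output format: QCOW2, the native KVM/QEMU format
         src, dst],
        check=True,  # raise an error if qemu-img fails
    )

# Hypothetical example: convert a guest disk before attaching it to a KVM VM.
vmdk_to_qcow2("guest-disk.vmdk", "guest-disk.qcow2")
```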

(5) The Hypervisor is a Commodity, Why Pay for It?

Hypervisors are a commodity. With Intel VT and AMD-V, KVM calls directly into the virtualization support those manufacturers provide at the chip level. There’s no need to pay license charges for solutions that use software to perform the virtualization tasks Intel and AMD provide in hardware. A light kernel-based piece of code calling directly into the processor greatly increases the speed and efficiency of the virtualization experience. Additionally, both Intel and AMD are committed to open technologies, and the leverage publishers will get from these two companies is significant.

Conclusion

For ecosystem developers, the value extracted from the community translates into engineering efficiencies, faster feature development and flexibility, potentially millions of dollars in savings on engineering costs, and the ability to maintain price elasticity in a highly competitive ecosystem.

KVM has a large install base, major investors, commercial momentum and crowd-sourced development momentum. Spending a few weeks to add KVM support to existing applications will open new markets for developers while opening the door to newfound capital efficiencies and faster development times.

______________

[1] IDC Worldwide Quarterly Server Virtualization Tracker, March 2013

[2] Worldwide Virtual Machine Software 2012–2016 Forecast, IDC #235379, June 2012

[3] http://www.cloudpro.co.uk/iaas/virtualization/5278/kvm-should-it-be-ignored-hypervisor-alternative/page/0/1

HC3 vs. VMware vs. Hyper-V for SMBs: Part 1

There are plenty of articles, reviews, blogs and lab reports available that provide various comparisons of different software, hardware and architectural options for leveraging the benefits of server and storage virtualization.

I’m going to try to tackle the subject through the eyes of a “typical” IT director or manager at a small to mid-size business (SMB) … the kind of user we see a lot of here at Scale Computing, particularly since the launch of HC3, our completely integrated virtualization system that brings high availability virtualization and storage technologies together in a single, easy-to-manage system.

Don’t Believe Them – Scale Computing’s HC3 is Not a Cheaper, Less Capable Solution

I have heard something out in the market a few times lately, something that really bothers me. What I’ve heard is a new way for our competitors to try to marginalize us with our customers. It goes something like this: “Scale is a great solution if you don’t have much budget for virtualization. But if you do have the budget, you should go for the ‘premium solution’ from the name-brand vendors.” That is, traditional servers + SAN + storage switching + virtualization software suite. We usually hear HP, Dell, IBM or even Cisco servers, with EMC, NetApp or other storage, along with VMware.

VMware is Dead

We recently presented at an analyst-centric conference in which the lead-in to our presentation was “VMware is dead. Storage is dead.”  We certainly drew some inquisitive looks from the audience. But as we explained HC3 and the underlying technology, the puzzled looks turned into nods of agreement.

Some of the latest buzz has centered on the “software-defined datacenter,” an extension of software-defined networking that has made its way into software-defined storage and software-defined servers, with all three culminating in the software-defined datacenter. In the end, it’s all about the promise of making infrastructure easy to deploy and manage.