Tag Archives: KVM

Scale’s HC3 through the lens of a VMware Administrator with David Davis

Recently, I sat down with @davidmdavis of www.virtualizationsoftware.com to discuss Scale’s HC3 and the general trend of Hypervisor Convergence. David approached HC3 from the perspective of a VMware administrator seeing it for the first time, which allowed me to highlight the simplicity of HC3 compared to a traditional VMware virtualization deployment. Hope you enjoy!

Five Business Reasons Why Developers and Software Ecosystems Benefit from KVM

By: Peter Fuller, Vice President of Business Development and Alliances, Scale Computing

As the VP of Business Development and Alliances for Open Virtualization Alliance member Scale Computing, I work with a diverse group of top players in the software ecosystem. While many offer KVM-compatible products as full virtual appliances, others are building business cases to justify the minor engineering expense required to develop KVM-compatible versions of their VMware, Citrix or Hyper-V solutions.

This KVM question has come up again and again with my business development peers this year. The case is not hard to make, since KVM is: 1) adopted, 2) supported and crowd-sourced, 3) independent, 4) a quickly profitable engineering exercise to support and 5) freely available.

Let’s take a quick look at the benefits:

(1) KVM is Adopted & Mature

KVM (Kernel-based Virtual Machine) is a free, open source virtualization component built into the Linux kernel for x86 hardware with the Intel VT or AMD-V extensions. With KVM, multiple unmodified Linux or Windows images can run as virtual machines on a single physical host.
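
For readers who want to see what that looks like in practice, here is a minimal sketch, assuming a Linux host with QEMU/KVM installed and an existing guest disk image; the image name, vCPU count and memory size are placeholders, not anything specific to Scale.

    import subprocess

    # Boot an existing, unmodified guest disk image with KVM acceleration.
    # "guest.qcow2", 2 vCPUs and 2048 MiB of RAM are placeholder values.
    subprocess.run(
        ["qemu-system-x86_64",
         "-enable-kvm",                                # use the in-kernel KVM module
         "-smp", "2",                                  # virtual CPU count
         "-m", "2048",                                 # memory in MiB
         "-drive", "file=guest.qcow2,format=qcow2"],   # unmodified guest disk image
        check=True,
    )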

KVM is growing at 60% year over year in new server shipments virtualized, with more than 100,000 shipments and non-paid deployments worldwide over the past 12 quarters.[1] The worldwide virtual-machine software market was on track to grow to over $3.6 billion in 2012, up from $3.0 billion the year before, 19.3% year-over-year growth.[2]

KVM is also the standard for OpenStack. In fact, 71% of OpenStack deployments use KVM.

The technology is also very mature. According to CloudPro, KVM held the top seven SPECvirt benchmark results, outperforming VMware across 2-, 4- and 8-socket servers. As CloudPro notes, it is very rare for an open source solution to meet so many commercial specifications.[3]

(2) KVM is Supported & Crowd Sourced

Both IBM and Red Hat have announced significant investments in KVM. Unlike VMware’s development, the results of those investments won’t be locked away behind intellectual property protections; both companies are contributing much of their KVM development back to the open source community.

This investment was important for Scale, not because we use Red Hat branches of KVM, but because it attracts publishers to the technology and legitimizes it as an enterprise-class hypervisor.

The growing ecosystem of KVM supporters is proof. The OVA has more than 300 software and hardware vendor members, and it continues to add to its ranks daily. This collective pool of companies contributes code back to the community, giving each member indirect access to the others’ open development initiatives. Hundreds of thousands of non-member Linux developers also add to the crowd-sourced technologies that companies like Scale can use. Additionally, the Linux Foundation recently announced that the OVA would become an official collaborative project.

Ecosystem developers benefit from this crowd-sourced adoption of KVM in ways they can’t with commercial solutions like VMware, whose code is developed and controlled behind corporate intellectual property protections.

(3) KVM is Independent & Adaptive

The independence of KVM contributes to the fecundity of its code. Hundreds of thousands of Linux developers around the world build technologies for Linux and KVM, free of the restrictions that come with corporate IP protection.

While the permanence of any single company is always uncertain, corporations come and go far more readily than community-owned open source code. KVM will be around forever; there’s little risk in supporting it.

The biggest challenge to the viability of some hypervisor providers is the open source headwind wreaking havoc on their financial models. Specialized vendors like VMware don’t have the product diversity outside their hypervisor that cushions companies like Microsoft and Citrix. As the hypervisor becomes a commodity, revenue shifts to management tools licensed annually. This stress has already pushed VMware to compete with its own partners: just this year, the company released its VSAN product in direct competition with Nutanix and SimpliVity.

(4) KVM is Easily Convertible & Supporting it is Profitable

I like to use a basic supply-and-demand argument to support KVM development: while there’s an infinite supply of a vendor’s code, there will always be a finite supply of a customer’s cash.

To conserve that finite cash pool, roughly 70 percent of corporations use KVM as a secondary hypervisor to avoid licensing costs for non-production virtual machines. This install base represents a huge market that is quickly moving KVM into the primary position to reduce recurring licensing costs.

Converting is Easy

In most cases, converting from a mainstream hypervisor to KVM is relatively simple. In fact, one of our alliance partners added KVM support to its robust backup software in just a week. The conversion of virtual disks from VMware’s VMDK format to KVM’s QCOW2 format is fairly straightforward.
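
As a concrete illustration, the standard qemu-img utility performs that disk conversion in a single command. The sketch below drives it from Python; the file names are placeholders and it assumes qemu-img is installed, so treat it as an example rather than a prescribed procedure.

    import subprocess

    # Convert a VMware VMDK disk image to the QCOW2 format used by KVM/QEMU.
    # "guest.vmdk" and "guest.qcow2" are placeholder file names.
    subprocess.run(
        ["qemu-img", "convert",
         "-p",                 # show conversion progress
         "-f", "vmdk",         # input format
         "-O", "qcow2",        # output format
         "guest.vmdk", "guest.qcow2"],
        check=True,
    )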

(5) The Hypervisor is a Commodity: Why Pay for It?

Hypervisors are a commodity. With the Intel VT and AMD-V extensions, KVM calls directly into the virtualization support those manufacturers provide at the chip level. There’s no need to pay license charges for solutions that use software to perform the virtualization tasks Intel and AMD already handle in hardware. A light, kernel-based piece of code calling directly into the processor greatly increases the speed and efficiency of the virtualization experience. Additionally, both Intel and AMD are committed to open technologies, and the leverage publishers gain from these two companies is significant.
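
To make the hardware dependency concrete, the following sketch checks whether a Linux host exposes what KVM relies on: the vmx (Intel VT-x) or svm (AMD-V) CPU flag and the /dev/kvm device created by the kernel module. It is a generic illustration, not a Scale utility.

    import os

    def kvm_hardware_ready() -> bool:
        """Return True if this Linux host exposes hardware virtualization to KVM."""
        with open("/proc/cpuinfo") as f:
            cpu_flags = f.read()
        has_vt = "vmx" in cpu_flags or "svm" in cpu_flags  # Intel VT-x or AMD-V
        has_kvm_device = os.path.exists("/dev/kvm")        # kvm kernel module loaded
        return has_vt and has_kvm_device

    print("KVM hardware support:", kvm_hardware_ready())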

Conclusion

For ecosystem developers, the value extracted from the community translates into engineering efficiencies, faster feature development and flexibility, potentially millions of dollars in savings on engineering costs, and the ability to maintain price elasticity in a highly competitive ecosystem.

KVM has a large install base, major investors, commercial momentum and crowd-sourced development momentum. Spending a few weeks to add KVM support to existing applications will open new markets for developers while opening the door to newfound capital efficiencies and faster development times.

______________

[1] IDC Worldwide Quarterly Server Virtualization Tracker, March 2013

[2] Worldwide Virtual Machine Software 2012-2016 Forecast, IDC #235379, June 2012

[3] http://www.cloudpro.co.uk/iaas/virtualization/5278/kvm-should-it-be-ignored-hypervisor-alternative/page/0/1

HC3 vs. VMware vs. Hyper-V for SMBs: Part 1

There are plenty of articles, reviews, blogs and lab reports available that provide various comparisons of different software, hardware and architectural options for leveraging the benefits of server and storage virtualization.

I’m going to try to tackle the subject through the eyes of a “typical” IT director or manager at a small to mid-size business (SMB) … the kind of user we see a lot of here at Scale Computing, particularly since the launch of HC3, our completely integrated virtualization system that brings high availability virtualization and storage technologies together into a single, easy-to-manage system. Continue reading

HC3 Under the Hood: Virtual Hard Disks and CD/DVD Images

In the last post of this series, we walked through the process of creating a VM on the HC3 platform. In just a few clicks, you create a virtual machine container and allocate CPU, memory, and storage resources to that VM. When you start the VM and install your operating system and applications, that storage is presented as virtual hard disks, appearing to your applications as virtual C:\ or D:\ drives.

HC3 Virtual Disks in Windows Device Manager

Continue reading

HC3 Under the Hood: Creating a VM

In the last post of this series, we talked about how multiple independent HC3 nodes are joined together into a cluster that is managed like a single system with a single pool of storage and compute resources, as well as built-in redundancy for high availability.

For a quick review, the end-user view of this process is as simple as racking the nodes, connecting them to power and your network, giving them IP addresses, and assigning them to join a cluster.

You might expect any number of steps to follow: configuring individual disks into RAID arrays and spares; provisioning storage targets, sharing protocols and security; physically and logically connecting each shared storage resource to each compute resource over multiple redundant paths; and finally configuring a hypervisor to carve that raw storage into partitions and file systems for individual data objects such as virtual disks. Those would be the next steps with virtually ANY other system available.

Well, you don’t have to do any of that with HC3 because the storage layer is fully integrated with the compute hardware and virtualization software layers – all managed by the system.  Ah, management. So maybe now it’s time to install and configure a separate VM management server and management client software on your workstation to oversee all the virtualization hosts and software? Again, not with HC3 since the management layer is built-in and accessible simply by pointing your web browser to the cluster and logging in.

With HC3, you go right from configuring each node as a member of the HC3 system to pointing a web browser to HC3. And in a few clicks, you have created your first HC3 virtual machine.

This is definitely something that is best to see with your own eyes (or better yet, ask us for a demo and we will let YOU drive!). The HC3 system in this video already has a number of VMs running, but the process you will see here is exactly the same for the very first VM you create.

Creating a virtual machine is a simple process accomplished through the Scale Computing HC3 Manager web browser user interface. Selecting the ‘Create’ option from the ‘Virtualization’ tab allows the user to specify required and optional parameters for the virtual machine, including the following (a generic KVM equivalent is sketched after this list):

• Number of virtual CPU cores

• RAM

• Number and size of virtual disks to create

• A virtual DVD/CD ISO image to attach or upload for installing an operating system
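
HC3 gathers these parameters through its web interface, so no command line is involved; purely for comparison, here is a hedged sketch of how the same four parameters map onto generic KVM tooling (virt-install). The VM name, sizes and ISO path are placeholder values and have nothing to do with HC3 internals.

    import subprocess

    # The same VM parameters expressed with the generic libvirt/KVM tool virt-install.
    # All names and sizes below are placeholders for illustration only.
    subprocess.run(
        ["virt-install",
         "--name", "demo-vm",
         "--vcpus", "2",               # number of virtual CPU cores
         "--memory", "4096",           # RAM in MiB
         "--disk", "size=40",          # one 40 GB virtual disk
         "--cdrom", "install.iso",     # DVD/CD ISO image for OS installation
         "--os-variant", "generic"],   # generic guest OS hint
        check=True,
    )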

Creating the virtual machine does two things. It persists the VM configuration parameters that will later tell the hypervisor how to build the virtual machine container when it is started, and it physically creates files in the distributed storage pool that will contain the virtual hard disks presented to the VM. For maximum flexibility, we create those files inside a default storage pool container called VM, which presents a file/folder structure for organizing virtual machine files. HC3 virtual machines access their virtual hard disk files directly, as if they were local disks, without any SAN or NAS protocols, and they can reach those virtual disks from any node of the HC3 cluster, which is the key to capabilities like VM Live Migration and VM failover from node to node.

In the next post, we will dig into how HC3 virtual disk files are actually stored, as well as how you can optionally use HC3’s external storage protocol capabilities to access and browse HC3 storage pools for VM and ISO media images from remote machines.

 

Is VMware Headed for the La Brea Tar Pits?

I recently received an inbound call from a value-added reseller looking for virtualization solutions for his SMB customers. The conversation began as these calls normally do: he had heard something about Scale Computing and our technology, but didn’t really understand what we were doing. He said, and I quote:

“It looks like you’ve virtualized all the core functions in the rack: servers, storage, networking. But that’s not really possible. So what is it you do?”

With a smile on my face that he couldn’t see, I simply replied “yes.” Continue reading

Convergence Inflexibility or Virtualization Inception

In one of the more recent blog posts from my colleague, titled KVM or VMware: Why KVM is Right for the Times (Part 1 of 2), these technical reasons for choosing KVM were given:

  1. Native Support for any Guest OS
  2. Efficient Code and Better Performance
  3. Open Source and Flexible

The first two reasons don’t need much explanation, but most people only read “Open Source” in the third reason and move on. I want to dig a bit deeper into the flexibility aspect, specifically why other methods of convergence are inflexible and waste your resources. Sherman, set the Way Back Machine to 1998. Continue reading

Under the Hood – HC3 Architectural Design Goals

The first two posts of this series discussed the high availability and ease of use requirements that went into the design of HC3.  With those overarching user needs as a backdrop, we will now transition into a more technical look under the hood at the hardware and software aspects of the HC3 system.

HC3 and the ICOS (Intelligent Clustered Operating System) that it runs on were designed to put intelligence and automation into the software layer, allowing the system to provide advanced functionality, flexibility and scalability using low-cost hardware components, including the virtualization capabilities built into modern CPU architectures. Rather than “scaling up” with larger, more expensive hardware that also requires equally expensive idle “standby capacity” to operate in the event of a failure, HC3 was designed to aggregate compute and storage resources from multiple systems into a single logical system with redundancy and availability designed in. Continue reading

KVM or VMware: Why KVM is Right for the Times (Part 2 of 2)

The Holy Grail of IT, especially SMB IT, is a datacenter that’s simple and easy to use. Preferably one as easy as an iPhone or Android phone, with enterprise software as easy to access and run as Angry Birds is from an app store.

Thanks to KVM and Scale Computing, that vision is possible. And, well on its way.

This blog continues where Part 1 left off and describes how KVM is implemented within Scale’s HC3.

Strategic for the Customer and Scale Computing

Both commercial developers and consumers need to worry about the EMC trap. The company owns VMware and is well known in the industry for its aggressive business moves. VMware storage partners who are developing converged solutions based on VMware are tying their companies’ futures to their competition. It’s quite probable that EMC could create a hyperconverged version of VMware that runs only on EMC storage gear. All other vendors could be locked out, severely limiting choices for vendors, resellers and especially users. Continue reading

KVM or VMware: Why KVM is Right for the Times (Part 1 of 2)

 

The Holy Grail of IT, especially SMB IT, is a datacenter that’s simple and easy to use. Preferably one as easy as an iPhone or an Android, with enterprise software just as easy to get and run as Angry Birds is from an app store.

Thanks to KVM and Scale Computing, that vision is possible. And, well on its way.

In August 2012, Scale Computing launched the first and only hyperconverged infrastructure based on KVM (Kernel-based Virtual Machine) into the marketplace. Called HC3, the multi-award-winning solution integrates servers, storage and networking into a clustered appliance with a single operating system called ICOS® (Intelligent Clustered Operating System). Continue reading