Tag Archives: UnderTheHood

HyperCore v5 – A Closer Look at Snapshots and Cloning

Now that we have moved HyperCore v5 to General Availability, let’s dive into some of the new features that are now available.

  • VM-level Snapshots – Near-instant VM-level snapshots with no disruption at the time of the snapshot, no duplication of data, and no performance degradation even with thousands of snapshots per VM (> 5,000 supported per VM!). A snapshot can be simply “cloned” to start a VM while still maintaining the integrity of other snapshots both upstream and downstream from the cloned snapshot.
  • VM “Thin” Cloning – Enables the user to take a space-efficient approach to cloning a VM. Thin clones are immediately bootable and, like snapshots, avoid duplicating data thanks to the same Allocate-on-Write technology.

I write about these features together because they rely on the same underlying awesomeness built into the SCRIBE storage layer (Scale Computing Reliable Independent Block Engine).  SCRIBE employs an Allocate-on-Write technique for snapshotting data.

Allocate-on-Write explained: At the time of a snapshot, SCRIBE updates the metadata to mark the blocks as being referenced by the snapshot (no change to the underlying data, so it is near instant).  Then, as changes to the snapshotted data are written, SCRIBE allocates new space within the cluster, writes the new data there, and updates the metadata so the live VM points at the new blocks.  This eliminates the overhead of the three-step process (read the original block, copy it aside, then write the new data) normally associated with a Copy-on-Write approach to snapshots.
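To make the difference concrete, here is a minimal Python sketch of the two approaches at the block-map level. It is purely illustrative (the class names and block layout are invented for this example) and is not SCRIBE’s actual implementation.

```python
class AllocateOnWriteVolume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)       # logical block -> current data
        self.snapshots = []              # each snapshot is a frozen block map

    def snapshot(self):
        # Metadata-only: record which blocks the snapshot references.
        # Nothing is read or copied, which is why it is near instant.
        self.snapshots.append(dict(self.blocks))

    def write(self, block, data):
        # Allocate-on-Write: new data lands in newly allocated space and the
        # live block map is updated; the snapshot keeps the old block.
        self.blocks[block] = data        # a single write, no read of old data


class CopyOnWriteVolume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)
        self.preserved = {}              # blocks copied aside for the snapshot

    def snapshot(self):
        self.preserved = {}

    def write(self, block, data):
        # Copy-on-Write penalty: three steps for every first overwrite.
        original = self.blocks[block]      # 1. read the original block
        self.preserved[block] = original   # 2. copy it aside for the snapshot
        self.blocks[block] = data          # 3. write the new data in place


vol = AllocateOnWriteVolume({0: "os", 1: "app", 2: "data"})
vol.snapshot()
vol.write(2, "data-v2")
print(vol.blocks[2], vol.snapshots[0][2])   # data-v2 data
```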

To restore a snapshot, you simply clone it.  This creates an immediately bootable, “thin” clone of the VM, meaning you can spin it up instantly while consuming no additional space beyond the original VM.  Only the blocks unique to each clone consume new space, so this is extremely space efficient.  Whether it is a single VM or 100+ clones of that VM, the storage is thinly provisioned and only consumes the space of the original VM until you begin making changes to the individual VMs.
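A rough way to picture the space accounting: each clone keeps only the blocks it has written itself and reads everything else from the source VM’s blocks. The sketch below is an illustrative model of that idea, not Scale Computing’s code.

```python
class ThinClone:
    def __init__(self, parent_blocks):
        self.parent_blocks = parent_blocks  # shared, read-only view of the source VM
        self.own_blocks = {}                # blocks this clone has written itself

    def read(self, block):
        return self.own_blocks.get(block, self.parent_blocks[block])

    def write(self, block, data):
        self.own_blocks[block] = data       # new space is consumed only here

    @property
    def space_used(self):
        return len(self.own_blocks)         # blocks unique to this clone


source = {0: "os", 1: "app", 2: "data"}                # the original VM's blocks
clones = [ThinClone(source) for _ in range(100)]
print(sum(c.space_used for c in clones))               # 0: 100 clones, no extra space
clones[0].write(2, "changed")
print(clones[0].space_used)                            # 1: only the changed block
```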

Cloning from a Template

We find that many users make template VMs.  They take a Windows VM, customize it for their business with their specific tool set, patch it, and then sysprep it (sysprep is the Windows tool for generalizing a master Windows VM so that only user-specific settings need to be configured going forward).  When it comes time to spin up a new VM, they can then clone that template VM and have an immediately bootable clone ready to go.  And going back to the “thin” clone concept…the common OS blocks are only stored once among all of the cloned VMs.

Cloning for Test/Dev

Another great use of the cloning technology is testing changes to your production VMs in an isolated environment.  Users take advantage of this for testing patches or making changes by cloning their running VMs (yes, you can clone a running VM) and booting them in a “lab” network, either by assigning a lab-specific VLAN tag or by disconnecting the NIC before booting up the VM.  They can then apply and test changes to those VMs knowing there will be no impact to their production environment.
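As a rough illustration of that workflow, the sketch below shows the two isolation options, a lab VLAN tag or a disconnected NIC, applied to a clone before it boots. The class and function names are hypothetical and are not the HC3 API.

```python
class LabNic:
    def __init__(self):
        self.vlan_tag = None      # untagged by default
        self.connected = True

def isolate_clone(nics, lab_vlan=None):
    """Isolate a cloned VM's NICs before first boot so it cannot reach production."""
    for nic in nics:
        if lab_vlan is not None:
            nic.vlan_tag = lab_vlan   # put the NIC on a lab-only VLAN
        else:
            nic.connected = False     # or simply disconnect the NIC entirely
    return nics

lab_nics = isolate_clone([LabNic()], lab_vlan=100)
print(lab_nics[0].vlan_tag, lab_nics[0].connected)   # 100 True
```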

In the videos below, we demonstrate these features in more detail.  Please let us know if there are any questions we can answer about this new functionality.  Look for more updates related to the other new features coming soon to a blog post near you.

How to take a snapshot on HC3

How to clone a VM on HC3


HyperCore v5 – A Closer Look at One Click Rolling Upgrades

As noted in a previous post, HyperCore v5 is now Generally Available and shipping on all HC3 node types. In this “A Closer Look…” series of blog posts we’ll be going through the majority of these new features in more detail. Today’s topic…One Click Rolling Upgrades:

  • Non-Disruptive / Rolling Updates – HC3 clusters can be upgraded with no downtime or maintenance window. Workloads are automatically live-migrated to other nodes in the HC3 cluster to allow for node upgrades, even if such an upgrade requires a node reboot. Workloads are then returned to the node after the upgrade is complete.


Included in HyperCore v5 is our one click rolling upgrade feature…and our customers love it! Customers covered under ScaleCare – our 24×7 support offering that covers both hardware and software – are alerted in the user interface when HyperCore updates become generally available. There is nothing more to license when new updates become available, which means that as new features are developed for HyperCore, our current users can take full advantage of them.

When a user kicks off an upgrade, this sets into motion a series of events that updates the nodes in the cluster while keeping the VMs up and running throughout the process. The upgrade starts with a single node by live migrating the VMs from that node to the other nodes in the cluster. Keep in mind that the best practice is to keep enough resources available to tolerate a node failure in your environment. The same concept holds true for rolling upgrades, and users are alerted if they do not meet this condition (and prevented from upgrading until they do).

Insufficient Free Memory

After the VMs are live migrated off of the first node, the full OS stack is updated, rebooting the node if required. Once that node is brought back online and has rejoined the cluster, the VMs are then returned to their original position and the upgrade process moves on to node 2 repeating the process. This continues through each node in the cluster until the system in its entirety is running the latest code. No VM downtime, no maintenance window required.
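The overall flow can be pictured roughly like the sketch below: a capacity pre-check, then evacuate, upgrade, and repopulate one node at a time. It is a conceptual outline only, with invented class and function names, not HyperCore’s actual upgrade code.

```python
class Node:
    def __init__(self, name, ram_gb, vms=None):
        self.name, self.ram_gb, self.vms = name, ram_gb, list(vms or [])
        self.version = "v4"

    def free_ram(self):
        return self.ram_gb - sum(ram for _, ram in self.vms)


def live_migrate(vm, source, target):
    # In reality the VM keeps running throughout the move.
    source.vms.remove(vm)
    target.vms.append(vm)


def rolling_upgrade(nodes, new_version):
    # Pre-check: every node's VMs must fit on the remaining nodes, i.e. the
    # cluster can tolerate taking one node out of service.
    for node in nodes:
        needed = sum(ram for _, ram in node.vms)
        spare = sum(n.free_ram() for n in nodes if n is not node)
        if needed > spare:
            raise RuntimeError("Insufficient free resources: upgrade blocked")

    for node in nodes:                                   # one node at a time
        evacuated = []
        for vm in list(node.vms):                        # evacuate the node
            target = max((n for n in nodes if n is not node), key=Node.free_ram)
            live_migrate(vm, node, target)
            evacuated.append((vm, target))
        node.version = new_version                       # full OS stack update, reboot if needed
        for vm, host in evacuated:                       # node rejoins; VMs return home
            live_migrate(vm, host, node)


cluster = [Node("node1", 64, [("vm-a", 16)]),
           Node("node2", 64, [("vm-b", 8)]),
           Node("node3", 64, [("vm-c", 8)])]
rolling_upgrade(cluster, "v5")
print([(n.name, n.version, n.vms) for n in cluster])
```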


Scale Computing ICOS 4.2: Support for VLAN tagging

I am excited to announce that Scale has officially moved ICOS 4.2 out of beta and into limited availability (meaning that our support team can upgrade customers for use in a production environment)!  The theme for this release was more advanced networking functionality and included features such as:

  • Support for VLAN tagging
  • Support for adding multiple network interface cards (NICs) to VMs
  • Connect or disconnect network interface cards (NICs) on VMs

In the video below, I walk through the simple setup of a VM-to-VM private network, which highlights these features.
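Conceptually, a VM-to-VM private network comes down to giving each VM an extra NIC carrying the same VLAN tag, so the two VMs can reach each other while staying separated from untagged traffic. The sketch below is a simplified model of that idea, with invented VM and NIC names, not the ICOS configuration interface.

```python
vms = {
    "app-server": [{"nic": "eth0", "vlan": None},   # untagged production network
                   {"nic": "eth1", "vlan": 200}],   # private VM-to-VM network
    "db-server":  [{"nic": "eth0", "vlan": None},
                   {"nic": "eth1", "vlan": 200}],
}

def same_network(nic_a, nic_b):
    # Two NICs share a network only if their VLAN tags match.
    return nic_a["vlan"] == nic_b["vlan"]

print(same_network(vms["app-server"][1], vms["db-server"][1]))   # True: private network
print(same_network(vms["app-server"][1], vms["db-server"][0]))   # False: isolated from untagged traffic
```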

For more information on this release, please see the release notes which can be found on the partner portal and customer portal.  If you have any questions or would like to see a demo of this new functionality, please give us a call!

HC3 Under The Hood: Integrated Scale-Out Storage Pool – Data Mirroring and Striping

In this post, we will dive into the HC3 distributed storage layer and detail how data is stored across all the physical disks in the cluster, providing data redundancy should a disk drive fail and aggregating the I/O performance of all the drives in the system.

HC3 treats all storage in the cluster as a single logical pool for management and scalability purposes; the real magic comes in how data blocks are stored redundantly across the cluster to maximize availability as well as performance.
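As a simplified illustration of what “redundantly across the cluster” means, the sketch below places two copies of each block on different disks, and, as an assumption of this example, on different nodes. The placement logic is invented for illustration and is not SCRIBE’s actual algorithm.

```python
# 3 nodes with 2 disks each; a real cluster pools every disk in every node.
disks = [f"node{n}-disk{d}" for n in range(1, 4) for d in range(1, 3)]

def place_block(block_id, copies=2):
    # Spread primary copies across every disk in the cluster (striping), and
    # keep the mirror copy on a disk in a different node (mirroring), so a
    # single drive failure never loses data.
    primary = disks[block_id % len(disks)]
    primary_node = primary.split("-")[0]
    other_disks = [d for d in disks if not d.startswith(primary_node)]
    mirror = other_disks[block_id % len(other_disks)]
    return [primary, mirror][:copies]

for block_id in range(4):
    print(block_id, place_block(block_id))
```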

HC3 Under The Hood: Virtual Machine Placement and Failover

A few posts ago we walked through the process of creating a new VM on HC3. One thing you may have noticed is that nowhere in the process did we specify a physical location for that VM to be created, and even when we powered it on, once again we did not specify a physical location for that VM to run.  We just created it and turned it on.  Beyond the simplicity that presents, it highlights some very important concepts that make HC3 uniquely scalable and flexible.

In the last post, we discussed how HC3 VM virtual hard disks are actually stored as files in qcow2 format, and even how we can access those files for advanced operations. But once again, we didn’t discuss (or need to discuss) where the data for those files physically resided.

We will dig into this much further but the real magic of HC3 is that all nodes share access to a single pool of storage – all the disks in all nodes of the cluster are pooled together as a common resource for storing and retrieving data. That happens automatically when the HC3 cluster is “initialized” and requires no additional management by the user, even as additional nodes are added to the resource pool down the line.

Because all HC3-capable nodes read and write data using the entire pool of storage and all HC3 nodes have access to all the data, any virtual machine can be started on any node of the HC3 cluster based on the availability of the compute resources that VM requires. For example, if a new VM requires 16GB of RAM, there may only be certain nodes with that much currently available, and HC3 makes this node selection automatically. HC3 allows running VMs to be “live migrated” to other HC3 nodes without the VM being shut down and with no noticeable impact to the workload being run or clients connecting to it. In addition, should an HC3 node fail, any VMs that were running on that node will be quickly restarted on the remaining nodes, since every HC3 node has access to the same pool of redundant storage.  Once again, since the storage is available to all nodes in the HC3 system, the primary factor for determining where to fail over VMs is the availability of the compute resources required by each workload, and HC3 determines the optimal location automatically.
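Because storage placement never constrains where a VM can run, placement and failover reduce to a compute-resource decision. The sketch below is a simplified, hypothetical version of that decision, not the actual HC3 scheduler.

```python
nodes = {"node1": {"free_ram_gb": 8},
         "node2": {"free_ram_gb": 24},
         "node3": {"free_ram_gb": 12}}

def pick_node(required_ram_gb):
    # Any node can host the VM as far as storage is concerned, so the only
    # question is which nodes have enough free compute resources.
    candidates = {n: v for n, v in nodes.items() if v["free_ram_gb"] >= required_ram_gb}
    if not candidates:
        return None                     # no node can satisfy the request right now
    # Prefer the node with the most headroom; storage location never matters.
    return max(candidates, key=lambda n: candidates[n]["free_ram_gb"])

print(pick_node(16))   # node2: the only node with at least 16 GB free
```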

For those who are more visual (like me) the following diagram may help to picture this more clearly.

ICOS stack diagram


In the next post of this series, we will dive into the HC3 distributed storage layer and detail how data is stored across all the physical disks in the cluster providing data redundancy should a disk drive fail and aggregating the I/O performance of all the drives in the system.

HC3 Under the Hood: Virtual Hard Disks and CD/DVD Images

In the last post of this series, we walked through the process of creating a VM on the HC3 platform.  In just a few clicks, you create a virtual machine container and allocate CPU, memory, and storage resources to that VM. When you start the VM and install your operating system and applications, that storage is presented as virtual hard disks and will appear like a virtual c:\ or d:\ drive to your applications.

HC3 Virtual Disks in Windows Device Manager


HC3 Under the Hood: Creating a VM

In the last post of this series, we talked about how multiple independent HC3 nodes are joined together into a cluster that is managed like a single system with a single pool of storage and compute resources, as well as built-in redundancy for high availability.

For a quick review, the end-user view of this process is as simple as racking the nodes, connecting them to power and your network, giving them IP addresses, and assigning them to join a cluster.

You might expect that any number of steps would be required next: configuring individual disks into RAID arrays and spares; provisioning storage targets, sharing protocols, and security; physically and logically connecting each shared storage resource to each compute resource over multiple redundant paths; and, finally, configuring a hypervisor to use that raw storage to create the partitions and file systems that store individual data objects, such as virtual disks. Those would be the next steps with virtually ANY other system available.

Well, you don’t have to do any of that with HC3 because the storage layer is fully integrated with the compute hardware and virtualization software layers – all managed by the system.  Ah, management. So maybe now it’s time to install and configure a separate VM management server and management client software on your workstation to oversee all the virtualization hosts and software? Again, not with HC3 since the management layer is built-in and accessible simply by pointing your web browser to the cluster and logging in.

With HC3, you go right from configuring each node as a member of the HC3 system to pointing a web browser to HC3. And in a few clicks, you have created your first HC3 virtual machine.

This is definitely something that is best to see with your own eyes (or better yet, ask us for a demo and we will let YOU drive!). The HC3 system in this video already has a number of VMs running, but the process you will see here is exactly the same for the very first VM you create.

Creating a virtual machine is a simple process that is accomplished through the Scale Computing HC3 Manager web browser user interface. Selecting the ‘Create’ option from the ‘Virtualization’ tab allows the user to specify required and optional parameters for the virtual machine including:

  • Number of virtual CPU cores
  • RAM
  • Number and size of virtual disks to create
  • A virtual DVD/CD ISO image to attach or upload for installing an operating system

Creating the virtual machine not only persists those VM configuration parameters that will later tell the hypervisor how to create the virtual machine container when it is started, but it also physically creates files using the distributed storage pool that will contain the virtual hard disks to present to the VM once it is started. For maximum flexibility, we create those files inside a default storage pool container called VM to present a file/folder structure for organizing virtual machine files.  HC3 Virtual Machines are able to access their virtual hard disk files directly as if they are local disks, without the use of any SAN or NAS protocols, and can access those virtual disks from any node of the HC3 cluster – which is the key to capabilities like VM Live Migration and VM failover from node to node.
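For illustration, the sketch below captures those same parameters in code, including the virtual disk files created in the storage pool. The function, field names, and file paths are hypothetical; in practice this is all done through the HC3 Manager web interface.

```python
def create_vm(name, vcpu_cores, ram_gb, disks_gb, iso_image=None):
    """Capture the VM definition and lay out its virtual disk files."""
    return {
        "name": name,
        "vcpu_cores": vcpu_cores,
        "ram_gb": ram_gb,
        # One virtual hard disk file per entry, created in the storage pool
        # (illustrative paths; the qcow2 files live in the default "VM" container).
        "virtual_disks": [{"size_gb": size, "path": f"VM/{name}/disk{i}.qcow2"}
                          for i, size in enumerate(disks_gb)],
        "cdrom_iso": iso_image,          # e.g. an uploaded OS install image
    }

print(create_vm("web01", vcpu_cores=2, ram_gb=8, disks_gb=[60], iso_image="win2012.iso"))
```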

In the next post, we will dig into how HC3 virtual disk files are actually stored, as well as how you can optionally use the external storage protocol capabilities of HC3 to access and browse HC3 storage pools for VM and ISO media images from remote machines.


Under The Hood: HC³ In Action – Cluster Formation

Previous posts in this series have discussed the ease of use and high availability design goals of the HC³ platform, as well as the hardware and high-level software architecture. Now, let’s roll up our sleeves and walk through how ICOS (Intelligent Clustered Operating System) takes a set of independent compute nodes with independent storage devices and aggregates them into a single pool of compute and storage resources that are managed as a single, redundant, highly available system.

Once the Scale HC³ cluster nodes are racked, cabled, and configured with physical network connectivity, the cluster formation process takes multiple nodes (currently 3 or more) and logically bonds them together to act as a single coordinated system, in a process that completes in a matter of minutes.
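At a very high level, the result of cluster formation can be pictured as below: three or more nodes are bonded into one system whose compute and storage are presented as a single pool. This is a conceptual sketch with invented fields and values, not ICOS code.

```python
def form_cluster(nodes):
    if len(nodes) < 3:
        raise ValueError("cluster formation currently requires 3 or more nodes")
    return {
        "members": [n["ip"] for n in nodes],
        "total_ram_gb": sum(n["ram_gb"] for n in nodes),          # single compute pool
        "total_storage_tb": sum(n["storage_tb"] for n in nodes),  # single storage pool
    }

nodes = [{"ip": f"10.0.0.{i}", "ram_gb": 64, "storage_tb": 4} for i in (1, 2, 3)]
print(form_cluster(nodes))
```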

Under the Hood – HC3 Architectural Design Goals

The first two posts of this series discussed the high availability and ease of use requirements that went into the design of HC3.  With those overarching user needs as a backdrop, we will now transition into a more technical look under the hood at the hardware and software aspects of the HC3 system.

HC3 and the ICOS (Intelligent Clustered Operating System) that it runs on were designed to put intelligence and automation into the software layer, allowing the system to provide advanced functionality, flexibility, and scalability using low-cost hardware components, including the virtualization capabilities built into modern CPU architectures.  Rather than “scaling up” with larger, more expensive hardware that also requires equally expensive idle “standby capacity” to operate in the event of a failure, HC3 was designed to aggregate compute and storage resources from multiple systems into a single logical system with redundancy and availability designed in.

Under the Hood – HC³ Ease of Use Design Goals

In the first post of this “Under the Hood” series, I introduced the high-level design goals for HC³ and talked about the high availability benefits.

Our HC³ products were specifically designed to lower cost and complexity for IT administrators within small- to medium-sized organizations who need to run their applications in a highly available manner.

But high availability can be provided in many other ways if you are willing to spend the money, integrate the pieces together, and have the human resources and skills to set it up and manage it.  So what does HC³ do differently?
