Tag Archives: HyperCore

HyperCore v6 – A Closer Look at HC3’s New User Interface

They said it couldn’t be done! Scale has taken the easiest HyperConverged user interface and somehow made it simpler in HyperCore v6. HC3 offers a “set it and forget it” approach to IT infrastructure. If we intend for our customers to forget about our product, the user interface has to be extremely intuitive when an event requires an administrator to log in to the system (a new VM/workload request, verifying an already “self-healed” hardware failure, etc.).

HyperCore v6 User Interface – Key Features

  • Streamlined workflows for administrators – A 60% reduction in clicks during the VM creation workflow, and quicker access to VM consoles directly from the Heads-Up Display (HUD).
  • New Intuitive Design – With the intelligence of HyperCore handling the heavy lifting of VM failover and data redundancy, administrators often employ a “set it and forget it” mentality where it is only required that they log in periodically to make changes to the system. This requires an intuitive interface with almost no learning curve.
  • Improved Responsiveness – The new HyperCore User Interface is extremely responsive with state changes and VM updates immediately accessible in the UI.
  • Tagging / Grouping – Users can now combine VMs into logical groups via tagging. Set multiple tags for easy filtering.
  • Filtering – Spotlight-style search functionality that filters VMs by matching names, descriptions, or tags for quick and easy access to VMs in larger environments.
  • Cluster Log – A single source for all of the historical activity on the cluster. Filter alerts by type or search for specific keywords using the spotlight search to track historical data on the cluster.
  • UI Notification System – Pop-up notifications for in-process user actions, alerts, and processes present users with relevant information about active events on the system.
  • Unified Snapshot/Cloning/Replication Functionality – Snapshot, cloning and replication functionality are now integrated into the card view of each VM for easy administration.


User Interface Demonstrations

Anyone can say that they have a simple user interface, but it doesn’t count unless you can see that simplicity in action. Check out the demonstrations below:

Creating a VM on HC3 – HyperCore v6

Cloning a VM on HC3 – HyperCore v6

Snapshot a VM on HC3 – HyperCore v6

HyperCore v6 – A Closer Look at Built-in Remote Disaster Recovery

As you saw in last week’s press release, Scale Computing’s HC3 now includes VM-level replication as a key new feature in HyperCore v6. Administrators can now set up replication on a per-VM basis for integrated remote disaster recovery, which builds on the unique snapshot and cloning functionality already built into HyperCore v5. Since the introduction of HyperCore v5, users have been able to manually take near-instant, VM-level snapshots that are easily cloned in an extremely space-efficient manner (“thin clones”).

Now in version 6, HyperCore allows users to set up continuous replication to a secondary HC3 cluster, which will automatically take a snapshot on the selected VMs, moving only the unique blocks to the remote site.

Then, to restore on the secondary cluster, simply clone the VM from the latest (or a previous) automated or manual snapshot. Being able to spin up these VMs quickly, on their own private network, makes disaster recovery testing a breeze. Of course, if this isn’t a test and your VM at the secondary site is now production, HC3 continues to track the unique blocks that are created and ONLY sends those blocks back to the primary site when it’s time to fail back.

Replication Highlights:

  • Continuous VM-level Replication – HyperCore makes use of its space-efficient snapshot technology to replicate to a secondary site, tracking only the blocks unique to each snapshot and sending only those changed blocks.
  • Low RPO/RTO – Simply “clone” a snapshot on the target cluster for the manual failover of a VM that is immediately bootable.
  • Simple Disaster Recovery Testing – Testing a DR infrastructure plan is now as simple as cloning a snapshot on the target cluster and starting a VM. No disruption to ongoing replication.
  • Easy Failback after Disaster Recovery – After running a VM at the DR site, simply replicate the changed data back to the primary site for simple failback.
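
The delta-only transfer described above can be illustrated with a small sketch. To be clear, the names and structures here are purely hypothetical, assumed for illustration, and not Scale Computing's actual implementation: the idea is simply that each snapshot records which blocks changed since the previous one, and only those blocks cross the wire.

```python
# Conceptual sketch of snapshot-based delta replication.
# All names here are illustrative -- not HyperCore internals.

def delta(current, previous):
    """Blocks that changed since the previous snapshot (all, if none)."""
    return {addr: data for addr, data in current.items()
            if previous.get(addr) != data}

primary = {0: b"os", 1: b"app", 2: b"data"}   # block address -> data
remote = {}

snap1 = dict(primary)                 # snapshot 1 (metadata copy)
remote.update(delta(primary, {}))     # initial replication sends everything

primary[2] = b"data-v2"               # the VM writes one block
snap2 = dict(primary)                 # snapshot 2
changed = delta(snap2, snap1)         # only the unique blocks...
remote.update(changed)                # ...cross the wire this time
```

After the second snapshot, `changed` holds a single block, yet the remote copy is fully up to date; the same bookkeeping in reverse is what makes failback send only the data written at the DR site.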

Bring on the Demo!

There is nothing quite like a demonstration of this new technology. In this video you’ll see a number of things:

  1. Remote Connection Setup (0:08) – You’ll see me create a connection from my primary cluster (left) to a secondary cluster (right). Once the clusters are securely connected, I can then enable replication on any VMs between those two clusters.
  2. Replication Setup (0:40) and Initial Replication (1:05) – After cloning a VM, you’ll see me set up replication on that VM to the secondary cluster. The initial replication is time-lapsed, but you’ll see the progress in the snapshot view on the primary cluster (left) and, after it completes, the clone-able snapshot on the secondary cluster.
  3. Failover Test 1 (1:38) Automated Snapshot – I clone the VM from the snapshot, which is immediately bootable. That’s about as easy as it gets for DR testing!
  4. Failover Test 2 (1:58) Manual Snapshot – After making some changes to the VM (“replication” file on the desktop), I create a manual snapshot. Notice that the blocks unique to that snapshot are tracked separately from the initial replication snapshot (3:32). When I clone from the manual snapshot, you’ll see the “replication” text file appear on the desktop. DR plan tested again!
  5. Failback (4:30) – After making changes to the cloned VM on the secondary site (“Replication – Rollback”), I simply set up replication on the cloned VM back to the primary cluster. Since the majority of the data already exists at the primary site, it takes almost no time for my minor changes to replicate back. Once there, I simply clone the snapshot and I’m back in action on the primary cluster. (Note: Here (5:23) I also disconnect the NIC to spin this VM up without conflicting with my actual production VM…a nice trick for that DR testing!).

HyperCore v5 – A Closer Look at Snapshots and Cloning

Now that we have moved HyperCore v5 to General Availability, let’s dive into some of the new features that are now available.

  • VM-level Snapshots – Near-instant VM-level snapshots with no disruption at the time of the snapshot, no duplication of data, and no performance degradation even with thousands of snapshots per VM (> 5,000 supported per VM!). A snapshot can be simply “cloned” to start a VM while still maintaining the integrity of other snapshots both upstream and downstream from the cloned snapshot.
  • VM “Thin” Cloning – Enables the user to take a space-efficient approach to cloning a VM. Thin clones are immediately bootable and, like snapshots, avoid duplicating data by using the Allocate-on-Write technology.

I write about these features together because they rely on the same underlying awesomeness built into the SCRIBE storage layer (Scale Computing Reliable Independent Block Engine).  SCRIBE employs an Allocate-on-Write technique for snapshotting data.

Allocate-on-Write explained: At the time of a snapshot, SCRIBE updates the metadata to mark the blocks as being referenced by the snapshot (no change to the underlying data, so the snapshot is near instant). Then, as changes to the snapshotted data are written, SCRIBE allocates new space within the cluster, writes the data there, and updates the metadata to reference the newly written blocks. This eliminates the overhead penalty of the three-step process (read the original data, rewrite it elsewhere, then write the new data) normally associated with a Copy-on-Write approach to snapshots.
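
The contrast between the two write paths can be sketched in a few lines. This is a hypothetical toy model, assuming a simple block map per VM; it is not SCRIBE's internal code, just an illustration of why Allocate-on-Write avoids the extra read and rewrite.

```python
# Illustrative sketch of the two snapshot write paths.
# All structures are hypothetical -- not SCRIBE internals.

storage = {0: b"original"}    # physical block address -> data
next_free = [100]             # trivial free-space allocator

def alloc():
    next_free[0] += 1
    return next_free[0]

def copy_on_write(live_map, snap_map, lba, new_data):
    """CoW: read the original, rewrite it elsewhere to preserve the
    snapshot, then write the new data in place -- three I/O steps."""
    old_addr = live_map[lba]
    old_data = storage[old_addr]      # 1. read the original
    preserved = alloc()
    storage[preserved] = old_data     # 2. rewrite it elsewhere
    snap_map[lba] = preserved
    storage[old_addr] = new_data      # 3. write the new data in place

def allocate_on_write(live_map, lba, new_data):
    """AoW: the snapshot keeps referencing the untouched original block;
    the new data goes straight to freshly allocated space -- one write."""
    new_addr = alloc()
    storage[new_addr] = new_data      # single write of the new data
    live_map[lba] = new_addr          # metadata-only update

live = {0: 0}
snap = dict(live)                     # snapshot = metadata copy, near instant
allocate_on_write(live, 0, b"changed")
```

Note that taking the snapshot itself is just a metadata copy, which is why it is near instant, and the subsequent write touches the disk exactly once.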

To restore a snapshot, you simply clone it. This creates an immediately bootable, “thin” clone of the VM, meaning that you can instantly spin it up while consuming no more space than the original VM. The unique blocks are tracked for each clone, which makes this extremely space efficient. Whether it is a single VM or 100+ clones of that VM, the storage is thinly provisioned and only consumes the space of the original VM until you begin making changes to the individual clones.
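
A tiny sketch makes the space accounting concrete. Again, these structures are assumptions for illustration only, not the product's implementation: a thin clone is just a metadata copy of the snapshot's block map, so every clone initially points at the same physical blocks.

```python
# Hypothetical sketch of "thin" cloning: each clone starts as a
# metadata copy of the snapshot's block map, sharing every block.

def thin_clone(snapshot_map):
    return dict(snapshot_map)         # metadata only -- no data copied

def physical_blocks(*block_maps):
    """Distinct physical blocks consumed across all VMs and clones."""
    return {addr for m in block_maps for addr in m.values()}

base = {0: 10, 1: 11, 2: 12}          # original VM: 3 physical blocks
clones = [thin_clone(base) for _ in range(100)]

# 101 VMs, still only the original 3 blocks of storage consumed.
shared = physical_blocks(base, *clones)

# A write to one clone allocates a single new block for it alone.
clones[0][2] = 99
after_write = physical_blocks(base, *clones)
```

One hundred clones here consume no storage beyond the original three blocks until the first write lands, at which point only that one new block is added.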

Cloning from a Template

We find that many users make template VMs. They take a Windows VM, customize it for their business with their specific tool set, patch it, and then run Sysprep on it (the Windows tool for preparing a master Windows VM, so that only user-specific setup remains going forward). When it comes time to spin up a new VM, they can then clone that template VM and have an immediately bootable clone ready to go. And going back to the “thin” clone concept…the common OS blocks are only stored once among all of the cloned VMs.

Cloning for Test/Dev

Another great use of the cloning technology is testing changes to your production VMs in an isolated environment. Users take advantage of this for testing patches or other changes by cloning their running VMs (yes, you can clone a running VM) and booting them in a “lab” network, either by assigning a lab-specific VLAN tag or by disconnecting the NIC before booting up the VM. Then they can apply and test changes to their VMs knowing that there will be no impact to their production environment.

In the videos below, we demonstrate these features in more detail.  Please let us know if there are any questions we can answer about this new functionality.  Look for more updates related to the other new features coming soon to a blog post near you.

How to take a snapshot on HC3

How to clone a VM on HC3


HyperCore v5 – A Closer Look at One Click Rolling Upgrades

As noted in a previous post, HyperCore v5 is now Generally Available and shipping on all HC3 node types. In this “A Closer Look…” series of blog posts we’ll be going through the majority of these new features in more detail. Today’s topic…One Click Rolling Upgrades:

  • Non-Disruptive / Rolling Updates ‒ HC3 clusters can be upgraded with no downtime or maintenance window. Workloads are automatically live-migrated to other nodes in the HC3 cluster to allow for node upgrades, even if an upgrade requires a node reboot. Workloads are then returned to the node after the upgrade is complete.


Included in HyperCore v5 is our one click rolling upgrade feature…and our customers love it! Customers covered under ScaleCare – our 24×7 support offering that covers both hardware and software – are alerted via the user interface when HyperCore updates become generally available. There is nothing more to license when new updates become available, which means that as new features are developed for HyperCore, our current users can take full advantage of them.

When a user kicks off an upgrade, this sets into motion a series of events that updates the nodes in the cluster while keeping the VMs up and running throughout the process. The upgrade starts with a single node by live migrating the VMs from that node to the other nodes in the cluster. Keep in mind that the best practice is to keep enough resources available to tolerate a node failure in your environment. This same concept holds true for rolling upgrades: users are alerted if they do not meet this condition (and are prevented from upgrading until they do).

Insufficient Free Memory

After the VMs are live migrated off of the first node, the full OS stack is updated, rebooting the node if required. Once that node is brought back online and has rejoined the cluster, the VMs are then returned to their original position and the upgrade process moves on to node 2 repeating the process. This continues through each node in the cluster until the system in its entirety is running the latest code. No VM downtime, no maintenance window required.
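
The sequence described above can be outlined in a short sketch. Every class and method name here is an assumption made for illustration, not the actual HyperCore code; it simply models evacuate, upgrade, and return for each node in turn while the VMs stay running.

```python
# Conceptual outline of the rolling-upgrade sequence.
# All classes and names are illustrative -- not HyperCore's code.

class Node:
    def __init__(self, name):
        self.name = name
        self.vms = []
        self.version = 1

    def upgrade(self):
        self.version = 2              # stands in for the full OS-stack
                                      # update and reboot, if required

class Cluster:
    def __init__(self, nodes):
        self.nodes = nodes

    def live_migrate(self, vm, src, dst):
        src.vms.remove(vm)
        dst.vms.append(vm)            # the VM keeps running throughout

    def spare_node(self, exclude):
        return next(n for n in self.nodes if n is not exclude)

    def rolling_upgrade(self):
        for node in self.nodes:
            parked = list(node.vms)
            for vm in parked:         # evacuate the node
                self.live_migrate(vm, node, self.spare_node(node))
            node.upgrade()            # update the node, rejoin cluster
            for vm in parked:         # return the VMs home
                self.live_migrate(vm, self.spare_node(node), node)

cluster = Cluster([Node("node1"), Node("node2"), Node("node3")])
cluster.nodes[0].vms = ["vm-a"]
cluster.nodes[1].vms = ["vm-b"]
cluster.rolling_upgrade()
```

When the loop finishes, every node runs the new version and each VM is back where it started, mirroring the no-downtime, no-maintenance-window behavior described above (the real system also enforces the free-resource check first).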