All posts by Craig Theriac

HyperCore v6 – A Closer Look at HC3’s New User Interface

They said it couldn’t be done! Scale has taken the easiest HyperConverged user interface on the market and somehow made it even simpler in HyperCore v6. HC3 offers a “set it and forget it” approach to IT infrastructure. If we intend for our customers to forget about our product, the user interface has to be extremely intuitive on the occasions that do require an administrator to log in to the system (a new VM/workload request, verifying an already “self-healed” hardware failure, etc.).

HyperCore v6 User Interface – Key Features

  • Streamlined workflows for administrators – A 60% reduction in clicks during the VM creation workflow and quicker access to VM consoles directly from the Heads-up Display (HUD).
  • New Intuitive Design – With the intelligence of HyperCore handling the heavy lifting of VM failover and data redundancy, administrators often employ a “set it and forget it” mentality, logging in only periodically to make changes to the system. That calls for an intuitive interface with almost no learning curve.
  • Improved Responsiveness – The new HyperCore User Interface is extremely responsive, with state changes and VM updates immediately reflected in the UI.
  • Tagging / Grouping – Users can now combine VMs into logical groups via tagging. Set multiple tags per VM for easy filtering.
  • Filtering – Spotlight-style search that filters VMs by matching names, descriptions, or tags for quick and easy access to VMs in larger environments (a rough sketch of this kind of matching follows this list).
  • Cluster Log – A single source for all of the historical activity on the cluster. Filter alerts by type or search for specific keywords using the spotlight search to track historical data on the cluster.
  • UI Notification System – Pop-up notifications for in-progress user actions, alerts, and processes present users with relevant information about active events on the system.
  • Unified Snapshot/Cloning/Replication Functionality – Snapshot, cloning, and replication functionality is now integrated into the card view of each VM for easy administration.
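
To make the Tagging and Filtering features above concrete, here is a minimal sketch of spotlight-style matching over VM names, descriptions, and tags. It is illustrative Python only; the VM fields and the spotlight() helper are hypothetical, not HC3’s actual data model or API.

# Illustrative sketch only: the fields and helper below are hypothetical.
vms = [
    {"name": "web01", "description": "front end", "tags": ["production", "web"]},
    {"name": "sql01", "description": "database", "tags": ["production", "db"]},
    {"name": "web-test", "description": "staging copy", "tags": ["lab", "web"]},
]

def spotlight(vms, query):
    """Return VMs whose name, description, or any tag contains the query."""
    q = query.lower()
    return [vm for vm in vms
            if q in vm["name"].lower()
            or q in vm["description"].lower()
            or any(q in tag.lower() for tag in vm["tags"])]

print([vm["name"] for vm in spotlight(vms, "web")])  # ['web01', 'web-test']
print([vm["name"] for vm in spotlight(vms, "lab")])  # ['web-test']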


User Interface Demonstrations

Anyone can say that they have a simple user interface, but it doesn’t count unless you can see that simplicity in action. Check out the demonstrations below:

Creating a VM on HC3 – HyperCore v6

Cloning a VM on HC3 – HyperCore v6

Snapshot a VM on HC3 – HyperCore v6

HyperCore v6 – A Closer Look at Built-in Remote Disaster Recovery

As you saw in last week’s press release, Scale Computing’s HC3 now includes VM-level replication as a key new feature in HyperCore v6. Administrators can now set up replication on a per-VM basis for integrated remote Disaster Recovery, building on the unique snapshot and cloning functionality introduced in HyperCore v5. Since the introduction of HyperCore v5, users have been able to manually take near-instant, VM-level snapshots that are easily cloned in an extremely space-efficient manner (“thin clones”).

Now in version 6, HyperCore allows users to set up continuous replication to a secondary HC3 cluster, which automatically takes snapshots of the selected VMs and moves only the unique blocks to the remote site.

Then, to restore on the secondary cluster, simply clone the VM from the latest (or a previous) automated or manual snapshot. Being able to spin up these VMs quickly, on their own private network, makes disaster recovery testing a breeze. Of course, if this isn’t a test and your VM at the secondary site is now production, HC3 continues to track the unique blocks that are created and ONLY sends those blocks back to the primary site when it’s time to fail back.
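
The general idea behind this kind of snapshot-based replication can be sketched in a few lines. The Python below is a simplification for illustration only; the block maps and the blocks_to_replicate() function are hypothetical, not HC3’s actual implementation.

# Hypothetical sketch of snapshot-delta replication: only the blocks that
# changed since the previously replicated snapshot travel over the wire.
def blocks_to_replicate(prev_snapshot, curr_snapshot):
    """Return only the blocks that differ between two snapshots."""
    return {block_id: data
            for block_id, data in curr_snapshot.items()
            if prev_snapshot.get(block_id) != data}

# The initial replication ships every block; later cycles ship only deltas.
snap1 = {0: "OS image", 1: "app data v1"}
snap2 = {0: "OS image", 1: "app data v2", 2: "new log file"}

print(blocks_to_replicate({}, snap1))     # full initial send: blocks 0 and 1
print(blocks_to_replicate(snap1, snap2))  # delta send: blocks 1 and 2 only

The same comparison works in reverse for failback: only the blocks written at the secondary site since the last common snapshot need to travel back to the primary cluster.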

Replication Highlights:

  • Continuous VM-level Replication – HyperCore makes use of its space-efficient snapshot technology to replicate to a secondary site, tracking the blocks unique to each snapshot and sending only those changed blocks.
  • Low RPO/RTO – Simply “clone” a snapshot on the target cluster for the manual failover of a VM that is immediately bootable.
  • Simple Disaster Recovery Testing – Testing a DR infrastructure plan is now as simple as cloning a snapshot on the target cluster and starting a VM. No disruption to ongoing replication.
  • Easy Failback after Disaster Recovery – After running a VM at the DR site, simply replicate the changed data back to the primary site for simple failback.

Bring on the Demo!

There is nothing quite like a demonstration of this new technology. In this video you’ll see a number of things:

  1. Remote Connection Setup (0:08) – You’ll see me create a connection from my primary cluster (left) to a secondary cluster (right). Once the clusters are securely connected, I can then enable replication on any VMs between those two clusters.
  2. Replication Setup (0:40) and Initial Replication (1:05) – After cloning a VM, you’ll see me set up replication on that VM to the secondary cluster. The initial replication is time-lapsed, but you’ll see the progress in the snapshot view on the Primary cluster (left) and, after it completes, the cloneable snapshot on the secondary cluster.
  3. Failover Test 1 (1:38) – Automated Snapshot – I clone the VM from the snapshot, which is immediately bootable. That’s about as easy as it gets for DR testing!
  4. Failover Test 2 (1:58) – Manual Snapshot – After making some changes to the VM (a “replication” file on the desktop), I create a manual snapshot. Notice that the blocks unique to that snapshot are tracked separately from the initial replication snapshot (3:32). When I clone from the manual snapshot, you’ll see the “replication” text file appear on the desktop. DR plan tested again!
  5. Failback (4:30) – After making changes to the cloned VM on the secondary site (“Replication – Rollback”), I simply set up replication on the cloned VM back to the primary cluster. Since the majority of the data already exists at the primary site, it takes almost no time for my minor changes to replicate back. Once there, I simply clone the snapshot and I’m back in action on the primary cluster. (Note: here (5:23) I also disconnect the NIC to spin this VM up without conflicting with my actual production VM…a nice trick for DR testing!)

HC3 – HyperCore v6 Launch

We are thrilled to announce the launch of HyperCore v6! As the leader in hyperconverged solutions for small and mid-sized organizations, Scale has again expanded the HyperCore feature set to address the challenges faced by this targeted segment of the market.

So, what are the new features? Replication and a new, streamlined User Interface.

Replication – Built-in Remote Disaster Recovery

The feature that has customers most excited is our built-in replication for remote disaster recovery, which allows admins to protect their critical workloads by replicating them to a secondary HC3 cluster. This can be set up on a per-VM basis and offers continuous replication of snapshots for a quick restore (“clone”) at the secondary site in the event of a full site-level disaster.

What’s better than having a DR plan? Having a fully tested DR plan. The built-in replication allows users to quickly spin up a VM from a snapshot at the remote site on its own private network (or just disconnect the NICs altogether) in a matter of seconds.

In the event of a true failover where the VMs at the secondary site become production VMs, the intelligence in HyperCore tracks the unique blocks created at the secondary site making failback a breeze. Instead of sending over every block of data in the VM, simply replicate back just those unique blocks to restore the VM on your primary cluster.

  • Replication – Built-in Remote Disaster Recovery
  • Continuous VM-level Replication
  • Low RPO/RTO
  • Simple Disaster Recovery Testing
  • Easy Failback after Disaster Recovery

Streamlined User Interface

In addition to the replication functionality, we have also streamlined the most common workflows for admins through a new user interface (a 60% reduction in clicks during the VM creation process and immediate access to VM consoles directly from the Heads-up Display!). The simplicity baked into HyperCore v6 is unmatched and we invite you to join us on a weekly demo to see it in action.

  • New User Interface
  • New Intuitive Design
  • Improved Responsiveness
  • Tagging / Grouping
  • Filtering
  • UI Notification System
  • Unified Snapshot/Cloning/Replication Functionality

Stay tuned for additional blog posts detailing this new functionality.

HyperCore v5 – A Closer Look at Snapshots and Cloning

Now that we have moved HyperCore v5 to General Availability, let’s dive into some of the new features that are now available.

  • VM-level Snapshots – Near-instant VM-level snapshots with no disruption at the time of the snapshot, no duplication of data, and no performance degradation even with thousands of snapshots per VM (> 5,000 supported per VM!). A snapshot can be simply “cloned” to start a VM while still maintaining the integrity of other snapshots both upstream and downstream from the cloned snapshot.
  • VM “Thin” Cloning – Enables the user to take a space-efficient approach to cloning a VM. Thin clones are immediately bootable and, like snapshots, avoid duplicating data thanks to the same Allocate-on-Write technology.

I write about these features together because they rely on the same underlying awesomeness built into the SCRIBE storage layer (Scale Computing Reliable Independent Block Engine).  SCRIBE employs an Allocate-on-Write technique for snapshotting data.

Allocate-on-Write explained: At the time of a snapshot, SCRIBE updates the metadata to mark the blocks as being referenced by the snapshot (no change to the underlying data, so it is near instant). Then, as changes to the snapshotted data are written, SCRIBE allocates new space within the cluster, writes the data, and then updates the metadata for the original data. This eliminates the overhead penalty of the three-step process (read, rewrite, then write) normally associated with a Copy-on-Write approach to snapshots.
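
A rough sketch helps make the contrast concrete. The Python below only illustrates the two write paths described above; the block maps and helper functions are hypothetical and are not SCRIBE’s actual data structures.

# Hypothetical comparison of the two snapshot write paths.
def copy_on_write(blocks, snapshot_copies, block_id, new_data):
    """CoW: read the original, rewrite it elsewhere, then write the change."""
    original = blocks[block_id]            # step 1: read the original block
    snapshot_copies[block_id] = original   # step 2: rewrite it to preserve it
    blocks[block_id] = new_data            # step 3: write the new data in place

def allocate_on_write(blocks, live_map, block_id, new_data):
    """AoW: write the new data to fresh space; only metadata pointers change."""
    new_location = max(blocks, default=-1) + 1   # allocate fresh space
    blocks[new_location] = new_data              # single write of the new data
    live_map[block_id] = new_location            # the live VM now points here
    # the original block is untouched and still referenced by the snapshot

In the allocate-on-write path the original block is never read or rewritten, which is why taking the snapshot is near instant and why write performance does not degrade as snapshots accumulate.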

To restore a snapshot, you simply clone it.  This creates an immediately bootable, “thin” clone of the VM, meaning that you can instantly spin it up while consuming no more space than the original VM.  The unique blocks are tracked for each clone, so this is extremely space efficient.  Whether it is a single VM or 100+ clones of that VM, the storage is thinly provisioned and only consumes the space of the original VM until you begin making changes to the individual VMs.
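
As a rough illustration of why even 100+ clones stay space efficient, here is a minimal sketch of clones sharing a parent’s blocks. The class and fields are hypothetical, not HC3’s on-disk format.

# Hypothetical sketch: thin clones share the parent's blocks and consume new
# space only for the blocks they overwrite.
parent_blocks = {0: "bootloader", 1: "OS files", 2: "application"}

class ThinClone:
    def __init__(self, parent):
        self.parent = parent   # shared reference to the original VM's blocks
        self.own = {}          # blocks unique to this clone

    def read(self, block_id):
        return self.own.get(block_id, self.parent[block_id])

    def write(self, block_id, data):
        self.own[block_id] = data   # only now does the clone consume new space

clones = [ThinClone(parent_blocks) for _ in range(100)]
clones[0].write(2, "patched application")

# 100 clones, but only one block of new space beyond the original VM.
print(sum(len(c.own) for c in clones))   # -> 1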

Cloning from a Template

We find that many users make template VMs.  They take a Windows VM, customize it for their business with their specific tool set, etc., patch it, and then sysprep it (Sysprep is the Windows tool for preparing a master Windows VM so that only user-specific settings need to be configured going forward).  When it comes time to spin up a new VM, they can then clone that template VM and have an immediately bootable clone ready to go.  And going back to the “thin” clone concept…the common OS blocks are only stored once among all of the cloned VMs.

Cloning for Test/Dev

Another great use of the cloning technology is testing changes to your production VMs in an isolated environment.  Users take advantage of this for testing patches and making changes by cloning their running VMs (yes, you can clone a running VM) and booting them in a “lab” network, either by assigning a specific lab-related VLAN tag or by disconnecting the NIC before booting up the VM.  Then they can apply and test changes to their VMs knowing that there will be no impact to their production environment.

In the videos below, we demonstrate these features in more detail.  Please let us know if there are any questions we can answer about this new functionality.  Look for more updates related to the other new features coming soon to a blog post near you.

How to take a snapshot on HC3

How to clone a VM on HC3


HyperCore v5 – A Closer Look at One Click Rolling Upgrades

As noted in a previous post, HyperCore v5 is now Generally Available and shipping on all HC3 node types. In this “A Closer Look…” series of blog posts we’ll be going through the majority of these new features in more detail. Today’s topic…One Click Rolling Upgrades:

  • Non-Disruptive / Rolling Updates – HC3 clusters can be upgraded with no downtime or maintenance window. Workloads are automatically live-migrated across the HC3 appliance to allow for node upgrades, even if such an upgrade requires a node reboot. Workloads are then returned to the node after the upgrade is complete.


Included in HyperCore v5 is our one click rolling upgrade feature…and our customers love it! Customers covered under ScaleCare – our 24×7 support offering that covers both hardware and software – are alerted via the user interface when HyperCore updates become generally available. There is nothing more to license when new updates become available, which means that as new features are developed for HyperCore, our current users can take full advantage of them.

When a user kicks off an upgrade, this sets into motion a series of events that updates the nodes in the cluster while keeping the VMs up and running throughout the process. The upgrade starts with a single node by live migrating the VMs from that node to the other nodes in the cluster. Keep in mind that the best practice is to keep enough resources available to tolerate a node failure in your environment. The same concept holds true for rolling upgrades: users are alerted if they do not meet this condition (and prevented from upgrading until they do).

[Screenshot: “Insufficient Free Memory” warning]

After the VMs are live migrated off of the first node, the full OS stack is updated, rebooting the node if required. Once that node is brought back online and has rejoined the cluster, the VMs are returned to their original positions and the upgrade process moves on to the next node, repeating the process. This continues through each node in the cluster until the system in its entirety is running the latest code. No VM downtime, no maintenance window required.
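
The overall flow can be summarized in a short sketch. This is illustrative pseudo-logic only; the cluster and node objects and their methods are hypothetical stand-ins for what HyperCore does internally.

# Hypothetical sketch of a rolling, non-disruptive cluster upgrade.
def rolling_upgrade(cluster):
    # Mirror the capacity check described above: refuse to start unless the
    # cluster can absorb one node's workloads.
    if not cluster.can_tolerate_node_loss():
        raise RuntimeError("Insufficient free resources for a rolling upgrade")

    for node in cluster.nodes:
        moved_vms = list(node.running_vms())
        for vm in moved_vms:
            cluster.live_migrate(vm, target=cluster.pick_other_node(node))  # VMs stay online

        node.update_os_stack()              # full OS stack update, reboot if needed
        cluster.wait_until_rejoined(node)   # node comes back online and rejoins

        for vm in moved_vms:
            cluster.live_migrate(vm, target=node)   # VMs return to their original node
    # Every node now runs the latest code: no VM downtime, no maintenance window.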


HyperConvergence for the SMB

Scott D. Lowe authored a fantastic article on HyperConverged.org last week that focused on where HyperConvergence is NOT a fit.  It is not an angle you hear often from a proponent of HyperConvergence and I have to admit…I like it.

At Scale, we have a laser-like focus on serving the IT infrastructure needs of small-to-medium sized businesses.  Similar to Scott Lowe’s approach in his article, it is as important to define our target customer as it is to define who is NOT our target customer.  When it comes down to it, a large company with IT employees who specialize in every component of the infrastructure (think SAN or network admin, etc.) may never fully appreciate the simplicity of HC3 or may even be somewhat threatened by it. Continue reading

The King is Dead. Long Live the King!

With a title like Death by 1,000 cuts: Mainstream storage array suppliers are bleeding, I couldn’t help but read Chris Mellor’s article on the decline of traditional storage arrays.  It starts off just as strong with:

Great beasts can be killed by a 1,000 cuts, bleeding to death from the myriad slashes in their bodies – none of which, on their own, is a killer. And this, it seems, is the way things are going for big-brand storage arrays, as upstarts slice away at the market…

And his reasons as to why are spot on from what we have seen in our target customer segment for HC3.

the classic storage array was under attack because it was becoming too limiting, complex and expensive for more and more use-cases.

Looking at our own use-case for HC3, storage array adoption for our target segment (the SMB) rose with the demand for virtualization, providing shared storage for things like live migration and failover of VMs.  It was a necessary evil that ensured critical workloads weren’t going to go down for days or even weeks in the event of a hardware failure. Continue reading

Video: How to Add Resources to HC3

With an infrastructure refresh on the horizon, a common question asked in IT used to be:

“What should I buy today that will meet my storage demand over the next X years?”

Historically, that is because IT groups needed to purchase today what they would need 3-5 years from now in order to push out a painful forklift upgrade that would inevitably come with reaching max capacity in a monolithic storage array.  After the introduction of “scale-out” storage (where you were no longer locked into the capacity limitations of a single physical storage array), the question then became:

“What should I buy today that will grow alongside my storage demand over the next X years?”

This meant that customers could buy what they needed for storage today knowing that they could add to their environment to scale-out the storage capacity and performance down the road.  There were no forklift upgrades or data migrations to deal with.  Instead, it offered the seamless scaling of storage resources to match the needs of the business.

Now with hyperconverged solutions like HC3, where the scale-out architecture allows users to easily add nodes to the infrastructure to scale out both compute and storage, the question has changed yet again.  Hyperconverged customers now ask themselves:

“What should I buy today that will grow alongside my infrastructure demand over the next X years?”

Adding nodes to HC3 is simple.  After racking and plugging in power/networking, users simply assign an IP address and initialize the node.  HyperCore (HC3’s ultra-easy software) then takes over from there, seamlessly aggregating the resources of that node with the rest of the HC3 cluster.  There is no disruption to the running VMs.  In fact, the newly added spindles are immediately available to the running VMs, giving an immediate performance boost with each node added to the cluster.  Check out the demo below to see HC3’s scalability in action!


SMB IT Challenges

There was a recent article that focused on the benefits that city, state, and local governments have gained from implementing HyperConvergence (side note for anyone interested in joining: it was brought to my attention in a new HyperConvergence group on LinkedIn where such articles are being posted and discussed).  The benefits cited in the article were:

  • Ease of management,
  • Fault tolerance,
  • Redundancy, and late in the article…
  • Scalability.

I’m sure it isn’t surprising given our core messaging around Scale’s HC3 (Simplicity, High Availability and Scalability), but I agree wholeheartedly with the assessment.

It occurred to me that the writer literally could have picked any industry and the same story could have been told.  When the IT Director from Cochise County, AZ says:

“I’ve seen an uptick in hardware failures that are directly related to our aging servers”,

it could just as easily have been the Director of IT at the manufacturing company down the street.  Or when the City of Brighton, Colorado’s Assistant Director of IT is quoted as saying,

“The demand (for storage and compute resources) kept growing and IT had to grow along with it”,

that could have come out of the mouth of just about any of the customers I talk to each week. Continue reading

What is Hypervisor Convergence: The Infrastructure Convergence Continuum Blog Series – Reference Architecture (Part 2 of 4)

Infrastructure Convergence Continuum


In our last post on the Infrastructure Convergence Continuum, we focused on the Build Your Own / DIY Architecture for virtualization infrastructure.  There are architectural limitations with that approach, which we addressed in the first post (“the inverted pyramid of doom”) and which may be worth reviewing as a baseline for today’s post.  Why? Spoiler alert: the Reference Architecture and Converged Architecture we’ll be covering today share that same underlying architecture. Continue reading
