Tag Archives: failover

3-Node Minimum? Not So Fast

For a long time, when you purchased HC3, you were told there was a 3-node minimum. Three nodes are the minimum required to create a resilient, highly available cluster, and the HC3 architecture, built around this 3-node cluster design, prevents data loss even in the event of a whole node failure. Despite these compelling reasons to require 3 nodes, Scale Computing last week announced a new single node appliance configuration. Why now?

Recent product updates have enhanced the replication and disaster recovery capabilities of HC3, making a single node appliance a compelling solution in several scenarios. One such scenario is the distributed enterprise. Organizations with multiple remote or branch offices may not have infrastructure requirements that warrant a 3-node cluster. Instead, they can benefit from a single node appliance as a right-sized solution for their infrastructure.


In a remote or branch office, a single node can run a number of workloads and be easily managed remotely from a central office. Although a single node lacks clustered, local high availability, it can easily be replicated for DR back to an HC3 cluster at the central office, providing a high level of protection. Deploying single nodes in this way offers an infrastructure solution for the distributed enterprise that is both simple and affordable.

Another compelling scenario where the single node makes perfect sense is as a DR target for an HC3 cluster. Built-in replication can be configured quickly, and without extra software, to a single HC3 node located locally or remotely. While you will likely want the local high availability and data protection a 3-node cluster provides for primary production, a single node may suffice for a DR strategy where you only need to fail over your most critical VMs to continue operations temporarily. Used this way, a single node appliance is cost effective and provides a high level of protection for your business.

[Diagram: Replication]

Finally, although a single node has no clustered high availability, in very small environments the single node appliance can be deployed with a second appliance as a DR target, delivering a level of data protection and availability that is acceptable for many small businesses. The same ease of deployment, ease of management, and DR capabilities that define a full-blown HC3 cluster are reasons to love the single node appliance as well.

Find out more about the single node appliance configuration (or as I like to call it, the SNAC-size HC3) in our press release and solution brief.


The Next-Generation Server Room

There was a recent article on Network Computing regarding the Next Generation Data Center that got me thinking about our SMB target customer and the next-generation server room. Both the enterprise and the SMB face the influx of traffic growth described in the article (clearly at different levels, but an influx nonetheless). So how will the SMB cope? How will an IT organization with limited time and money react? By focusing on simplicity in the infrastructure.

Elimination of Legacy Storage Protocols through Hypervisor Convergence

There is an ongoing trend to virtualize workloads in the SMB, which traditionally has meant adding a SAN or a NAS to provide shared storage for high availability. With the introduction of Hypervisor Converged architectures through products like Scale's HC3, that requirement no longer exists. In this model, end users can take advantage of the benefits of high availability without the complexity that comes with legacy storage protocols like iSCSI or NFS. Not only does this reduce the management overhead of the shared storage, it also simplifies the vendor support model dramatically. In the event of an issue, a single vendor can be called for support, with no ability to place the blame on another component in the stack.

Simplicity in Scaling

Moore's Law continues to hold as better, faster, and cheaper equipment becomes available year after year. By implementing a scale-out architecture in the infrastructure, IT organizations can take advantage of this: they purchase what they need today, knowing they can buy equipment at tomorrow's prices to scale out resources when the need arises. The ability to mix and match hardware types in a hypervisor converged model also gives users granularity in their scaling to match the requirements of the workloads at that time (such as adding a storage-only node to an HC3 compute cluster to scale out only the storage resources).
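To make that granularity concrete, here is a small, purely illustrative sketch. The node specs are made-up numbers, not actual Scale models; the point is simply that a storage-heavy node grows only the resource that is running short.

```python
# Illustrative only: made-up node specs showing how mixing node types lets you
# grow just the resource that is running short instead of buying a full node.
cluster = [
    {"name": "node1", "cores": 16, "ram_gb": 128, "storage_tb": 7.2},
    {"name": "node2", "cores": 16, "ram_gb": 128, "storage_tb": 7.2},
    {"name": "node3", "cores": 16, "ram_gb": 128, "storage_tb": 7.2},
]

def totals(nodes):
    return {key: sum(n[key] for n in nodes) for key in ("cores", "ram_gb", "storage_tb")}

print("before:", totals(cluster))

# Storage is the bottleneck, so add a hypothetical storage-heavy node rather
# than another general-purpose node.
cluster.append({"name": "storage-node", "cores": 8, "ram_gb": 64, "storage_tb": 28.8})
print("after: ", totals(cluster))
```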

HC3 Under The Hood: Virtual Machine Placement and Failover

A few posts ago we walked through the process of creating a new VM on HC3. One thing you may have noticed is that nowhere in the process did we specify a physical location for that VM to be created, and even when we powered it on, once again we did not specify a physical location for it to run. We just created it and turned it on. Beyond the simplicity this presents, it highlights some very important concepts that make HC3 uniquely scalable and flexible.

In the last post, we discussed how HC3 VM virtual hard disks are actually stored as files in the qcow2 format, and even how we can access those files for advanced operations. But once again, we didn't discuss (or need to discuss) where the data for those files physically resided.
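As a general aside for readers who want to look inside a qcow2 file themselves, the standard qemu-img tool can report its format and sizes. The short sketch below assumes qemu-img is installed and uses a placeholder path; it says nothing about where HC3 keeps these files internally.

```python
# Sketch: inspect a qcow2 disk image with qemu-img (placeholder path; assumes
# the qemu-img utility is installed on the machine running this script).
import json
import subprocess

disk_path = "/tmp/exported-vm-disk.qcow2"  # placeholder, not an HC3 path

result = subprocess.run(
    ["qemu-img", "info", "--output=json", disk_path],
    capture_output=True, text=True, check=True)

info = json.loads(result.stdout)
print(info["format"], info["virtual-size"], info.get("actual-size"))
```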

We will dig into this much further, but the real magic of HC3 is that all nodes share access to a single pool of storage: all the disks in all nodes of the cluster are pooled together as a common resource for storing and retrieving data. That happens automatically when the HC3 cluster is "initialized" and requires no additional management by the user, even as additional nodes are added to the resource pool down the line.
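A toy model may help picture the pooling idea. The sketch below is a conceptual illustration, not HC3's implementation: every disk from every node lands in one pool, and the pool itself decides where writes go, so callers never pick a physical location.

```python
# Conceptual model of a cluster-wide storage pool (not HC3's actual design):
# disks from every node are pooled, and placement is decided by the pool.
class StoragePool:
    def __init__(self):
        self.disks = []        # entries of [node_name, disk_index, free_gb]
        self.placement = {}    # chunk_id -> (node_name, disk_index)

    def add_node(self, node_name, disk_sizes_gb):
        for index, size_gb in enumerate(disk_sizes_gb):
            self.disks.append([node_name, index, size_gb])

    def write(self, chunk_id, size_gb):
        # Place the chunk on whichever disk in the whole cluster has the most
        # free space; the caller never specifies a node.
        disk = max(self.disks, key=lambda d: d[2])
        if disk[2] < size_gb:
            raise RuntimeError("pool is full")
        disk[2] -= size_gb
        self.placement[chunk_id] = (disk[0], disk[1])
        return self.placement[chunk_id]

pool = StoragePool()
for node in ("node1", "node2", "node3"):
    pool.add_node(node, [960, 960, 960, 960])   # example disk sizes in GB
print(pool.write("vm-disk-chunk-0001", 8))       # e.g. ('node1', 0)
```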

Because all HC3-capable nodes read and write data using the entire pool of storage, and all HC3 nodes have access to all the data, any virtual machine can be started on any node of the HC3 cluster based on the availability of the compute resources that VM requires. For example, if a new VM requires 16GB of RAM, only certain nodes may have that much currently available, and HC3 makes this node selection automatically. HC3 also allows running VMs to be "live migrated" to other HC3 nodes without the VM being shut down and with no noticeable impact to the workload being run or the clients connecting to it. In addition, should an HC3 node fail, any VMs that were running on that node will be quickly restarted on the remaining nodes, since every HC3 node has access to the same pool of redundant storage. Once again, because the storage is available to all nodes in the HC3 system, the primary factor for determining where to fail over VMs is the availability of the compute resources required by each workload, and HC3 determines the optimal location automatically.
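The sketch below models that placement and failover logic at a very high level. It is an assumption-laden illustration rather than Scale's actual scheduler: because storage is shared by every node, choosing where a VM runs (or restarts) reduces to finding a node with enough free compute resources, simplified here to RAM alone.

```python
# Simplified placement/failover model (illustrative assumptions, not HC3 code).
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    free_ram_gb: int
    online: bool = True
    vms: list = field(default_factory=list)

@dataclass
class VM:
    name: str
    ram_gb: int

def place(vm, nodes):
    """Start the VM on the online node with the most free RAM that can fit it."""
    candidates = [n for n in nodes if n.online and n.free_ram_gb >= vm.ram_gb]
    if not candidates:
        raise RuntimeError(f"no node has {vm.ram_gb}GB free for {vm.name}")
    node = max(candidates, key=lambda n: n.free_ram_gb)
    node.free_ram_gb -= vm.ram_gb
    node.vms.append(vm)
    return node

def fail_node(failed, nodes):
    """Restart the failed node's VMs elsewhere; their data already lives in the shared pool."""
    failed.online = False
    evacuated, failed.vms = failed.vms, []
    return [(vm.name, place(vm, nodes).name) for vm in evacuated]

nodes = [Node("node1", 40), Node("node2", 20), Node("node3", 48)]
place(VM("sql-server", 16), nodes)   # lands on node3, which has the most free RAM
print(fail_node(nodes[2], nodes))    # its VMs restart on the surviving nodes
```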

For those who are more visual (like me), the following diagram may help picture this more clearly.

[Diagram: ICOS stack]

In the next post of this series, we will dive into the HC3 distributed storage layer and detail how data is stored across all the physical disks in the cluster, providing data redundancy should a disk drive fail and aggregating the I/O performance of all the drives in the system.