Tag Archives: Replication

3-Node Minimum? Not So Fast

For a long time, when you purchased HC3, you were told there was a 3-node minimum. Three nodes is the minimum required to create a resilient, highly available cluster, and the HC3 architecture built around that 3-node design prevents data loss even in the event of a whole node failure. Despite these compelling reasons to require 3 nodes, Scale Computing last week announced a new single node appliance configuration. Why now?

Recent product updates have enhanced the replication and disaster recovery capabilities of HC3, making a single node appliance a compelling solution in several scenarios. One such scenario is the distributed enterprise. Organizations with multiple remote or branch offices may not have infrastructure requirements that warrant a 3-node cluster; for them, a single node appliance is a right-sized infrastructure solution.


In a remote or branch office, a single node can run a number of workloads and be managed easily from a central office. Although a single node lacks clustered, local high availability, it can be replicated for DR back to an HC3 cluster at the central office, giving those workloads a high level of protection. Deploying single nodes in this way offers an infrastructure solution for the distributed enterprise that is both simple and affordable.

Another compelling scenario where the single node makes perfect sense is as a DR target for an HC3 cluster. Built-in replication can be configured quickly, without extra software, to a single HC3 node located locally or remotely. While you will likely want the local high availability and data protection a 3-node cluster provides for primary production, a single node may suffice for a DR strategy where you only need to fail over your most critical VMs to continue operations temporarily. Used this way, a single node appliance is cost effective and provides a high level of protection for your business.


Finally, although a single node has no clustered high availability, for very small environments the single node appliance can be deployed with a second appliance as a DR target, providing a level of data protection and availability that is acceptable for many small businesses. The ease of deployment, ease of management, and DR capabilities that make a full-blown HC3 cluster so appealing are the same reasons to love the single node appliance.

Find out more about the single node appliance configuration (or as I like to call it, the SNAC-size HC3) in our press release and solution brief.


HyperCore v6 – A Closer Look at Built-in Remote Disaster Recovery

As you saw in last week’s press release, Scale Computing’s HC3 now includes VM-level replication as a key new feature in HyperCore v6. Administrators can now set up replication on a per-VM basis for integrated remote Disaster Recovery, building on the unique snapshot and cloning functionality already built into HyperCore v5. Since the introduction of HyperCore v5, users have been able to manually take near-instant, VM-level snapshots that can be cloned in an extremely space-efficient manner (“thin clones”).
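
To make the idea of space-efficient “thin clones” concrete, here is a minimal Python sketch of copy-on-write cloning. It is purely illustrative (the Snapshot and ThinClone classes are my own stand-ins, not HyperCore code): the clone shares every block of its parent snapshot and only stores its own copy of a block once that block is written.

```python
# Illustrative copy-on-write sketch -- not HyperCore's implementation.
# A "thin clone" starts by referencing every block of its parent snapshot
# and only stores its own copy of a block once that block is overwritten.

class Snapshot:
    def __init__(self, blocks):
        # blocks: mapping of block number -> immutable block contents
        self.blocks = dict(blocks)

class ThinClone:
    def __init__(self, parent):
        self.parent = parent      # shared, read-only blocks
        self.own_blocks = {}      # blocks written after cloning

    def read(self, block_no):
        # Prefer the clone's own copy; fall back to the parent snapshot.
        return self.own_blocks.get(block_no, self.parent.blocks.get(block_no))

    def write(self, block_no, data):
        # Copy-on-write: only now does the clone consume extra space.
        self.own_blocks[block_no] = data

base = Snapshot({0: b"boot", 1: b"os", 2: b"app"})
clone = ThinClone(base)
clone.write(2, b"app-v2")            # one new block stored
print(clone.read(0), clone.read(2))  # b'boot' b'app-v2'
```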

Now in version 6, HyperCore allows users to set up continuous replication to a secondary HC3 cluster, which automatically takes snapshots of the selected VMs and moves only the unique blocks to the remote site.

Then, to restore on the secondary cluster, simply clone the VM from the latest (or an earlier) automated or manual snapshot. Being able to spin up these VMs quickly, on their own private network, makes disaster recovery testing a breeze. Of course, if this isn’t a test and your VM at the secondary site is now production, HC3 continues to track the unique blocks that are created and ONLY sends those blocks back to the primary site when it’s time to fail back.
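
One way to picture “moving only the unique blocks” is as a diff between the last replicated snapshot and the new one, with failback being the same diff taken in the other direction. The Python sketch below is a simplified illustration under that assumption, not the actual HyperCore replication protocol.

```python
# Simplified snapshot-delta replication sketch -- illustrative only.
# Each snapshot is modeled as a dict of block number -> block contents.

def changed_blocks(previous, current):
    """Return only the blocks that are new or different in `current`."""
    return {n: data for n, data in current.items() if previous.get(n) != data}

def replicate(previous, current, remote):
    """Send just the delta to the remote copy and apply it there."""
    delta = changed_blocks(previous, current)
    remote.update(delta)  # only the unique blocks cross the wire
    return len(delta)

# Initial replication: everything is "changed" relative to an empty remote.
snap1 = {0: b"boot", 1: b"os", 2: b"data-v1"}
remote = {}
print(replicate({}, snap1, remote))     # 3 blocks sent

# Continuous replication: only the modified block moves.
snap2 = {**snap1, 2: b"data-v2"}
print(replicate(snap1, snap2, remote))  # 1 block sent

# Failback after running at the DR site is the same operation in reverse:
# diff the DR copy against the last snapshot the primary already holds.
```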

Replication Highlights:

  • Continuous VM-level Replication – HyperCore makes use of its space-efficient snapshot technology to replicate to a secondary site, tracking the blocks unique to each snapshot and sending only those changed blocks.
  • Low RPO/RTO – Simply “clone” a snapshot on the target cluster for the manual failover of a VM that is immediately bootable (a rough RPO calculation is sketched after this list).
  • Simple Disaster Recovery Testing – Testing a DR infrastructure plan is now as simple as cloning a snapshot on the target cluster and starting a VM. No disruption to ongoing replication.
  • Easy Failback after Disaster Recovery – After running a VM at the DR site, simply replicate the changed data back to the primary site for simple failback.
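
As a back-of-the-envelope illustration of the RPO side of that claim (the snapshot times and interval below are made-up numbers, not HC3 settings), the effective recovery point is simply the age of the newest snapshot that has finished replicating to the target:

```python
# Illustrative RPO check -- the snapshot times below are made up.
from datetime import datetime, timedelta

def effective_rpo(replicated_snapshot_times, now):
    """Worst-case data loss if the primary failed right now:
    the age of the most recent fully replicated snapshot."""
    return now - max(replicated_snapshot_times)

now = datetime(2016, 7, 18, 14, 0)
snaps = [now - timedelta(minutes=m) for m in (65, 35, 5)]
print(effective_rpo(snaps, now))  # 0:05:00 -> about five minutes of exposure
```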

Bring on the Demo!

There is nothing quite like a demonstration of this new technology. In this video you’ll see a number of things (the same sequence is also sketched as a rough script after the list)…

  1. Remote Connection Setup (0:08) – You’ll see me create a connection from my primary cluster (left) to a secondary cluster (right). Once the clusters are securely connected, I can then enable replication on any VMs between those two clusters.
  2. Replication Setup (0:40) and Initial Replication (1:05) – After cloning a VM, you’ll see me set up replication on that VM to the secondary cluster. The initial replication is time-lapsed, but you’ll see the progress in the snapshot view on the primary cluster (left) and, after it completes, the cloneable snapshot on the secondary cluster.
  3. Failover Test 1 (1:38), Automated Snapshot – I clone the VM from the snapshot, which is immediately bootable. That’s about as easy as it gets for DR testing!
  4. Failover Test 2 (1:58), Manual Snapshot – After making some changes to the VM (“replication” file on the desktop), I create a manual snapshot. Notice that the blocks unique to that snapshot are tracked separately from the initial replication snapshot (3:32). When I clone from the manual snapshot, you’ll see the “replication” text file appear on the desktop. DR plan tested again!
  5. Failback (4:30) – After making changes to the cloned VM on the secondary site (“Replication – Rollback”), I simply set up replication on the cloned VM back to the primary cluster. Since the majority of the data already exists at the primary site, it takes almost no time for my minor changes to replicate back. Once there, I clone the snapshot and I’m back in action on the primary cluster. (Note: Here (5:23) I also disconnect the NIC to spin this VM up without conflicting with my actual production VM…a nice trick for DR testing!)
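
For readers who prefer to follow the demo as a sequence of operations rather than timestamps, here is the same workflow as a rough script. The hc3 client object and its method names are hypothetical placeholders for whatever tooling you drive the HC3 UI or API with, not a real Scale Computing library.

```python
# Hypothetical orchestration of the demo steps. The `hc3` client object and
# its methods are illustrative placeholders, not an actual Scale Computing API.

def dr_demo(hc3, primary, secondary, vm_name):
    # 1. Remote connection setup: pair the two clusters.
    hc3.connect_clusters(primary, secondary)

    # 2. Replication setup: enable per-VM replication; the first pass is a
    #    full copy, after which only changed blocks are sent.
    hc3.enable_replication(vm_name, source=primary, target=secondary)

    # 3-4. Failover tests: clone the latest replicated snapshot on the
    #      secondary cluster and boot it on an isolated network.
    snap = hc3.latest_snapshot(vm_name, cluster=secondary)
    test_vm = hc3.clone(snap, name=vm_name + "-dr-test")
    hc3.disconnect_nic(test_vm)  # avoid colliding with the production VM
    hc3.power_on(test_vm)

    # 5. Failback: replicate the (small) set of changed blocks back to the
    #    primary cluster, then clone the resulting snapshot there.
    hc3.enable_replication(test_vm, source=secondary, target=primary)
    restored = hc3.clone(hc3.latest_snapshot(test_vm, cluster=primary),
                         name=vm_name + "-restored")
    hc3.power_on(restored)
```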