Is Hyperconvergence Here to Stay?

As virtualization continues to evolve into hyperconvergence, cloud, and other technologies, both IT professionals and analysts are looking into the future to predict the velocity, direction, and longevity of these technologies. Business leaders want their spending decisions to be based on the latest intel and technologies that will carry their business into the future.

Analysts George J. Weiss and Andrew Butler from Gartner have put together their predictions on the future of hyperconvergence in this recent report: Prepare for the Next Phase of Hyperconvergence

Scale Computing has been a leader in hyperconvergence, helping define the current evolution of hyperconvergence with innovative storage architecture and unsurpassed simplicity.  Whatever the future holds for the evolution of hyperconvergence, Scale Computing plans to be at the forefront of hyperconvergence innovation.  We expect to continue delivering the simplicity, scalability, and availability our customers have come to expect from hyperconvergence.

One Customer’s Experience With Scale Computing

At Scale Computing, we do our best not only to build the best solutions for our customers, but also to explain why our solutions really are the best to those still deciding on a solution. In reality, no one can explain it as well as one of our actual customers.

This week we have the opportunity to share the Scale Computing experience of Nathan Beam of Bridgetree in his own words, on his own blog. Here is the link:

Simply Hyper-converged – An Overview of Scale Computing’s Easy-To-Use HC3 Virtualization Platform

Just to pull a quick quote: “My own experience and pretty much that of every other customer testifies to the fact that we all love our product. I searched long and hard trying to find unhappy owners of HC3 equipment… to this day I still don’t know if any exist.”

We look forward to sharing more of our user experiences with you in the future. If you are another HC3 user who wants to share your story here, contact me: dpaquette@scalecomputing.com. To see some of our other customer success stories, check out our case studies. For additional customer reviews, check out our page on the Spiceworks Community.

4 Lessons from the AWS Outage Last Week

The Amazon Web Services (AWS) Simple Storage Service (S3) experienced an outage on Tuesday last week and was down for several hours. S3 is object storage for around 150,000 websites and other services according to SimilarTech. For IT professionals, here are four takeaways from this outage.

#1 – It Happens

No infrastructure is immune to outages. No matter how big the provider, outages happen and downtime occurs. Whether you are hosting infrastructure yourself or relying on a third party, outages will happen eventually. Putting your eggs in someone else's basket does not necessarily buy you any more peace of mind. In this case, S3 was brought down by a simple typo from a single individual. That is all it takes to cause this much disruption. The premiums you pay to be hosted on a massive infrastructure like AWS will never prevent the inevitable failures, no matter how massive the platform becomes.

#2 – The Bigger They Are, the Harder They Fall

When a service is as massive as AWS, problems affect millions of users, including customers trying to do business with companies that rely on S3. Yes, outages do happen, but do they have to take down so much of the internet with them when they do? Like the DDoS attack I blogged about last fall, companies leave themselves open to these massive outages when they rely heavily on public cloud services. How much more confidence in your business would your customers have if they heard about a massive outage on the news but knew that your systems were unaffected?

#3 – It’s No Use Being an Armchair Quarterback

When an outage occurs with your third party provider, you call, you monitor, and you wait. You hear about what is happening and all you can do is shake your fist in the air knowing that you probably could have done better to either prevent the issue or resolve it more quickly if you were in control. But you aren’t in any position to do anything because you are reliant on the hoster. You have no option but to simply accept the outage and try to make up for the loss to your business. You gave up your ability to fix the problem when you gave that responsibility to someone else.

Just two weeks ago, I blogged about private cloud and why some organizations feel they can’t rely on hosted solutions because of any number of failures they would have no control over. If you need control of your solution to mitigate risk, you can’t also give that control to a third party.

#4 – Have a Plan

Cloud services are a part of IT these days and most companies are already doing some form of hybrid cloud with some services hosted locally and some hosted in the cloud. Cloud-based applications like Salesforce, Office365, and Google Docs have millions of users. It is inevitable that some of your services will be cloud-based, but they don’t all have to be. There are plenty of solutions like hyperconverged infrastructure to host many services locally with the simplicity of cloud infrastructure. When outages at cloud providers occur, make sure you have sufficient infrastructure in place locally so that you can do more than just be an armchair quarterback.


Public cloud services may be part of your playbook but they don’t have to be your endgame. Take control of your data center and have the ability to navigate your business through outages without being at the mercy of third party providers. Have a plan, have an infrastructure, and be ready for the next time the internet breaks.

When Cloud is Not What You Signed Up For

The AWS S3 outage on Tuesday confirmed the worst fears of many that bigger is not better. Some 150,000 websites and other services suffered three hours of downtime, all because of an internal issue at S3. What we saw yet again was that a massive data center like S3 proved to be no more reliable than private data centers happily achieving five nines.

The real issue here is not that there was an outage. The outage was unfortunately just an inevitability that proves no infrastructure is invulnerable. No, the real issue is the perception that a cloud service like AWS can be made too big to fail. Instead, what we saw was that the bigger they are, the harder they fall.

Now, I like public cloud services and I use them often.  In fact, I used Google Docs to type a draft of this very blog post. However, would I trust my business critical data to public cloud? Probably not. Maybe I am old fashioned but I have had enough issues with outages of either internet services or cloud services to make me a believer in investing in private infrastructure.

The thing about public cloud is that it offers simplicity. Just log in and manage VMs or applications without ever having to worry about a hard drive failure or a power supply going wonky. That simplicity comes at a premium, sold with the idea that you will save money by paying only for what you use, without the over-provisioning you would expect when buying your own gear. That seems like wishful thinking to me, because in my experience, managing costs with cloud computing can be a tricky business, and it can be a full-time job to make sure you aren't spending more than you intend.

Is the cost of managing private infrastructure even more? You must buy servers, storage, hypervisors, management solutions, and backup/DR, right? Not anymore. Hyperconverged infrastructure (HCI) is about delivering infrastructure that is pre-integrated and so easy to manage that the experience of using it is the same as using cloud. In fact, just last week I talked about how it really is a private cloud solution.

What is the benefit of owning your own infrastructure? First, control. You get to control your fate with the ability to better plan for and respond to disaster and failure, mitigating risk to your level of satisfaction. No one wants to be sitting on their hands, waiting, while their cloud provider is supposedly working hard to fix the outage. Second, cost. Costs are more predictable with HCI and there is less over-provisioning than with traditional virtualization solutions. There are also no ongoing monthly premiums paid to a third party who is supposed to be eliminating the risk of downtime.

Cloud just isn’t the indestructible castle in the sky that we were meant to believe it was. Nothing is, but with HCI, you get your own castle and you get to rule it the way you see fit. You won’t be stuck waiting to see if all the king’s horses and all the king’s men can put Humpty back together again.

Is Hyperconvergence the Private Cloud You Need?

If you are an IT professional, you are most likely familiar with at least the term “hyperconvergence” or “hyperconverged infrastructure”. You are also undoubtedly aware of cloud technology and some of the options for public, private, and hybrid cloud.  Still, this discussion merits a brief review of private cloud before delving into how hyperconvergence fits into the picture.

What is a Private Cloud?

The basic premise behind cloud technology is an abstraction of the management of VMs from the underlying hardware infrastructure. In a public cloud, the infrastructure is owned and hosted by someone else, making it completely transparent. In a private cloud, you own the infrastructure and still need to manage it, but the cloud management layer simplifies day-to-day operation of VMs compared to traditional virtualization.

Traditional virtualization is complicated by managing hypervisors running on individual virtual hosts and managing storage across hosts. When managing a single virtual host, VM creation and management is fairly simple. In a private cloud, you still have that underlying infrastructure of multiple hosts, hypervisors, and storage, but the cloud layer provides the same simple management experience of a single host but spread across the whole data center infrastructure.

Many organizations who are thinking of implementing private cloud are also thinking of implementing public cloud, creating a hybrid cloud consisting of both public and privately hosted resources.  Public cloud offers added benefits for pay-per-use elasticity for seasonal business demands and cloud-based applications for productivity.

Why Not Put Everything in Public Cloud?

Many organizations have sensitive data that they prefer to keep onsite or are required to do so by regulation. Maintaining data onsite can provide greater control and security than keeping it in the hands of a third party. For these organizations, private cloud is preferable to public cloud.

Some organizations require continuous data access for business operations and prefer not to risk interruption due to internet connectivity issues. Maintaining systems and data onsite allows these organizations to have more control over their business operations and maintain productivity. For these organizations, private cloud is preferable to public cloud.

Some organizations prefer the Capex model of private cloud vs. the Opex model of public cloud.  When done well, owning and managing infrastructure can be less expensive than paying someone else for hosting. The costs can be more predictable for onsite implementation, making it easier to budget. Private cloud is preferable for these organizations.

How does Hyperconvergence Fit as a Private Cloud?

For all intents and purposes, hyperconverged infrastructure (HCI) offers the same or better experience as a traditional private cloud. You could even go so far as to say it is the next generation of private cloud because it improves on some of the shortcomings of traditional private clouds. The simplicity of managing VMs in HCI is the same as traditional private clouds and brings an even simpler approach to managing the underlying hardware.

HCI is a way of combining the elements of traditional virtualization (servers, storage, and hypervisor) into a single appliance-based solution. With traditional virtualization, you were tasked with integrating these elements from multiple vendors into a working infrastructure, dealing with any incompatibilities, managing through multiple consoles, and so on. HCI is a virtualization solution that has all of these elements pre-integrated into a more or less turnkey appliance. There should be no need to configure any storage, configure any hypervisor installs on host servers, or manage through more than a single interface.

Not all HCI vendors are equal and some rely on third party hypervisors so there are still elements of multi-vendor management, but true HCI solutions own the whole hardware and virtualization stack, providing the same experience as a private cloud. Users are able to focus on creating and managing VMs rather than worrying about the underlying infrastructure.

With the appliance-based approach, hyperconvergence is even easier to scale out than traditional private clouds or even the cloud-in-a-box solutions that also provide some level of pre-integration. HCI scalability should be as easy as plugging a new appliance node into a network and telling it to join an existing HCI cluster of appliance nodes.
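The "plug in a node, join the cluster" scaling model described above can be sketched in a few lines. This is a purely illustrative Python model, not any vendor's actual API; the `Cluster` and `Node` names are hypothetical:

```python
# Toy model of HCI scale-out: joining a node absorbs its disks
# into the single cluster-wide storage pool automatically.

class Node:
    def __init__(self, name, disk_tb):
        self.name = name
        self.disk_tb = disk_tb  # raw capacity this appliance contributes

class Cluster:
    def __init__(self):
        self.nodes = []

    def join(self, node):
        """Adding a node is the whole scaling story: no LUNs to carve,
        no storage to configure, just one more member of the pool."""
        self.nodes.append(node)

    @property
    def pool_tb(self):
        # One pool spanning every disk of every node.
        return sum(n.disk_tb for n in self.nodes)

cluster = Cluster()
for name, tb in [("node1", 12), ("node2", 12), ("node3", 24)]:
    cluster.join(Node(name, tb))

print(cluster.pool_tb)  # 48
```

The point of the sketch is that capacity growth is additive and automatic; the administrator never touches the pool itself.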

HCI is generally more accessible and affordable than traditional private clouds or cloud-in-a-box solutions because it can start and then scale out from very small implementations without any added complexity. Small to midmarket organizations who experienced sticker shock at the acquisition and implementation costs of private clouds will likely find the costs and cost benefits of HCI much more appealing.  


Private cloud is a great idea for any organization whose goals include the control and security of onsite infrastructure and simplicity of day-to-day VM management. These organizations should be looking to hyperconverged infrastructure as a private cloud option to achieve those goals vs traditional private cloud or cloud-in-a-box options.

5 Things to Think about with Hyperconverged Infrastructure

1. Simplicity

A hyperconverged infrastructure (HCI) should take no more than 30 minutes to go from out of the box to creating VMs. Likewise, an HCI should not require that the systems admin be a VCP, a CCNA, and a SNIA-certified storage administrator to effectively manage it. Any properly designed HCI should be manageable by an average Windows admin with nearly no additional training. It should be so easy that even a four-year-old could use it…

2. VSA vs. HES

In many cases, rather than handing disk subsystems with SAN flexibility built in at the block level directly to production VMs, HCI vendors choose to virtualize a SAN controller into each node of their architecture, pulling the legacy SAN and its storage protocols up into the servers as separate VMs. This creates I/O path loops, with each I/O having to pass multiple times through VMs in the system and in adjacent systems. These storage controller VMs (sometimes called VSAs, or virtual storage appliances) consume so much CPU and RAM that they redefine inefficient, especially in the mid-market. In one case I can think of, the VSA running on each server (or node) in a vendor's architecture begins its RAM consumption at 16GB and 8 vCores per node, then grows based on how much additional feature implementation, I/O loading, and maintenance it is doing. With a different vendor, the VSA reserves around 50GB of RAM per node on the entry-point offering, and over 100GB of RAM per node on the most common platform: a 3-node cluster reserving over 300GB of RAM just for I/O path overhead. An average SMB to mid-market customer could run their entire operation in just the CPU and RAM resources these VSAs consume.
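To put those VSA figures in perspective, here is the arithmetic as a quick Python sketch. The 100GB reservation and 3-node cluster are the numbers cited above; the 256GB node size is a hypothetical appliance configuration chosen purely for illustration:

```python
# Back-of-the-envelope math for VSA overhead in a small cluster.

NODES = 3
VSA_RAM_GB_PER_NODE = 100   # per-node VSA reservation cited above
NODE_RAM_GB = 256           # hypothetical RAM per appliance node

total_vsa_ram = NODES * VSA_RAM_GB_PER_NODE          # RAM gone cluster-wide
overhead_pct = 100 * VSA_RAM_GB_PER_NODE / NODE_RAM_GB

print(total_vsa_ram)        # 300
print(round(overhead_pct))  # 39
```

On those assumptions, roughly 39% of each node's RAM is reserved for the I/O path before the first application VM even boots.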

There is a better alternative: the HES approach. It eliminates the dedicated storage servers, storage protocol overhead, resource consumption, multi-layer object files, filesystem nesting, and associated gear by moving the hypervisor directly into the OS of a clustered platform as a set of kernel modules, with the block-level storage function residing alongside the kernel in userspace. This completely eliminates the SAN and its storage protocols, rather than just virtualizing them and replicating copies of them on each node in the platform. The approach simplifies the architecture dramatically while regaining the efficiency originally promised by virtualization.

3. Stack Owners vs. Stack Dependents

Any proper HCI should not be dependent on another company's stack for its code. To be efficient, self-aware, self-healing, and self-load-balancing, the architecture needs to be implemented holistically rather than pieced together from different bits from different vendors. By owning the stack, an HCI vendor is able to do things that weren't feasible or realistic with legacy virtualization approaches: hot and rolling firmware updates at every level, 100% tested rates on firmware vs. customer configurations, 100% backwards and forwards compatibility between different hardware platforms; the list goes on for quite a while.

4. Using Flash Properly Instead of as a Buffer

Several HCI vendors use SSD and flash only (or almost only) as a cache buffer to hide the very slow I/O paths they have chosen to build on VSAs and erasure coding (formerly known as software RAID 5/6/X) between virtual machines and their underlying disks. The result amounts to a Rube Goldberg machine of an I/O path, one that consumes 4 to 10 disk I/Os or more for every I/O the VM needs done. The alternative is to use flash and SSD as proper tiers, with AI-based heat mapping and a QoS-like mechanism in place to automatically put the right workloads in the right place at the right time, with the flexibility to move workloads fluidly between tiers and to dynamically allocate flash on the fly to workloads that demand it (up to placing the entire workload in flash). Any architecture that REQUIRES flash to function at an acceptable speed has clearly not been architected efficiently. If turning off the flash layer results in I/O speeds best described as glacial, then the vendor is hardly being efficient in their use of flash or solid state. Flash is not meant to be the curtain that hides the efficiency issues of the solution.
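The "right workloads in the right place" idea boils down to promoting the hottest blocks to flash, within flash capacity, and leaving the cold ones on spinning disk. A toy heat-map sketch, purely illustrative and not any vendor's actual tiering algorithm:

```python
from collections import Counter

def place_blocks(access_counts, flash_capacity):
    """Promote the hottest blocks to the flash tier; the rest stay on disk."""
    hot_first = [blk for blk, _ in access_counts.most_common()]
    flash = set(hot_first[:flash_capacity])
    disk = set(hot_first[flash_capacity:])
    return flash, disk

# Hypothetical workloads with their observed access counts (the heat map).
heat = Counter({"db-log": 900, "db-data": 450, "archive": 3, "iso-store": 1})
flash, disk = place_blocks(heat, flash_capacity=2)
print(sorted(flash))  # ['db-data', 'db-log']
```

Re-running the placement as the heat map changes is what lets workloads move fluidly between tiers instead of flash acting as a fixed cache in front of a slow path.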

5. Future Proofing Against the “Refresh All, Every 5 Years” Spiral

Proper HCI implements self-aware, bi-directional live migration across dissimilar hardware. This means the administrator is not boat-anchored to the technology of the point in time of acquisition; rather, they can avoid over-buying on the front end and take full advantage of Moore's law and technical advances as they come and the need arises. As lower-latency, higher-performance technology comes to the masses, attaching it to an efficient software stack is crucial to eliminating the "throw away and start over" refresh cycle every few years.

*Bonus* 6. Price 

Hyperconvergence shouldn’t come at a 1600+% price premium over the cost of the hardware it runs on. Hyperconvergence should be affordable: more so than the legacy approach was, and far more so than the VSA-based approach is.

These are just a few points to keep in mind as you investigate which hyperconverged platform is right for your needs.

Behind the Scenes: Architecting HC3

Like any other solution vendor, at Scale Computing we are often asked what makes our solution unique. In answer to that query, let’s talk about some of the technical foundation and internal architecture of HC3 and our approach to hyperconvergence.

The Whole Enchilada

With HC3, we own the entire software stack which includes storage, virtualization, backup/DR, and management. Owning the stack is important because it means we have no technology barriers based on access to other vendor technologies to develop the solution. This allows us to build the storage system, hypervisor, backup/DR tools, and management tools that work together in the best way possible.


At the heart of HC3 is our SCRIBE storage management system. This is a complete storage system developed and built in house specifically for use in HC3. Using a storage striping model similar to RAID 10, SCRIBE stripes storage across every disk of every node in a cluster. All storage in the cluster is always part of a single cluster-wide storage pool, requiring no manual configuration. New storage added to the cluster is automatically added to the storage pool. The only aspect of storage that the administrator manages is creation of virtual disks for VMs.
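The RAID 10-like wide striping described above can be illustrated with a toy placement function: every block gets a primary copy and a mirror copy, and the two copies always land on different nodes so a node failure never loses both. This is a simplified sketch under assumed names, not SCRIBE's actual placement logic:

```python
def stripe_write(block_id, nodes, disks_per_node):
    """Place a block and its mirror on two different nodes (RAID 10-like)."""
    total_disks = len(nodes) * disks_per_node
    primary = block_id % total_disks
    # Shifting by a full node's worth of disks guarantees the mirror
    # lands on a different node than the primary.
    mirror = (primary + disks_per_node) % total_disks

    def locate(d):
        return (nodes[d // disks_per_node], d % disks_per_node)

    return locate(primary), locate(mirror)

nodes = ["node1", "node2", "node3"]
p, m = stripe_write(block_id=7, nodes=nodes, disks_per_node=4)
print(p, m)              # ('node2', 3) ('node3', 3)
assert p[0] != m[0]      # copies always on different nodes
```

Because placement is a pure function of the block address and the cluster layout, every disk of every node participates in the pool with no per-node configuration.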

The ease of use of HC3 storage is not even the best part. What is really worth talking about is how virtual disks for VMs on HC3 access storage blocks from SCRIBE as if they were direct-attached storage on a physical server, with no layered storage protocols. There is no iSCSI, no NFS, no SMB or CIFS, no VMFS, nor any other protocol or file system. There is also no need in SCRIBE for any virtual storage appliance (VSA) VMs, which are notorious resource hogs. The file system laid down by the guest OS in the VM is the only file system in the stack, because SCRIBE is not a file system; SCRIBE is a block engine. The absence of the storage protocols that would sit between VMs and virtual disks in other virtualization systems means the I/O paths in HC3 are greatly simplified and thus more efficient.

Without owning both the storage and the hypervisor, made possible by creating our own SCRIBE storage management system, no existing storage layer would have allowed us to achieve this level of efficient integration with the hypervisor.


Luckily we did not need to completely reinvent virtualization, but were able to base our own HyperCore hypervisor on industry-trusted, open-source KVM. Having complete control over our KVM-based hypervisor not only allowed us to tightly embed the storage with the hypervisor, but also allowed us to implement our own set of hypervisor features to complete the solution.

One of the ways we were able to improve upon existing standard virtualization features was through our thin cloning capability. We were able to take the advantages of linked cloning which was a common feature of virtualization in other hypervisors, but eliminate the disadvantages of the parent/child dependency. Our thin clones are just as efficient as linked clones but are not vulnerable to issues of dependency with parent VMs.
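The difference between a linked clone and a dependency-free thin clone comes down to how shared blocks are tracked. A minimal reference-counting sketch, purely illustrative of the concept rather than HC3's implementation: the clone holds its own references to shared blocks, so deleting the source VM cannot break it.

```python
class BlockStore:
    """Shared block pool with reference counting."""
    def __init__(self):
        self.data = {}   # block_id -> payload
        self.refs = {}   # block_id -> reference count
        self._next = 0

    def put(self, payload):
        bid = self._next; self._next += 1
        self.data[bid], self.refs[bid] = payload, 1
        return bid

    def ref(self, bid):
        self.refs[bid] += 1

    def unref(self, bid):
        self.refs[bid] -= 1
        if self.refs[bid] == 0:          # block freed only when nobody uses it
            del self.data[bid], self.refs[bid]

class ThinDisk:
    def __init__(self, store, block_ids):
        self.store, self.block_ids = store, list(block_ids)

    def clone(self):
        # No parent/child link: the clone simply takes its own references.
        for bid in self.block_ids:
            self.store.ref(bid)
        return ThinDisk(self.store, self.block_ids)

    def delete(self):
        for bid in self.block_ids:
            self.store.unref(bid)

store = BlockStore()
golden = ThinDisk(store, [store.put(b"os"), store.put(b"apps")])
vm = golden.clone()
golden.delete()                       # deleting the source is safe
print(store.data[vm.block_ids[0]])    # b'os'
```

A linked clone, by contrast, keeps reading through its parent's image, which is exactly the dependency this model avoids.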

Ownership of the hypervisor allows us to continue developing new, more advanced virtualization features and gives us complete control over management and security of the solution. One of the ways hypervisor ownership has most benefited our HC3 customers is in our ability to build in backup and disaster recovery features.


Even more important than our storage efficiency and development ease, our ownership of the hypervisor and storage allows us to implement a variety of backup and replication capabilities to provide a comprehensive disaster recovery solution built into HC3. Efficient, snapshot-based backup and replication is native to all HC3 VMs and allows us to provide our own hosted DRaaS solution for HC3 customers without requiring any additional software.

Our snapshot-based backup/replication comes with a simple, yet very flexible, scheduling mechanism for intervals as small as every 5 minutes. This provides a very low RPO for DR. We were also able to leverage our thin cloning technology to provide quick and easy failover with an equally efficient change-only restore and failback. We are finding more and more of our customers looking to HC3 to replace their legacy third-party backup and DR solutions.
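Change-only restore and failback boil down to diffing two snapshots and sending only the blocks whose content changed. A minimal sketch, assuming snapshots are maps from block number to content hash (an illustrative model, not HC3's actual replication format):

```python
def changed_blocks(prev_snapshot, curr_snapshot):
    """Return only the blocks that differ from the previous snapshot,
    including blocks that are new in the current one."""
    return {blk: h for blk, h in curr_snapshot.items()
            if prev_snapshot.get(blk) != h}

# Hypothetical block -> hash maps for two successive snapshots.
prev = {0: "aaa", 1: "bbb", 2: "ccc"}
curr = {0: "aaa", 1: "bxb", 2: "ccc", 3: "ddd"}

delta = changed_blocks(prev, curr)
print(sorted(delta))  # [1, 3]
```

Because each replication cycle ships only this delta, a 5-minute schedule stays cheap, and failback after a disaster only needs to send the blocks that diverged while running at the DR site.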


By owning the storage, hypervisor, and backup/DR software, HC3 is able to have a single, unified, web-based management interface for the entire stack. All day-to-day management tasks can be performed from this single interface. The only other interface ever needed is a command line accessed directly on each node for initial cluster configuration during deployment.

The ownership and integration of the entire stack allows for a simple view of both physical and virtual objects within an HC3 system and at-a-glance monitoring. Real-time statistics for disk utilization, CPU utilization, RAM utilization, and IOPS allow administrators to quickly identify resource related issues as they are occurring. Setting up backups and replication and performing failover and failback is also built right into the interface.


Ownership of the entire software stack from the storage to the hypervisor to the features and management allows Scale Computing to fully focus on efficiency and ease of use. We would not be able to have the same levels of streamlined efficiency, automation, and simplicity by trying to integrate third party solutions.

The simplicity, scalability, and availability of HC3 happen because our talented development team has the freedom to reimagine how infrastructure should be done, avoiding inefficiencies found in other vendor solutions that have been dragged along from pre-virtualization technology.

Groundhog Day

Today is Groundhog Day, a holiday celebrated in the United States and Canada where the length of the remaining winter season is predicted by a rodent. According to folklore, if it is cloudy when a groundhog emerges from its burrow on this day, then the spring season will arrive early, some time before the vernal equinox; if it is sunny, the groundhog will supposedly see its shadow and retreat back into its den, and winter weather will persist for six more weeks. (Wikipedia)

Today the groundhog, Punxsutawney Phil, saw his shadow. Thanks, Phil.


Groundhog Day is also the name of a well-loved film starring Bill Murray (seen above) where his character, Phil, is trapped in some kind of temporal loop repeating the same day over and over. I won’t give the rest away for anyone who has not seen the movie, but it got me thinking. What kind of day would you rather have to live over and over as an IT professional? I’m guessing it does not include the following:

  • Manually performing firmware and software updates to your storage system, server hardware, hypervisor, HA/DR solution, or management tools.
  • Finding out one of your solution vendor updates broke a different vendor’s solution.
  • Having to deal with multiple vendor support departments to troubleshoot an issue none of them will claim responsibility for.
  • Dealing with downtime caused by a hardware failure.
  • Having to recover a server workload from tape or cloud backup.
  • Having to deal with VMware licensing renewals.
  • Thanklessly working all night to fix an issue and only receiving complaints about more downtime.

These are all days none of us want to live through even once, right? But of course, many IT professionals do find themselves reliving these days over and over again because they are still using the same old traditional IT infrastructure architecture that combines a number of different solutions into a fragile and complex mess.

At Scale Computing we are trying to break some of these old cycles with simplicity, scalability, and affordability. We believe, and our customers believe, that infrastructure should be less of a management and maintenance burden in IT. I encourage you to see for yourself how our HC3 virtualization platform has transformed IT with both video and written case studies here.

We may be in for six more weeks of winter but we don’t need to keep repeating some of the same awful days we’ve lived before as IT professionals. Happy Groundhog Day!

5 Things You Might Not Know About HCI

Hyperconverged Infrastructure (HCI) is still an emerging technology and there are a variety of approaches vendors are taking. For many IT professionals, there is still an air of mystery and misconception around HCI. Below are 5 things you might not know about the current state of HCI.

The Meaning of Hyper

The “hyper” in hyperconverged means hypervisor. The term hyperconvergence was intended to refer to solutions that included a virtualization hypervisor in addition to the combination of server and storage, often referred to as converged infrastructure. Well, since hyperconverged sounds so much cooler than converged, every converged infrastructure vendor, whether they included their own hypervisor or not, started adopting hyperconverged to refer to their solution. Many of these solutions still rely on third party hypervisors and don’t really meet the hypervisor criteria for hyperconverged infrastructure.

Is it More Efficient?

Is HCI more efficient than traditional infrastructure? It depends on the vendor. For example, some HCI vendors are still using the same inefficient virtual storage appliance (VSA) models that became popular in adapting traditional SANs to virtualization. These VSAs are notorious resource hogs often consuming large amounts of RAM and CPU that would otherwise be available for application VMs. Other vendors have brought real innovation to HCI to build new storage architecture that is designed specifically to deliver storage efficiently to the hypervisor and make the most efficient use of the hardware.

Improved Data Protection?

While there haven’t yet been any studies specifically on HCI vendor solutions for data protection, a study by EMC (view here) found that more vendors involved in data protection resulted in more data loss. Many HCI solutions include comprehensive backup, replication, and disaster recovery tools to protect data, so they are a single vendor for data protection. The HCI architecture lends itself to better data protection by virtue of converging so many solution components in one.

Is HCI Less or More Expensive than Traditional Infrastructure?

The cost of acquisition can be high with HCI, and the price varies from vendor to vendor, because a premium is added to the traditional hardware cost for the software components and the convenience of having the various pieces of the solution converged. Even if the price tag is higher than a traditional infrastructure, looking at operational expenses reveals the true savings. First, HCI saves enormously on implementation and scaling costs, since most of the traditional integration has already been done within the architecture. Then, depending on the vendor solution, management and maintenance costs are reduced to varying degrees over the lifecycle of the solution. The ROI of an HCI solution will usually be dramatically higher than that of a traditional server/SAN/hypervisor solution.

You have Flexibility to Scale Capacity

This does vary by vendor, but some vendors provide the ability to customize each appliance in an HCI cluster. With these vendors, when you add a new node, you can build it with the resources you need. The most common need is to increase cluster storage capacity, so you can bring in a new node with high storage capacity but lower RAM and CPU to fulfill that need. With vendors that support this flexibility, you can customize resource capacity each time you add a new cluster node.


HCI is a solution that should be looked at more closely for the value proposition it represents as a replacement for traditional IT infrastructure. Mystery and misconception can always be eliminated with research and dialogue with the vendors that are pushing forward HCI as a real, next-generation infrastructure.

How Important is DR Planning?

Disaster Recovery (DR) is a crucial part of IT architecture but it is often misunderstood, clumsily deployed, and then neglected. It is often unclear whether the implemented DR tools and plan will actually meet SLAs when needed. Unfortunately it often isn’t until a disaster has occurred that an organization realizes that their DR strategy has failed them. Even when organizations are able to successfully muddle through a disaster event, they often discover they never planned for failback to their primary datacenter environment.

Proper planning can ensure success and eliminate uncertainty, beginning before implementation and then enabling continued testing and validation of the DR strategy, all the way through disaster events. Planning DR involves much more than just identifying workloads to protect and defining backup schedules. A good DR strategy includes tasks such as capacity planning, identifying workload dependencies, defining workload protection methodology and prioritization, defining recovery runbooks, planning user connectivity, defining testing methodologies and testing schedules, and defining a failback plan.

At Scale Computing, we take DR seriously and build in DR capabilities such as backup, replication, failover, and failback to our HC3 hyperconverged infrastructure.  In addition to providing the tools you need in our solution, we also offer our DR Planning Service to help you be completely successful in planning, implementing, and maintaining your DR strategy.

Our DR Planning Service, performed by our expert ScaleCare support engineers, provides a complete disaster recovery run-book as an end-to-end DR plan for your business needs. Whether you have already decided to implement DR at your own DR site or to utilize our ScaleCare Remote Recovery Service in our hosted datacenter, our engineers can help you with all aspects of the DR strategy.

The service also includes the following components:

  • Setup and configuration of clusters for replication
  • Completion of Disaster Recovery Run-Book (disaster recovery plan)
  • Best-practice review
  • Failover and failback demonstration
  • Assistance in facilitating a DR test

You can view a recording of our recent webinar on DR planning here.

Please let us know how we can help you with DR planning on your HC3 system by contacting ScaleCare support at 877-SCALE-59 or support@scalecomputing.com.