All posts by David Paquette

Behind the Scenes: Architecting HC3

Like any other solution vendor, at Scale Computing we are often asked what makes our solution unique. In answer to that query, let’s talk about some of the technical foundation and internal architecture of HC3 and our approach to hyperconvergence.

The Whole Enchilada

With HC3, we own the entire software stack, which includes storage, virtualization, backup/DR, and management. Owning the stack is important because it means no dependence on other vendors' technologies stands in the way of developing the solution. This allows us to build a storage system, hypervisor, backup/DR tools, and management tools that work together in the best way possible.

Storage

At the heart of HC3 is our SCRIBE storage management system. This is a complete storage system developed and built in house specifically for use in HC3. Using a storage striping model similar to RAID 10, SCRIBE stripes storage across every disk of every node in a cluster. All storage in the cluster is always part of a single cluster-wide storage pool, requiring no manual configuration. New storage added to the cluster is automatically added to the storage pool. The only aspect of storage that the administrator manages is creation of virtual disks for VMs.
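To make the striping model concrete, here is a minimal sketch of a RAID 10-style placement policy: each logical block is mirrored across two locations on different nodes, drawn from every disk in the cluster. This is an illustration of the general technique only, not Scale Computing's actual SCRIBE implementation; the function and node names are hypothetical.

```python
# Hypothetical sketch of RAID 10-style wide striping: every logical block
# gets two mirrored (node, disk) locations on different nodes, so even a
# whole-node failure loses no data. Illustrative only, not SCRIBE itself.

def place_block(block_id, nodes):
    """Return two mirrored (node, disk) locations for a logical block.

    `nodes` maps node name -> number of disks in that node.
    """
    locations = [(node, disk) for node, disks in sorted(nodes.items())
                 for disk in range(disks)]
    primary = locations[block_id % len(locations)]
    # The mirror must live on a *different* node than the primary.
    mirrors = [loc for loc in locations if loc[0] != primary[0]]
    mirror = mirrors[block_id % len(mirrors)]
    return primary, mirror

cluster = {"node1": 4, "node2": 4, "node3": 4}  # 3 nodes, 4 disks each
primary, mirror = place_block(42, cluster)
assert primary[0] != mirror[0]  # mirrors never share a node
```

Because every disk in the pool participates in the stripe, adding a node simply extends the list of candidate locations, which mirrors how new storage joins the cluster-wide pool automatically.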

The ease of use of HC3 storage is not even the best part. What is really worth talking about is how virtual disks on HC3 access storage blocks from SCRIBE as if they were direct-attached storage on a physical server, with no layered storage protocols. There is no iSCSI, no NFS, no SMB/CIFS, no VMFS, nor any other protocol or file system. There is also no need in SCRIBE for the virtual storage appliance (VSA) VMs that are notorious resource hogs. The file system laid down by the guest OS in the VM is the only file system in the stack, because SCRIBE is not a file system; SCRIBE is a block engine. The absence of the storage protocols that would sit between VMs and virtual disks in other virtualization systems means the I/O paths in HC3 are greatly simplified and thus more efficient.

Had we not owned both the storage and the hypervisor by creating our own SCRIBE storage management system, no existing storage layer would have allowed us to achieve this level of efficient integration with the hypervisor.

Hypervisor

Luckily we did not need to completely reinvent virtualization, but were able to base our own HyperCore hypervisor on industry-trusted, open-source KVM. Having complete control over our KVM-based hypervisor not only allowed us to tightly embed the storage with the hypervisor, but also allowed us to implement our own set of hypervisor features to complete the solution.

One of the ways we were able to improve upon existing standard virtualization features was through our thin cloning capability. We took the advantages of linked cloning, a common feature in other hypervisors, and eliminated the disadvantage of the parent/child dependency. Our thin clones are just as efficient as linked clones but are not vulnerable to dependency issues with parent VMs.
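The distinction can be sketched in a few lines of code. In this hypothetical illustration (the class names and structure are mine, not Scale Computing's actual implementation), a linked clone keeps a live reference to its parent and fails if the parent disappears, while a thin clone copies the snapshot's block references at clone time, so it shares data cheaply but has no runtime dependency:

```python
# Hypothetical contrast between linked clones and thin clones.
# Illustrative only; not Scale Computing's actual code.

class LinkedClone:
    """Reads fall through to the parent image: the parent must stay alive."""
    def __init__(self, parent):
        self.parent = parent          # hard runtime dependency
        self.delta = {}               # blocks written after cloning

    def read(self, block):
        return self.delta.get(block, self.parent.read(block))

    def write(self, block, data):
        self.delta[block] = data

class ThinClone:
    """Copies the snapshot's block *references* at clone time; no parent link."""
    def __init__(self, snapshot_blocks):
        self.blocks = dict(snapshot_blocks)   # cheap: references, not data

    def read(self, block):
        return self.blocks[block]

    def write(self, block, data):
        self.blocks[block] = data     # copy-on-write: only this clone changes

snapshot = {0: b"boot", 1: b"data"}
clone_a = ThinClone(snapshot)
clone_b = ThinClone(snapshot)
clone_a.write(1, b"changed")
assert clone_b.read(1) == b"data"     # clones stay independent
```

The key point is that deleting the source VM invalidates a linked clone's parent reference but leaves a thin clone untouched, which is the dependency problem described above.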

Ownership of the hypervisor allows us to continue to develop new, more advanced virtualization features, and gives us complete control over management and security of the solution. One of the ways hypervisor ownership has most benefited our HC3 customers is in our ability to build in backup and disaster recovery features.

Backup/DR

Even more important than our storage efficiency and development ease, our ownership of the hypervisor and storage allows us to implement a variety of backup and replication capabilities to provide a comprehensive disaster recovery solution built into HC3. Efficient, snapshot-based backup and replication is native to all HC3 VMs and allows us to provide our own hosted DRaaS solution for HC3 customers without requiring any additional software.

Our snapshot-based backup/replication comes with a simple, yet very flexible, scheduling mechanism for intervals as small as every 5 minutes. This provides a very low RPO for DR. We were also able to leverage our thin cloning technology to provide quick and easy failover with an equally efficient change-only restore and failback. We are finding more and more of our customers looking to HC3 to replace their legacy third-party backup and DR solutions.
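As a back-of-envelope illustration (my own sketch, not Scale Computing's sizing tool), the worst-case RPO of interval-based snapshot replication is roughly one full interval plus any replication lag, since a disaster can strike just before the next snapshot ships:

```python
# Rough worst-case RPO estimate for interval-based snapshot replication.
# Illustrative only; real RPO also depends on change rate and link speed.

def worst_case_rpo(interval_minutes, replication_lag_minutes=0):
    """Maximum data-loss window in minutes: one full snapshot interval
    plus the time needed to replicate the snapshot offsite."""
    return interval_minutes + replication_lag_minutes

assert worst_case_rpo(5) == 5     # 5-minute schedule, instant replication
assert worst_case_rpo(5, 2) == 7  # plus 2 minutes to ship the snapshot
```

A 5-minute schedule therefore bounds data loss at only a few minutes of changes, which is what makes it a very low RPO for DR purposes.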

Management

By owning the storage, hypervisor, and backup/DR software, HC3 is able to have a single, unified, web-based management interface for the entire stack. All day-to-day management tasks can be performed from this single interface. The only other interface ever needed is a command line accessed directly on each node for initial cluster configuration during deployment.

The ownership and integration of the entire stack allows for a simple view of both physical and virtual objects within an HC3 system and at-a-glance monitoring. Real-time statistics for disk utilization, CPU utilization, RAM utilization, and IOPS allow administrators to quickly identify resource-related issues as they occur. Setting up backups and replication and performing failover and failback is also built right into the interface.

Summary

Ownership of the entire software stack from the storage to the hypervisor to the features and management allows Scale Computing to fully focus on efficiency and ease of use. We would not be able to have the same levels of streamlined efficiency, automation, and simplicity by trying to integrate third party solutions.

The simplicity, scalability, and availability of HC3 happen because our talented development team has the freedom to reimagine how infrastructure should be done, avoiding inefficiencies found in other vendor solutions that have been dragged along from pre-virtualization technology.

Groundhog Day

Today is Groundhog Day, a holiday celebrated in the United States and Canada where the length of the remaining winter season is predicted by a rodent. According to folklore, if it is cloudy when a groundhog emerges from its burrow on this day, then the spring season will arrive early, some time before the vernal equinox; if it is sunny, the groundhog will supposedly see its shadow and retreat back into its den, and winter weather will persist for six more weeks. (Wikipedia)

Today the groundhog, Punxsutawney Phil, saw his shadow. Thanks, Phil.


Groundhog Day is also the name of a well-loved film starring Bill Murray (seen above) where his character, Phil, is trapped in some kind of temporal loop repeating the same day over and over. I won’t give the rest away for anyone who has not seen the movie, but it got me thinking. What kind of day would you rather have to live over and over as an IT professional? I’m guessing it does not include the following:

  • Manually performing firmware and software updates to your storage system, server hardware, hypervisor, HA/DR solution, or management tools.
  • Finding out one of your solution vendor updates broke a different vendor’s solution.
  • Having to deal with multiple vendor support departments to troubleshoot an issue none of them will claim responsibility for.
  • Dealing with downtime caused by a hardware failure.
  • Having to recover a server workload from tape or cloud backup.
  • Having to deal with VMware licensing renewals.
  • Thanklessly working all night to fix an issue and only receiving complaints about more downtime.

These are all days none of us want to live through even once, right? But of course, many IT professionals do find themselves reliving these days over and over again because they are still using the same old traditional IT infrastructure architecture that combines a number of different solutions into a fragile and complex mess.

At Scale Computing we are trying to break some of these old cycles with simplicity, scalability, and affordability. We believe, and our customers believe, that infrastructure should be less of a management and maintenance burden in IT. I encourage you to see for yourself how our HC3 virtualization platform has transformed IT with both video and written case studies here.

We may be in for six more weeks of winter but we don’t need to keep repeating some of the same awful days we’ve lived before as IT professionals. Happy Groundhog Day!

5 Things You Might Not Know About HCI

Hyperconverged Infrastructure (HCI) is still an emerging technology and there are a variety of approaches vendors are taking. For many IT professionals, there is still an air of mystery and misconception around HCI. Below are 5 things you might not know about the current state of HCI.

The Meaning of Hyper

The “hyper” in hyperconverged means hypervisor. The term hyperconvergence was intended to refer to solutions that included a virtualization hypervisor in addition to the combination of server and storage, often referred to as converged infrastructure. Well, since hyperconverged sounds so much cooler than converged, every converged infrastructure vendor, whether they included their own hypervisor or not, started adopting hyperconverged to refer to their solution. Many of these solutions still rely on third party hypervisors and don’t really meet the hypervisor criteria for hyperconverged infrastructure.

Is it More Efficient?

Is HCI more efficient than traditional infrastructure? It depends on the vendor. For example, some HCI vendors are still using the same inefficient virtual storage appliance (VSA) models that became popular in adapting traditional SANs to virtualization. These VSAs are notorious resource hogs often consuming large amounts of RAM and CPU that would otherwise be available for application VMs. Other vendors have brought real innovation to HCI to build new storage architecture that is designed specifically to deliver storage efficiently to the hypervisor and make the most efficient use of the hardware.

Improved Data Protection?

While there haven’t yet been any studies specifically on HCI vendor solutions for data protection, a study by EMC (view here) found that involving more vendors in data protection resulted in more data loss. Many HCI solutions include comprehensive backup, replication, and disaster recovery tools to protect data, making them a single vendor for data protection. The HCI architecture lends itself to better data protection by virtue of converging so many solution components into one.

Is HCI Less or More Expensive than Traditional Infrastructure?

The cost of acquisition can be high with HCI, and the price varies from vendor to vendor, because a premium is added to the traditional hardware cost for the software components and the convenience of converging the various solutions. Even if the price tag is higher than a traditional infrastructure, looking at operational expenses reveals the true savings. First, HCI saves enormously on implementation and scaling costs, since most of the traditional integration has already been done within the architecture. Then, depending on the vendor solution, management and maintenance costs are reduced to varying degrees over the lifecycle of the solution. The ROI of an HCI solution will usually be dramatically higher than that of a traditional server/SAN/hypervisor solution.

You have Flexibility to Scale Capacity

This does vary by vendor, but some vendors provide the ability to customize each appliance in an HCI cluster. With these vendors, when you add a new node, you can build it for the resources you need. The most common need is increased cluster storage capacity, so you can bring in a new node with high storage capacity but lower RAM and CPU to fulfill that need. With vendors that support this flexibility, you can customize resource capacity each time you add a new cluster node.

Summary

HCI is a solution that should be looked at more closely for the value proposition it represents as a replacement for traditional IT infrastructure. Mystery and misconception can always be eliminated with research and dialogue with the vendors that are pushing forward HCI as a real, next-generation infrastructure.

How Important is DR Planning?

Disaster Recovery (DR) is a crucial part of IT architecture but it is often misunderstood, clumsily deployed, and then neglected. It is often unclear whether the implemented DR tools and plan will actually meet SLAs when needed. Unfortunately it often isn’t until a disaster has occurred that an organization realizes that their DR strategy has failed them. Even when organizations are able to successfully muddle through a disaster event, they often discover they never planned for failback to their primary datacenter environment.

Proper planning can ensure success and eliminate uncertainty, beginning before implementation and continuing through testing and validation of the DR strategy, all the way through disaster events. Planning DR involves much more than just identifying workloads to protect and defining backup schedules. A good DR strategy includes tasks such as capacity planning, identifying workload dependencies, defining workload protection methodology and prioritization, defining recovery runbooks, planning user connectivity, defining testing methodologies and testing schedules, and defining a failback plan.

At Scale Computing, we take DR seriously and build in DR capabilities such as backup, replication, failover, and failback to our HC3 hyperconverged infrastructure.  In addition to providing the tools you need in our solution, we also offer our DR Planning Service to help you be completely successful in planning, implementing, and maintaining your DR strategy.

Our DR Planning Service, performed by our expert ScaleCare support engineers, provides a complete disaster recovery run-book as an end-to-end DR plan for your business needs. Whether you have already decided to implement DR at your own DR site or to utilize our ScaleCare Remote Recovery Service in our hosted datacenter, our engineers can help you with all aspects of the DR strategy.

The service also includes the following components:

  • Setup and configuration of clusters for replication
  • Completion of Disaster Recovery Run-Book (disaster recovery plan)
  • Best-practice review
  • Failover and failback demonstration
  • Assistance in facilitating a DR test

You can view a recording of our recent webinar on DR planning here.

Please let us know how we can help you with DR planning on your HC3 system by contacting ScaleCare support at 877-SCALE-59 or support@scalecomputing.com.

HC3 VM File Level Recovery with Video

Many of you have asked us recently about individual file recovery with HC3, and we’ve put together some great resources on how it works. File recovery is often referred to as operational recovery rather than disaster recovery, because the loss of a single file is not necessarily a disaster. Either way, it is an important part of IT operations and a function we are happy to highlight with HC3.

First off, we have a great video demo by our Pontiff of Product Management, Craig Theriac.

Additionally, we have a comprehensive guide for performing file level recovery on HC3 from our expert ScaleCare support team. This document, titled “Windows Recovery ISO”, explains every detail of the process from beginning to end. To summarize briefly, the process involves using a recovery ISO to recover files from a VM clone taken from a known good snapshot. As you can see in the video above, the process can be done very quickly, in just a matter of minutes.

(Click here for full document.)

Full disclosure: We know you’d prefer to have a more integrated process that is built into HC3, and we will certainly be working to improve this functionality with that in mind. Still, I think our team has done a great job providing these new resources and I think you’ll find them very helpful in using HC3 to its fullest capacity. Happy Scaling!

New! – Premium Installation Service

2017 is here. We want to help you start your new year and your new HC3 system with our new ScaleCare Premium Installation service. You’ve probably already heard about how easy HC3 is to install and manage, and you might be asking why you would even need this service. The truth is that you want your install to go seamlessly and to have full working knowledge of your HC3 system right out of the gate, and that is what this service is all about.

First, this premium installation service assists you with every aspect of installation starting with planning, prerequisites, virtual and physical networking configuration, and priority scheduling. You get help even before you unbox your HC3 system to prepare for a worry-free install. The priority scheduling helps you plan your install around your own schedule, which we know can be both busy and complex.

Secondly, ScaleCare Premium Installation includes remote installation with a ScaleCare Technical Support Engineer. This remote install includes a UI overview and setup assistance and, if applicable, a walkthrough of the HC3 Move software for migrating any physical or virtual server workloads to HC3. Remote installation means a ScaleCare engineer is with you every step of the way as you install and configure your HC3 system.

Finally, ScaleCare Premium Installation includes deep-dive training on everything HC3 with a dedicated ScaleCare Technical Support Engineer. This training, which normally takes around 4 hours to complete, will make you an HC3 expert on everything from virtualization, networking, and backup/DR to our patented SCRIBE storage system. You’ll basically have a PhD in HC3 by the time you are done with the install.

Here is the list of everything included:

  • Requirements and Planning Pre-Installation Call
  • Virtual and Physical Networking Planning and Deployment Assistance
  • Priority Scheduling for Installations
  • Remote Installation with a ScaleCare Technical Support Engineer
  • UI Overview and Setup Assistance
  • Walkthrough of HC3 Move software for migrations to HC3 of a Windows physical or virtual server
  • Training with a dedicated ScaleCare Technical Support Engineer
    • HC3 and Scribe Overview
    • HC3 Configuration Deep Dive
    • Virtualization Best Practices
    • Networking Best Practices
    • Backup / DR Best Practices

Yes, it is still just as easy to use and simple to deploy as ever, but giving yourself a head start in mastering this technology seems like a no-brainer. To find out more about how to get ScaleCare Premium Installation added to your HC3 order, contact your Scale Computing representative. We look forward to providing you with this service!

Scale Computing – A Year in Review 2016

It’s that time of the year again. December is winding to a close and the new year is almost here. Let me first say that we here at Scale Computing hope 2016 was a positive year for you and we want to wish you a wonderful 2017. Now, though, I’d like to reflect back on 2016 and why it has been such an outstanding year for Scale Computing.

“And the award goes to…”

Scale Computing was recognized a number of times this year for technology innovation and great products and solutions, particularly in the midmarket. We won awards at both the Midsize Enterprise Summit and the Midmarket CIO Forum, including Best in Show and Best Midmarket Strategy. Most recently, Scale Computing was honored with an Editor’s Choice Award by Virtualization Review as one of the most-liked products of the year. You can read more about our many awards in 2016 in this press release.

Scenes from the 2016 Midsize Enterprise Summit

 

News Flash!

2016 was the year Scale Computing finally introduced flash storage into our hyperconverged appliances. Flash storage has been around for a while now, but the big news was in how we integrated it into the virtualization infrastructure. We didn’t use any clunky VSA models with resource-hogging virtual appliances. We didn’t implement it as a cache to make up for inefficient storage architecture. We implemented flash storage as a full storage tier embedded directly into the hypervisor. We eliminated all the unnecessary storage protocols that slow down other flash implementations. In short, we did it the right way. Oh, and we delivered it with our own intelligent automated tiering engine called HEAT. You can read more about it here in case you missed it.

Newer, Stronger, Faster

When we introduced the new flash storage in the HC3, we introduced three new HC3 appliance models, the HC1150, HC2150, and HC4150, significantly increasing speed and capacity in the HC3. We also introduced the new HC1100 appliance to replace the older HC1000 model, resulting in a resource capacity increase of nearly double over the HC1000. Finally, we recently announced the preview of our new HC1150D, which doubles the compute of the HC1150 and introduces a higher capacity with support for 8TB drives. We know your resource and capacity needs grow over time, and we’ll keep improving the HC3 to stay ahead of the game. Look for more exciting announcements along these lines in 2017.

Going Solo

In 2016, hyperconvergence with Scale Computing HC3 was opened up to all sorts of new possibilities including the new Single Node Appliance Configuration. Where before you needed at least three nodes in an HC3 cluster, now you can go with the SNAC-size HC3. (Yes, I am in marketing and I make up this corny stuff). The Single Node allows extremely cost effective configurations that include distributed enterprise (small remote/branch offices), backup/disaster recovery, or just the small “s” businesses in the SMB. Read more about the possibilities here.

 

Cloud-based DR? Check

2016 was also the year Scale Computing rolled out a cloud-based disaster recovery as a service (DRaaS) offering called ScaleCare Remote Recovery Service. This is an exclusive DR solution for HC3 customers who want to protect their HC3 workloads in a secure, hosted facility for backup and disaster recovery. With monthly billing, this service is perfect for organizations that can’t or don’t want to host DR in their own facilities and who value added services like assisted recovery and DR testing. Read more about this DRaaS solution here.

Better Together

2016 has been an amazing year for technology partnerships at Scale Computing. You may have seen some of the various announcements we’ve made over the past year. These include Workspot, with whom we’ve partnered for an amazingly simple VDI solution; Information Builders, with whom we partnered for a business intelligence and analytics appliance; Brocade, whose Strategic Collaboration Program we recently joined to expand the reach of hyperconvergence and HC3; and more. We even achieved Citrix Ready certification this year. Keep an eye out for more announcements to come as we identify more great solutions to offer you.

The Doctor is In

It wouldn’t be much of a new year celebration without a little tooting of my own horn, so I thought I’d mention that 2016 was the year I personally joined Scale Computing, along with many other new faces. Scale Computing has been growing this year. I haven’t properly introduced myself in a blog yet, so here goes. My name is David Paquette, Product Marketing Manager at Scale Computing, and they call me Doctor P around here (or Dr. P for short). It has been a fantastic year for me, having joined such a great organization in Scale Computing, and I am looking forward to an amazing 2017. Keep checking our blog for my latest posts.

Just me, Dr. P

 

Sincerely, from all of us at Scale Computing, thank you so much for all of the support over the past year. We look forward to another big year of announcements and releases to come. Of course, these were just some of the 2016 highlights, so feel free to look back through the various blog posts and press releases for all of the 2016 news.

Happy New Year!

Scale with Increased Capacity

2016 has been a remarkable year for Scale Computing and one of our biggest achievements was the release of the HC1150 appliance. The HC1150 significantly boosted the power and capacity of our HC1000 series and featured hybrid flash storage at a very affordable price. As a result, the HC1150 is our most popular HC3 model but, of course, we couldn’t stop there.

First, we have begun offering 8TB drives in our HC1000 series appliances, nearly doubling maximum storage capacity (and actually doubling it on the HC1100). Data sets are ever increasing in size, and this increase in storage capacity means you can grow capacity even faster and more affordably, one node at a time. The unique ability of HC3 to mix and match nodes of varying capacity (and across hardware generations!) means your storage can grow as needed each time you expand your cluster.

Secondly, we have introduced a new HC1150D appliance for pre-sale which doubles the CPU capacity with a second physical processor. CPU can often be a performance bottleneck in scaling out the number of VMs supported. With this increase in CPU capacity, the HC1150D scales out an HC3 cluster to support more compute power across a greater number of VMs. The HC1150D also doubles available RAM configuration up to 512GB per appliance.

Below is a preview of the new configuration ranges and starting pricing for the HC1000 series, including the HC1150D.


Scale Computing is committed to giving our customers the best virtualization infrastructure on the market and we will keep integrating greater capacity and computing power into our HC3 appliances. Our focus of simplicity, scalability, and availability will continue to drive our innovation to make IT infrastructure more affordable for you. Look for more announcements to come.


3-Node Minimum? Not So Fast

For a long time, when you purchased HC3, you were told there was a 3-node minimum. Three nodes is the minimum required to create a resilient, highly available cluster. The HC3 architecture, based on this 3-node cluster design, prevents data loss even in the event of a whole-node failure. Despite these compelling reasons to require 3 nodes, Scale Computing last week announced a new single node appliance configuration. Why now?

Recent product updates have enhanced the replication and disaster recovery capabilities of HC3 to make a single node appliance a compelling solution in several scenarios. One such scenario is the distributed enterprise. Organizations with multiple remote or branch offices may not have the infrastructure requirements to warrant a 3 node cluster. Instead, they can benefit from a single node appliance as a right-sized solution for the infrastructure.


In a remote or branch office, a single node can run a number of workloads and easily be managed remotely from a central office. In spite of the lack of clustered, local high availability, single nodes can easily be replicated for DR back to an HC3 cluster at the central office, giving them a high level of protection. Deploying single nodes in this way offers an infrastructure solution for distributed enterprise that is both simple and affordable.

Another compelling scenario where the single node makes perfect sense is as a DR target for an HC3 cluster. Built-in replication can be configured quickly and without extra software to a single HC3 node located locally or remotely. While you will likely want the local high availability and data protection a 3-node cluster provides for primary production, a single node may suffice for a DR strategy where you only need to fail over your most critical VMs to continue operations temporarily. This use of a single node appliance is both cost effective and provides a high level of protection for your business.


Finally, although a single node has no clustered high availability, for very small environments the single node appliance can be deployed with a second appliance as a DR target to provide a level of data protection and availability acceptable to many small businesses. The ease of deployment, ease of management, and DR capabilities of a full-blown HC3 cluster are the same reasons to love the single node appliance for HC3.

Find out more about the single node appliance configuration (or as I like to call it, the SNAC-size HC3) in our press release and solution brief.


What Do DDoS Attacks Mean for Cloud Users?

Last Friday, a DDoS attack disrupted major parts of the internet in both North America and Europe. The attack seems largely to have targeted DNS provider Dyn, disrupting access to major services such as Level 3, Zendesk, Okta, GitHub, PayPal, and more, according to sources like Gizmodo. This kind of botnet-driven DDoS attack is a harbinger of future attacks that can be carried out across an increasingly connected world of poorly secured Internet of Things (IoT) devices.


This disruption highlights a particular vulnerability for businesses that have chosen to rely on cloud-based services like IaaS, SaaS, or PaaS. The ability to connect to these services is critical to business operations; even if the service is running, if users cannot connect, it is downtime. What is particularly scary about these attacks, especially for small and midmarket organizations, is that they become victims of circumstance from attacks directed at larger targets.

As the IoT becomes more of a reality, with more and more devices of questionable security joining the internet, the potential for these attacks and their severity can only increase. I recently wrote about how to compare cloud computing and on-prem hyperconverged infrastructure (HCI) solutions, and one of the decision points was reliance on the internet. It is not only a matter of ensuring a stable internet provider, but also the stability of the internet in general, given the possibility of attacks targeting any number of services.

Organizations running services on-prem were not affected by this attack because it did not affect any internal network environments. Choosing to run infrastructure and services internally definitely mitigates the risk of outage from external forces like collateral damage from attacks on service providers. Many organizations that choose cloud services do so for simplicity and convenience because traditional IT infrastructure, even with virtualization, is complex and can be difficult to implement, particularly for small and midsize organizations. It has only been recently that hyperconverged infrastructure has made on-prem infrastructure as simple to use as the cloud.

The future is still uncertain as to how organizations will ultimately balance their IT infrastructure between on-prem and cloud in what is loosely called hybrid cloud; likely it will continue to evolve with emerging technology. At the moment, however, organizations can choose easy-to-use hyperconverged infrastructure for increased security and stability, or go with cloud providers for completely hands-off management and third-party reliance.

As I mentioned in my cloud vs. HCI article, there are valid reasons to go with either, and the solution will likely be a combination of the two. Organizations should be aware that on-prem IT infrastructure no longer needs to be a complicated mess of server vendors, storage vendors, hypervisor vendors, and DR solution vendors. Hyperconverged infrastructure is a viable option for organizations of any size to keep services on-prem, stable, and secure against collateral DDoS damage.

