
HC3 VM File Level Recovery with Video

Many of you have asked us recently about individual file recovery with HC3, and we’ve put together some great resources on how it works. File recovery is an important part of IT operations; it is often referred to as operational recovery rather than disaster recovery, because the loss of a single file is not necessarily a disaster. Either way, it is an important function we are able to highlight with HC3.

First off, we have a great video demo by our Pontiff of Product Management, Craig Theriac.

Additionally, we have a comprehensive guide for performing file level recovery on HC3 from our expert ScaleCare support team. This document, titled “Windows Recovery ISO”, explains every detail of the process from beginning to end. To summarize briefly, the process involves using a recovery ISO to recover files from a VM clone taken from a known good snapshot. As you can see in the video above, the process can be done very quickly, in just a matter of minutes.

(Click here for full document.)
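For readers who like to see the moving parts laid out, here is a rough outline of that workflow in code form. It is purely illustrative: the helper functions below are hypothetical placeholders for steps you would actually perform in the HC3 web UI (and that the guide walks through in detail), not calls to a real HC3 API.

```python
# Illustrative outline only. These helpers are hypothetical stand-ins for actions
# performed in the HC3 web UI; they are not a real HC3 API.

def clone_vm_from_snapshot(vm: str, snapshot: str) -> str:
    print(f"Cloning {vm} from known good snapshot '{snapshot}'...")
    return f"{vm}-restore-clone"

def attach_recovery_iso(clone: str, iso: str) -> None:
    print(f"Attaching {iso} to {clone} and booting from it...")

def copy_out(clone: str, path: str, destination: str) -> None:
    print(f"Copying {path} from {clone} to {destination}...")

def delete_clone(clone: str) -> None:
    print(f"Deleting temporary clone {clone}...")

def recover_files(vm: str, snapshot: str, files: list, destination: str) -> None:
    clone = clone_vm_from_snapshot(vm, snapshot)        # 1. clone from a known good snapshot
    attach_recovery_iso(clone, "windows-recovery.iso")  # 2. boot the clone from the recovery ISO
    for path in files:                                  # 3. copy the needed files out
        copy_out(clone, path, destination)
    delete_clone(clone)                                 # 4. clean up; the production VM is never touched

# Example: pull one lost spreadsheet back from last night's snapshot.
recover_files("file-server-01", "nightly-snapshot",
              [r"C:\Shares\Finance\budget.xlsx"], r"\\nas\restores")
```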

Full disclosure: We know you’d prefer to have a more integrated process that is built into HC3, and we will certainly be working to improve this functionality with that in mind. Still, I think our team has done a great job providing these new resources and I think you’ll find them very helpful in using HC3 to its fullest capacity. Happy Scaling!

New! – Premium Installation Service

2017 is here. We want to help you start your new year and your new HC3 system with our new ScaleCare Premium Installation service. You’ve probably already heard about how easy HC3 is to install and manage, and you might be asking why you would even need this service. The truth is that you want your install to go seamlessly and to have full working knowledge of your HC3 system right out of the gate, and that is what this service is all about.

First, this premium installation service assists you with every aspect of installation starting with planning, prerequisites, virtual and physical networking configuration, and priority scheduling. You get help even before you unbox your HC3 system to prepare for a worry-free install. The priority scheduling helps you plan your install around your own schedule, which we know can be both busy and complex.

Secondly, ScaleCare Premium Installation includes remote installation with a ScaleCare Technical Support Engineer. This remote install includes a UI overview, setup assistance, and, if applicable, a walkthrough of the HC3 Move software for migrating any physical or virtual server workloads to HC3. Remote installation means a ScaleCare engineer is with you every step of the way as you install and configure your HC3 system.

Finally, ScaleCare Premium Installation includes deep-dive training on everything HC3 with a dedicated ScaleCare Technical Support Engineer. This training, which normally takes around 4 hours to complete, will make you an HC3 expert on everything from virtualization, networking, and backup/DR to our patented SCRIBE storage system. You’ll basically have a PhD in HC3 by the time the install is done.

Here is the list of everything included:

  • Requirements and Planning Pre-Installation Call
  • Virtual and Physical Networking Planning and Deployment Assistance
  • Priority Scheduling for Installations
  • Remote Installation with a ScaleCare Technical Support Engineer
  • UI Overview and Setup Assistance
  • Walkthrough of HC3 Move software for migrating a Windows physical or virtual server to HC3
  • Training with a dedicated ScaleCare Technical Support Engineer
    • HC3 and Scribe Overview
    • HC3 Configuration Deep Dive
    • Virtualization Best Practices
    • Networking Best Practices
    • Backup / DR Best Practices

Yes, it is still just as easy to use and simple to deploy as ever, but giving yourself a head start in mastering this technology seems like a no-brainer. To find out more about how to get ScaleCare Premium Installation added to your HC3 order, contact your Scale Computing representative. We look forward to providing you with this service!

The Origin of Modern Hyperconvergence

Several years ago (in the waning days of the last decade and early days of this one), we here at Scale decided to revolutionize how datacenters for the SMB and Mid Market should function. In the spirit of “perfection is not attained when there is nothing left to add, but rather when there is nothing left to remove”, we set out to take a clean sheet of paper approach to how highly available virtualization SHOULD work. We started by asking a simple question – If you were to design, from the ground up, a virtual infrastructure, would it look even remotely like the servers plus switches plus SAN plus hypervisor plus management beast known as the inverted pyramid of doom? The answer, of course, was no, it would not. In that legacy approach, each piece exists as an answer/band-aid/patch to the problems inherent in the previous iteration of virtualization, resulting in a Rube-Goldbergian machine of cost and complexity that took inefficiency to an entirely new level.

There had to be a better way. What if we were to eliminate the SAN entirely, but maintain the flexibility it provided in the first place (enabling high availability)? What if we were to eliminate the management servers entirely by making the servers (or nodes) talk directly to each other? What if we were to base the entire concept around a self-aware, self-healing, self-load-balancing cluster of commodity x64 server nodes? What if we were to take the resource and efficiency gains made in this approach and put them directly into running workloads instead of overhead, thereby significantly improving density while dramatically lowering cost? We sharpened our pencils and got to work. The end result was our HC3 platform.

Now, at this same time, a few other companies were working on things that were superficially similar, but designed to tackle an entirely different problem. These other companies set out to be a “better delivery mechanism for VMware in the large enterprise environment”. They did this by taking the SAN component of the legacy solution and virtualizing an instance of SAN (storage protocols, CPU and RAM resource consumption and all) as a virtual machine running on each and every server in their environment. The name used for this across the industry was “Server SAN”.

Server SAN, while an improvement in some ways over the legacy approach to virtualization, was hardly what we here at Scale had created. What we had done was eliminate all of those pieces of overhead. We had actually converged the entire environment by collapsing those old legacy stacks (not virtualizing them and replicating them over and over). Server SAN just didn’t describe what we had done. In an effort to create a proper name for what we had built, we took some of our early HC3 clusters to Arun Taneja and the Taneja Group back in 2011 and walked them through our technology. After many hours in that meeting with their team and ours, the old networking term “hyperconverged” was resurrected specifically to describe Scale’s HC3 platform: the actual convergence of all of the stacks (storage, compute, virtualization, orchestration, self-healing, management, et al.) and the elimination of everything that didn’t need to be there in the legacy approach to virtualization, rather than the semi-converged approach that the Server SAN vendors had taken.

Like everything else in this business, the term caught fire, and its actual meaning became obscured as a multiplicity of other vendors co-opted it and stretched it to fit their products. I am fairly sure I saw a “hyperconverged” coffee maker the other week. But now you know where the term actually came from and what it really means, from the people who coined its modern use in the first place.

Scale Computing – A Year in Review 2016

It’s that time of the year again. December is winding to a close and the new year is almost here. Let me first say that we here at Scale Computing hope 2016 was a positive year for you and we want to wish you a wonderful 2017. Now, though, I’d like to reflect back on 2016 and why it has been such an outstanding year for Scale Computing.

“And the award goes to…”

Scale Computing was recognized a number of times this year for technology innovation and great products and solutions, particularly in the midmarket. We won awards at both the Midsize Enterprise Summit and the Midmarket CIO Forum, including Best in Show and Best Midmarket Strategy. Most recently, Scale Computing was honored with an Editor’s Choice Award by Virtualization Review as one of the most-liked products of the year. You can read more about our many awards in 2016 in this press release.

Scenes from the 2016 Midsize Enterprise Summit

 

News Flash!

2016 was the year Scale Computing finally introduced flash storage into our hyperconverged appliances. Flash storage has been around for a while now, but the big news was in how we integrated it into the virtualization infrastructure. We didn’t use any clunky VSA models with resource-hogging virtual appliances. We didn’t implement it as a cache to make up for inefficient storage architecture. We implemented flash storage as a full storage tier embedded directly into the hypervisor. We eliminated all the unnecessary storage protocols that slow down other flash implementations. In short, we did it the right way. Oh, and we delivered it with our own intelligent automated tiering engine called HEAT. You can read more about it here in case you missed it.

Newer, Stronger, Faster

When we introduced the new flash storage in the HC3, we introduced three new HC3 appliance models, the HC1150, HC2150, and HC4150, significantly increasing speed and capacity in the HC3. We also introduced the new HC1100 appliance to replace the older HC1000 model, nearly doubling resource capacity over the HC1000. Finally, we recently announced the preview of our new HC1150D, which doubles the compute of the HC1150 and introduces higher capacity with support for 8TB drives. We know your resource and capacity needs grow over time, and we’ll keep improving the HC3 to stay ahead of the game. Look for more exciting announcements along these lines in 2017.

Going Solo

In 2016, hyperconvergence with Scale Computing HC3 was opened up to all sorts of new possibilities including the new Single Node Appliance Configuration. Where before you needed at least three nodes in an HC3 cluster, now you can go with the SNAC-size HC3. (Yes, I am in marketing and I make up this corny stuff). The Single Node allows extremely cost effective configurations that include distributed enterprise (small remote/branch offices), backup/disaster recovery, or just the small “s” businesses in the SMB. Read more about the possibilities here.

 

Cloud-based DR? Check

2016 was also the year Scale Computing rolled out a cloud-based disaster recovery as a service (DRaaS) offering called ScaleCare Remote Recovery Service. This is an exclusive DR solution for HC3 customers that want to protect their HC3 workloads in a secure, hosted facility for backup and disaster recovery. With monthly billing, this service is perfect for organizations that can’t or don’t want to host DR in their own facilities and that value added services like assisted recovery and DR testing. Read more about this DRaaS solution here.

Better Together

2016 has been an amazing year for technology partnerships at Scale Computing. You may have seen some of the various announcements we’ve made over the past year. These include Workspot, with whom we’ve partnered for an amazingly simple VDI solution; Information Builders, with whom we partnered for a business intelligence and analytics appliance; Brocade, whose Strategic Collaboration Program we recently joined to expand the reach of hyperconvergence and HC3; and more. We even achieved Citrix Ready certification this year. Keep an eye out for more announcements to come as we identify more great solutions to offer you.

The Doctor is In

It wouldn’t be much of a new year celebration without a little tooting of my own horn, so I thought I’d mention that 2016 was the year I personally joined Scale Computing, along with many other new faces. Scale Computing has been growing this year. I haven’t properly introduced myself in a blog yet, so here goes. My name is David Paquette, Product Marketing Manager at Scale Computing, and they call me Doctor P around here (or Dr. P for short). It has been a fantastic year for me, joining such a great organization, and I am looking forward to an amazing 2017. Keep checking our blog for my latest posts.

Just me, Dr. P

 

Sincerely, from all of us at Scale Computing, thank you so much for all of the support over the past year. We look forward to another big year of announcements and releases to come. Of course, these were just some of the 2016 highlights, so feel free to look back through the various blog posts and press releases for all of the 2016 news.

Happy New Year!

Scale with Increased Capacity

2016 has been a remarkable year for Scale Computing and one of our biggest achievements was the release of the HC1150 appliance. The HC1150 significantly boosted the power and capacity of our HC1000 series and featured hybrid flash storage at a very affordable price. As a result, the HC1150 is our most popular HC3 model but, of course, we couldn’t stop there.

First, we have begun offering 8TB drives in our HC1000 series appliances, nearly doubling maximum storage capacity (and actually doubling it on the HC1100). Data sets are ever increasing in size, and this added capacity means you can grow even faster and more affordably, one node at a time. The unique ability of HC3 to mix and match nodes of varying capacity (and across hardware generations!) means your storage can grow as needed each time you expand your cluster.

Secondly, we have introduced a new HC1150D appliance for pre-sale which doubles the CPU capacity with a second physical processor. CPU can often be a performance bottleneck in scaling out the number of VMs supported. With this increase in CPU capacity, the HC1150D scales out an HC3 cluster to support more compute power across a greater number of VMs. The HC1150D also doubles available RAM configuration up to 512GB per appliance.

Below is a preview of the new configuration ranges and starting pricing for the HC1000 series, including the HC1150D.

[Screenshot: HC1000 series configuration ranges and starting pricing]

Scale Computing is committed to giving our customers the best virtualization infrastructure on the market and we will keep integrating greater capacity and computing power into our HC3 appliances. Our focus of simplicity, scalability, and availability will continue to drive our innovation to make IT infrastructure more affordable for you. Look for more announcements to come.


3-Node Minimum? Not So Fast

For a long time, when you purchased HC3, you were told there was a 3 node minimum. This minimum of 3 nodes is what is required to create a resilient, highly available cluster. HC3 architecture, based on this 3 node cluster design, prevents data loss even in the event of a whole node failure. Despite these compelling reasons to require 3 nodes, Scale Computing last week announced a new single node appliance configuration.  Why now?

Recent product updates have enhanced the replication and disaster recovery capabilities of HC3 to make a single node appliance a compelling solution in several scenarios. One such scenario is the distributed enterprise. Organizations with multiple remote or branch offices may not have the infrastructure requirements to warrant a 3 node cluster. Instead, they can benefit from a single node appliance as a right-sized solution for the infrastructure.


In a remote or branch office, a single node can run a number of workloads and easily be managed remotely from a central office. In spite of the lack of clustered, local high availability, single nodes can easily be replicated for DR back to an HC3 cluster at the central office, giving them a high level of protection. Deploying single nodes in this way offers an infrastructure solution for distributed enterprise that is both simple and affordable.

Another compelling scenario where the single node makes perfect sense is as a DR target for an HC3 cluster. Built-in replication can be configured quickly, and without extra software, to a single HC3 node located locally or remotely. While you will likely want the local high availability and data protection a 3-node cluster provides for primary production, a single node may suffice for a DR strategy where you only need to fail over your most critical VMs to continue operations temporarily. This use of a single node appliance is both cost effective and provides a high level of protection for your business.

Replication

Finally, although a single node has no clustered high availability, for very small environments the single node appliance can be deployed with a second appliance as a DR target, giving an acceptable level of data protection and availability for many small businesses. The ease of deployment, ease of management, and DR capabilities of a full-blown HC3 cluster are the same reasons to love the single node appliance for HC3.
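If it helps to picture the planning side, here is a minimal sketch of how you might decide which workloads to protect on a single-node DR target. The VM names, sizes, and capacity figure are invented examples; HC3’s built-in replication itself is configured per VM in the web UI, not through code like this.

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    critical: bool       # must fail over to keep the business running
    provisioned_gb: int

# Invented example inventory; substitute your own workloads.
inventory = [
    VM("domain-controller", True, 80),
    VM("erp-db", True, 400),
    VM("file-server", True, 900),
    VM("test-sandbox", False, 250),
]

dr_node_capacity_gb = 4000  # example usable capacity on the single-node target

replicated = [vm for vm in inventory if vm.critical]
needed_gb = sum(vm.provisioned_gb for vm in replicated)

print(f"VMs to replicate: {[vm.name for vm in replicated]}")
print(f"Capacity needed on the DR node: {needed_gb} GB of {dr_node_capacity_gb} GB")
```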

Find out more about the single node appliance configuration (or as I like to call it, the SNAC-size HC3) in our press release and solution brief.


What Do DDoS Attacks Mean for Cloud Users?

Last Friday, a DDoS attack disrupted major parts of the internet in both North America and Europe. The attack seems to have largely targeted DNS provider Dyn, disrupting access to major service providers such as Level 3, Zendesk, Okta, GitHub, PayPal, and more, according to sources like Gizmodo. This kind of botnet-driven DDoS attack is a harbinger of future attacks that can be carried out over an increasingly connected device world based on the Internet of Things (IoT) and poorly secured devices.

[Image: Level 3 outage reports, October 2016, via DownDetector]

This disruption highlights a particular vulnerability for businesses that have chosen to rely on cloud-based services like IaaS, SaaS, or PaaS. The ability to connect to these services is critical to business operations, and even though a service may be running, if users cannot connect, it is considered downtime. What is particularly scary about these attacks, especially for small and midmarket organizations, is that they become victims of circumstance from attacks directed at larger targets.

As the IoT becomes more of a reality, with more and more devices of questionable security joining the internet, the potential for these attacks and their severity can increase. I recently wrote about how to compare cloud computing and on-prem hyperconverged infrastructure (HCI) solutions, and one of the decision points was reliance on the internet. So it is not only a matter of ensuring a stable internet provider, but also of the stability of the internet in general, given the possibility of attacks targeting any number of different services.

Organizations running services on-prem were not affected by this attack because it did not affect any internal network environments. Choosing to run infrastructure and services internally definitely mitigates the risk of outage from external forces like collateral damage from attacks on service providers. Many organizations that choose cloud services do so for simplicity and convenience because traditional IT infrastructure, even with virtualization, is complex and can be difficult to implement, particularly for small and midsize organizations. It has only been recently that hyperconverged infrastructure has made on-prem infrastructure as simple to use as the cloud.

It is still uncertain how organizations will ultimately balance their IT infrastructure between on-prem and cloud in what is loosely called hybrid cloud. Likely it will simply continue to evolve as more technology emerges. At the moment, however, organizations can choose easy-to-use hyperconverged infrastructure for increased security and stability, or go with cloud providers for completely hands-off management and third-party reliance.

As I mentioned in my cloud vs. HCI article, there are valid reasons to go with either and the solution may likely be a combination of the two. Organizations should be aware that on-prem IT infrastructure no longer needs to be the complicated mess of server vendors, storage vendors, hypervisor vendors, and DR solution vendors. Hyperconverged infrastructure is a viable option for organizations of any size to keep services on-prem, stable, and secure against collateral DDOS damage.


IT Infrastructure: Deploy. Integrate. Repeat.

Have you ever wondered if you are stuck in an IT infrastructure loop, continuously deploying the same types of components and integrating them into an overall infrastructure architecture? Servers for CPU and RAM, storage appliances, hypervisor software, and disaster recovery software/appliances are just some of the different components that you’ve put together from different vendors to create your IT infrastructure.

This model of infrastructure design, combining components from different vendors, has been around for at least a couple of decades. Virtualization has reduced the hardware footprint, but it added one more component, the hypervisor, to the overall mix. As component technologies like compute and storage have evolved alongside the rise of virtualization, they have been modified to function together but have not necessarily been optimized for efficiency.

Take storage, for example. SANs were an obvious fit for virtualization early on. However, the layers of storage protocols and virtual storage appliances used to combine the SAN with virtualization were never efficient. If not for SSD storage, the performance of these systems would be unacceptable at best. But IT continues to implement these architectures because it has been done this way for so long, regardless of the inherent inefficiencies. Luckily, the next generation of infrastructure has arrived in the form of hyperconvergence to break this routine.

Hyperconverged infrastructure (HCI) combines compute, storage, virtualization, and even disaster recovery into a single appliance that can be clustered for high availability.  No more purchasing all of the components separately from different vendors, no more making sure all of the components are compatible, and no more dealing with support and maintenance from multiple vendors on different schedules.

Not all HCI systems are equal, though, as some still rely on separate components. Some use third-party hypervisors that require separate licensing. Some still adhere to SAN architectures that rely on virtual storage appliances (VSAs) or other inefficient storage designs with excessive resource overhead, and that require SSD caching to overcome those inefficiencies.

Not only does HCI reduce vendor management and complexity, but when done correctly it embeds storage in the hypervisor and offers it to VM workloads as a direct attached, block access storage system. This significantly improves storage I/O performance for virtualization. The architecture provides excellent performance on spinning disk, so when SSD is added as a second storage tier, performance improves even further. Also, because the storage is included in the appliance, it eliminates the need to manage a separate SAN appliance.

HCI goes even further in simplifying IT infrastructure to allow management of the whole system from a single interface.  Because the architecture is managed as a single unit and prevalidated, there is no effort spent making sure the various components work together. When the system is truly hyperconverged, including the hypervisor, there is greater control in automation so that software and firmware updates can be done without disruption to running VMs.  And for scaling out, new appliances can be added to a cluster without disruption as well.

The result of these simplifications and improvements is an infrastructure that can be deployed quickly, scaled easily, and managed with little effort. It embodies many of the benefits of the cloud, where the infrastructure is virtually transparent. Instead of spending time managing infrastructure, administrators can focus on managing apps and processes rather than hardware.

Infrastructure should no longer require certified storage experts, virtualization experts, or any kind of hardware experts. Administrators should no longer need entire weekends or month-long projects to deploy and integrate infrastructure or spend sleepless nights dealing with failures. Hyperconvergence breaks the cycle of infrastructure as a variety of different vendors and components. Instead, it makes infrastructure a simple, available, and trusted commodity.


Scale Computing Keeps Storage Simple and Efficient

Hyperconvergence is the combination of storage, compute, and virtualization. In a traditional virtualization architecture, combining these three components from different vendors can be complex and unwieldy without the right number of experts and administrators. When hyperconverged into a single solution, the complexity can be eliminated, if done correctly.

At Scale Computing we looked at the traditional architecture to identify the complexity we wanted to eliminate. The storage architecture that used SAN or NAS storage for virtualization turned out to be very complex. In order to translate storage from the SAN or NAS to a virtual machine, we counted 7 layers of object files, file systems, and protocols that I/O had to traverse to get from the VM to the hardware. Why was this the case?

Because the storage system and the hypervisor were from different vendors, and not designed specifically to work with each other, they needed these layers of protocol translation to integrate. The solution at Scale Computing for our HC3 was to own the hypervisor (HyperCore OS) and the storage system (SCRIBE) so we could eliminate these extra layers and make storage work with VMs just like direct attached storage works with a traditional server. I call it a Block Access, Direct Attached Storage System because I like the acronym.
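To make that layering point concrete, here is a rough, non-authoritative illustration. The “legacy” list below is one plausible path a VM write can take down to a SAN or NAS (the exact layers vary by stack and are not enumerated in this post), and the HC3 list reflects the collapsed, direct attached block access approach described above.

```python
# One plausible enumeration of a legacy I/O path; actual stacks vary by vendor.
legacy_io_path = [
    "guest OS filesystem",
    "virtual disk image file (e.g. VMDK/VHD)",
    "hypervisor datastore filesystem",
    "storage protocol (NFS or iSCSI)",
    "network transport to the SAN/NAS",
    "array controller / RAID layer",
    "physical disks",
]

# The collapsed path described above: SCRIBE presents block storage directly to the VM.
hc3_io_path = [
    "guest OS filesystem",
    "SCRIBE virtual block device (direct attached to the VM)",
    "physical disks",
]

print(f"Layers a VM write crosses, legacy stack: {len(legacy_io_path)}")
print(f"Layers a VM write crosses, HC3:          {len(hc3_io_path)}")
```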

Why didn’t other “hyperconverged” vendors do the same? Primarily because they are not really hyperconverged and they don’t own the hypervisor. As with traditional virtualization architectures, the storage and hypervisor being from different vendors prevents efficiently integrated storage for VMs. These are storage systems designed to support one or more third-party hypervisors, and they generally use virtual storage appliances (VSAs) with more or less the same storage architecture as the traditional virtualization I mentioned earlier.

VSAs not only add to the inefficiency but they consume CPU and RAM resources that could otherwise be used by VM workloads.  To overcome these inefficiencies, these solutions use flash storage for caching to avoid performance issues. In some cases, these solutions have added extra processing cards to their hardware nodes to offload processing. Without being able to provide efficient storage on commodity hardware, they just can’t compete with the low price AND storage efficiency of the HC3.

The efficiency of design for HC3 performance and low price is only part of the story. We also designed the storage to combine all of the disks in a cluster into a single pool that is wide striped across the cluster for redundancy and high availability.  This pooling also allows for complete flexibility of storage usage across all nodes.  The storage pool can contain both SSD and HDD tiers and both tiers are wide striped, highly available, and accessible across the entire virtualization cluster, even on nodes that may have no physical SSD drives.

To keep the tiering both simple and efficient, we designed our own automated tiering mechanism to automatically utilize the SSD storage tier for the blocks of data with the highest I/O.  By default, the storage will optimize the SSD tier for the best overall storage efficiency without anything to manage. We wanted to eliminate the idea that someone would need a degree or certification in storage to use virtualization.

We did recognize that users might occasionally need some control over storage performance, so we implemented a simple tuning mechanism that can be used to give each disk in a cluster a relative level of SSD utilization priority within the cluster. This means you can tune a disk up or down, on the fly, if you know that disk requires less or more I/O and SSD than other disks. You don’t need to know how much SSD it needs, only that it needs less or more than other disks in the cluster, and the automation takes care of the rest. We included a total of 12 levels of prioritization, from 0 to 11: 0 means no SSD, and 11 puts all of a disk’s data on SSD if available.
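Here is a toy model of what that relative prioritization means in practice. It is not the actual HEAT algorithm, just a sketch of the idea: priority 0 keeps a disk off SSD, 11 asks for everything on SSD, and the levels in between behave like relative weights when disks compete for the same flash tier.

```python
def ssd_share(disk_priorities: dict, ssd_capacity_gb: int) -> dict:
    """Toy illustration of relative SSD prioritization; not the real HEAT engine.

    disk_priorities maps a virtual disk name to its priority level (0-11).
    """
    all_flash = {name for name, p in disk_priorities.items() if p == 11}
    weighted = {name: p for name, p in disk_priorities.items() if 0 < p < 11}
    total_weight = sum(weighted.values()) or 1

    shares = {name: 0.0 for name in disk_priorities}        # priority 0 disks stay on HDD
    for name in all_flash:
        shares[name] = float("inf")                         # "all data on SSD if available"
    for name, p in weighted.items():
        shares[name] = ssd_capacity_gb * p / total_weight   # relative weighting
    return shares

# Example: three virtual disks competing for a 1,000 GB SSD tier.
print(ssd_share({"sql-data": 8, "file-share": 4, "archive": 0}, ssd_capacity_gb=1000))
# sql-data ends up with twice the SSD allocation of file-share; archive stays on HDD.
```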


The result of all of the design considerations for HC3 at Scale Computing is simplicity for efficiency, ease of use, and low cost. We’re different and we want to be. It’s as simple as that.


VDI with Workspot

One of the questions we often get for our HC3 platform is, “Can it be used for virtual desktop infrastructure (VDI)?” Yes, of course, it can. In addition to solutions we support like RDS or Citrix, we are very excited about our partnership with Workspot and their VDI 2.0 solution.  But first, I want to explain a bit about why we think VDI on HC3 makes so much sense.

VDI greatly benefits from simplicity in infrastructure. The idea behind VDI is to reduce both infrastructure management and cost by moving workloads from the front end to the back-end infrastructure. This makes it much easier to control resource utilization and manage images. HC3 provides that simple infrastructure: going from unboxing to running VMs takes less than an hour. Also, the entire firmware and software stack, including the hypervisor, can be updated, and the cluster can be scaled out with additional capacity, without downtime. Your desktops will never be as highly available as on HC3. Simple, scalable, and available are the ideas HC3 is built on.

So why Workspot on HC3? Workspot brought together some of the original creators of VDI to reinvent it as a next generation solution. The CTO of Workspot was one of the founding engineers of the VMware View VDI product! What makes it innovative, though? By leveraging cloud management infrastructure, Workspot simplifies VDI management for the IT generalist while supporting BYOD for the modern workplace. Workspot on HC3 can be deployed in under an hour, making it possible to deploy a full VDI solution in less than a day.


We did validation testing with Workspot on HC3 and were able to run 175 desktop VMs on a 3-node HC1150 cluster, using LoginVSI as the performance benchmark. We also validated a 3-node HC4150 cluster with 360 desktops, with similar results. You can see a more detailed description of the reference architecture here. By adding more nodes, and even additional clusters, the capacity can be expanded almost infinitely, but more importantly, just as much as you need, when you need it. We think these results speak for themselves in positioning this solution as a perfect fit for the midmarket, where HC3 already shines.
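As a back-of-the-envelope exercise, those validated densities (roughly 58 desktops per HC1150 node and 120 per HC4150 node) make rough sizing easy. Treat the sketch below as a starting point only; real sizing depends on your desktop workload profile.

```python
import math

# Rough per-node densities derived from the validation results quoted above.
DESKTOPS_PER_NODE = {
    "HC1150": 175 / 3,   # ~58 desktops per node
    "HC4150": 360 / 3,   # 120 desktops per node
}

def nodes_needed(desktops: int, model: str, min_nodes: int = 3) -> int:
    """Estimate node count for a desktop target, respecting the 3-node cluster minimum."""
    return max(min_nodes, math.ceil(desktops / DESKTOPS_PER_NODE[model]))

for target in (100, 250, 500):
    print(f"{target} desktops: ~{nodes_needed(target, 'HC1150')} x HC1150 nodes "
          f"or ~{nodes_needed(target, 'HC4150')} x HC4150 nodes")
```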

Maybe you’ve been considering VDI but have been hesitant because of the added complexity of having to create even more traditional virtualization infrastructure in your datacenter.  It doesn’t have to be that way.  Workspot and Scale Computing are both in the business of reducing complexity and cost to make these solutions more accessible and more affordable.  Just take a look and you’ll see why we continue to do things differently than everyone else.

Click here for the press release.
