Tag Archives: storage

What’s New – August 2017 Edition

Last week, we held a webinar to talk about what is new in HC3 these days. We run these “What’s New” webinars every six months, so if you want to stay informed and have the chance to put questions to our product management team, you should attend. Today I am going to summarize some of the topics from last week’s webinar for you.

If you have been following along closely, you’ll have noticed we’ve been leaking mentions of our newest HC3 models in previous webinars.

HC1150DF

The first new model we have available is the HC1150DF, our first ALL-FLASH appliance. If your applications need screaming fast performance, you can’t get much faster than the HC1150DF.  Like the HC1150D, the HC1150DF has dual processors for higher performance computing. The HC1150DF can be mixed and matched with existing HC3 product lines which allows users to dial in the exact amount of flash needed in their environment.  See the latest support matrix for specifics.

HC5150D

The new HC5150D is a storage-heavy HC3 appliance with 12 drives including 3 SSDs and 9 NL-SAS drives for 3X the storage capacity of the HC1150s. It is a dual processor appliance with plenty of storage capacity to pack in the VMs. The HC5150D can be mixed and matched with other HC3 appliances including the new HC1150DF (see the latest support matrix for specifics).  

Here are the baseline specs and U.S. pricing; regional pricing is available upon request. Click the image to enlarge it for easier reading.

Along with these new models comes the new HyperCore version 7.3 with new features and functionality.

Storage Deduplication and Improved Detail

HyperCore 7.3 added storage deduplication to reduce the footprint of data stored on virtual machines. Virtual disks are deduplicated post-process to eliminate duplicate data blocks and free up storage, with minimal impact to running VMs. With deduplication, disks can hold considerably more data within the same physical disk capacity. Along with deduplication, the storage details available in the HC3 Web Interface have been significantly improved, with more information on utilization and efficiency.
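
If you want a mental model of what is happening under the covers, the sketch below shows the basic idea of post-process block deduplication: after data has been written, blocks are hashed and only one physical copy is kept per unique hash. This is purely illustrative Python, not HyperCore’s actual implementation, and the 4 KB block size is just an assumption for the example.

```python
# Minimal sketch of post-process block deduplication (illustrative only,
# not HyperCore's actual code). Blocks are scanned after the fact; blocks
# with identical content hashes collapse into one stored copy plus references.
import hashlib

BLOCK_SIZE = 4096  # assumed block size for this sketch


def dedupe_post_process(blocks):
    """Map each logical block to a single stored copy of its content.

    `blocks` is a list of byte strings representing a virtual disk's blocks.
    Returns (store, index): `store` holds unique blocks keyed by hash,
    `index` maps logical block numbers to those keys.
    """
    store = {}   # content hash -> unique block data
    index = {}   # logical block number -> content hash
    for lbn, data in enumerate(blocks):
        digest = hashlib.sha256(data).hexdigest()
        store.setdefault(digest, data)   # keep one physical copy
        index[lbn] = digest              # every duplicate just points at it
    return store, index


if __name__ == "__main__":
    disk = [b"A" * BLOCK_SIZE, b"B" * BLOCK_SIZE, b"A" * BLOCK_SIZE]
    store, index = dedupe_post_process(disk)
    print(f"{len(disk)} logical blocks -> {len(store)} stored blocks")
```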

Multi-Cluster Remote Management

With HyperCore 7.3, we’ve added the ability to monitor multiple clusters from within the HC3 Web Interface. The intuitive design shows the status of multiple clusters, local or remote, so you can keep tabs on your entire enterprise of HC3 nodes and clusters. Whether they are single nodes in remote offices, DR targets, or multi-node clusters, the new multi-cluster view provides at-a-glance monitoring of all your HC3 assets.
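
Conceptually, multi-cluster monitoring boils down to polling each cluster for its status and rolling the results up into one view. The sketch below illustrates that idea only; the /status path, URLs, and response fields are hypothetical placeholders, not the HC3 REST API.

```python
# Illustrative sketch of at-a-glance multi-cluster monitoring: poll a status
# endpoint on each cluster and summarize the results. The endpoint path and
# response shape are hypothetical placeholders, not the HC3 API.
import json
import urllib.request

CLUSTERS = {
    "HQ": "https://hc3-hq.example.com",
    "Remote Office": "https://hc3-remote.example.com",
    "DR Target": "https://hc3-dr.example.com",
}


def poll_cluster(base_url):
    """Fetch a hypothetical status document from one cluster."""
    try:
        with urllib.request.urlopen(base_url + "/status", timeout=5) as resp:
            return json.load(resp)
    except OSError as err:
        return {"state": "UNREACHABLE", "detail": str(err)}


def summarize(clusters):
    """Print one status line per cluster, local or remote."""
    for name, url in clusters.items():
        status = poll_cluster(url)
        print(f"{name:15s} {status.get('state', 'UNKNOWN')}")


if __name__ == "__main__":
    summarize(CLUSTERS)
```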

Multi-User Administration and Logging

With HyperCore 7.3, we’ve introduced multi-user login and administration so you can better manage environments with multiple administrators. Multiple users may log in with their own credentials to perform their own administrative functions, and logging keeps track of administrator access to assist in management and troubleshooting.
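
The idea is simple: every administrator authenticates with their own credentials, and every administrative action is recorded along with the user who performed it. Here is a minimal, purely illustrative Python sketch of that pattern; the user names, credential store, and actions are invented for the example, not anything from HyperCore.

```python
# Sketch of multi-user administration with audit logging (illustrative only).
# Each admin logs in with their own credentials, and each action is logged
# with the user who performed it to help with troubleshooting.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("audit")

# Hypothetical credential store; a real system would hash passwords.
USERS = {"alice": "s3cret", "bob": "hunter2"}


def login(username, password):
    """Return the username on success so later actions can be attributed."""
    if USERS.get(username) == password:
        return username
    raise PermissionError(f"invalid credentials for {username!r}")


def perform_action(user, action, target):
    """Record who did what, and when, before carrying out the action."""
    stamp = datetime.now(timezone.utc).isoformat()
    audit_log.info("%s user=%s action=%s target=%s", stamp, user, action, target)
    # ... actual administrative work would happen here ...


if __name__ == "__main__":
    admin = login("alice", "s3cret")
    perform_action(admin, "restart_vm", "web-server-01")
```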

Important note: HyperCore version 7.3 is not yet generally available. It is currently in restricted availability and will roll out to the rest of the HC3 user base later this year.

Those were the new models and features in a nutshell. If you would like more information about what is new with HC3, you can use the links below to get access to a recording of last week’s webinar and our What’s New guide.

August 3rd Webinar Recording

What’s New in HC3 Guide

 

Scale Computing Keeps Storage Simple and Efficient

Hyperconvergence is the combination of storage, compute, and virtualization. In a traditional virtualization architecture, combining these three components from different vendors can be complex and unwieldy without the right number of experts and administrators. When hyperconverged into a single solution, the complexity can be eliminated, if done correctly.

At Scale Computing, we looked at the traditional architecture to identify the complexity we wanted to eliminate. The storage architecture that used SAN or NAS storage for virtualization turned out to be very complex. To translate storage from the SAN or NAS to a virtual machine, we counted 7 layers of object files, file systems, and protocols that I/O had to traverse on its way from the VM to the hardware. Why was this the case?

Because the storage system and the hypervisor were from different vendors, and not designed specifically to work with each other, they needed these layers of protocol translation to integrate. The solution at Scale Computing for our HC3 was to own the hypervisor (HyperCore OS) and the storage system (SCRIBE) so we could eliminate these extra layers and make storage work with VMs just like direct attached storage works with a traditional server. I call it a Block Access, Direct Attached Storage System because I like the acronym.

Why didn’t other “hyperconverged” vendors do the same? Primarily because they are not really hyperconverged and they don’t own the hypervisor. As with traditional virtualization architectures, the storage and hypervisor coming from different vendors prevents efficient, integrated storage for VMs. These are storage systems designed to support one or more third-party hypervisors, and they generally use virtual storage appliances (VSAs) with more or less the same storage architecture as the traditional virtualization stack I mentioned earlier.

VSAs not only add to the inefficiency, they also consume CPU and RAM resources that could otherwise be used by VM workloads. To overcome these inefficiencies, these solutions rely on flash storage for caching to avoid performance issues, and in some cases they have added extra processing cards to their hardware nodes to offload processing. Without being able to provide efficient storage on commodity hardware, they just can’t compete with the low price AND storage efficiency of the HC3.

This efficient design is only part of the HC3 performance and low-price story. We also designed the storage to combine all of the disks in a cluster into a single pool that is wide striped across the cluster for redundancy and high availability. This pooling also allows for complete flexibility of storage usage across all nodes. The storage pool can contain both SSD and HDD tiers, and both tiers are wide striped, highly available, and accessible across the entire virtualization cluster, even on nodes that have no physical SSD drives.
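
To picture how a single cluster-wide pool behaves, consider the sketch below: every disk on every node joins one pool, and an allocation can land on the SSD tier even when the requesting node has no SSDs of its own. This is an illustrative model with an invented three-node cluster, not SCRIBE’s actual allocation logic.

```python
# Illustrative sketch (not SCRIBE itself) of a cluster-wide storage pool:
# all disks on all nodes join one pool, and a write can land on the SSD
# tier even when the node running the VM has no SSDs of its own.
from dataclasses import dataclass


@dataclass
class Disk:
    node: str
    tier: str        # "ssd" or "hdd"
    free_blocks: int


# A hypothetical three-node cluster; node3 has no SSDs at all.
POOL = [
    Disk("node1", "ssd", 1000), Disk("node1", "hdd", 8000),
    Disk("node2", "ssd", 1000), Disk("node2", "hdd", 8000),
    Disk("node3", "hdd", 8000), Disk("node3", "hdd", 8000),
]


def allocate(pool, tier):
    """Pick the disk of the requested tier with the most free space,
    regardless of which node it lives on."""
    candidates = [d for d in pool if d.tier == tier and d.free_blocks > 0]
    if not candidates:
        return None
    disk = max(candidates, key=lambda d: d.free_blocks)
    disk.free_blocks -= 1
    return disk


if __name__ == "__main__":
    # A VM running on node3 (no local SSD) can still get an SSD block.
    chosen = allocate(POOL, "ssd")
    print(f"SSD block allocated on {chosen.node}")
```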

To keep the tiering both simple and efficient, we designed our own automated tiering mechanism to automatically utilize the SSD storage tier for the blocks of data with the highest I/O.  By default, the storage will optimize the SSD tier for the best overall storage efficiency without anything to manage. We wanted to eliminate the idea that someone would need a degree or certification in storage to use virtualization.

We did recognize that users might occasionally need some control over storage performance, so we implemented a simple tuning mechanism that gives each disk in a cluster a relative level of SSD utilization priority. This means you can tune a disk up or down, on the fly, if you know that disk requires less or more I/O and SSD than other disks. You don’t need to know how much SSD it needs, only that it needs less or more than other disks in the cluster, and the automation takes care of the rest. We included 12 levels of prioritization, from 0 (no SSD) up to 11 (place all of the disk’s data on SSD, if available).
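
As a rough mental model of how a 0-11 priority might translate into SSD placement, here is an illustrative sketch: the priority sets what share of a disk’s blocks can live on flash, and the hottest blocks fill that budget first. The linear mapping and the heat counters are assumptions made for the example, not the actual HyperCore heuristic.

```python
# Sketch of the 0-11 priority idea (illustrative, not the actual HyperCore
# heuristic): priority 0 keeps a disk entirely off SSD, priority 11 places
# everything on SSD if space allows, and values in between bias how much of
# the disk's hottest data the automated tiering keeps on flash.


def ssd_share(priority):
    """Translate a 0-11 priority into a target fraction of a virtual disk's
    blocks kept on the SSD tier. The linear mapping is an assumption."""
    if not 0 <= priority <= 11:
        raise ValueError("priority must be between 0 and 11")
    return priority / 11.0


def pick_ssd_blocks(block_heat, priority):
    """Given per-block I/O 'heat' counters, choose which blocks belong on
    SSD: the hottest ones, up to the share implied by the priority."""
    budget = int(len(block_heat) * ssd_share(priority))
    hottest_first = sorted(block_heat, key=block_heat.get, reverse=True)
    return set(hottest_first[:budget])


if __name__ == "__main__":
    heat = {0: 900, 1: 5, 2: 300, 3: 40, 4: 700, 5: 1}
    print(pick_ssd_blocks(heat, priority=4))   # only the hottest blocks on SSD
    print(pick_ssd_blocks(heat, priority=11))  # everything on SSD
```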


The result of all of the design considerations for HC3 at Scale Computing is simplicity for efficiency, ease of use, and low cost. We’re different and we want to be. It’s as simple as that.


The King is Dead. Long Live the King!

With a title like Death by 1,000 cuts: Mainstream storage array suppliers are bleeding, I couldn’t help but read Chris Mellor’s article on the decline of traditional storage arrays. It starts off just as strong with:

Great beasts can be killed by a 1,000 cuts, bleeding to death from the myriad slashes in their bodies – none of which, on their own, is a killer. And this, it seems, is the way things are going for big-brand storage arrays, as upstarts slice away at the market…

And his reasons why are spot on, based on what we have seen in our target customer segment for HC3.

the classic storage array was under attack because it was becoming too limiting, complex and expensive for more and more use-cases.

Looking at our own use case for HC3, storage array adoption in our target segment (the SMB) rose with the demand for virtualization, since shared storage enabled things like live migration and failover of VMs. The storage array was a necessary evil, the price of knowing that critical workloads weren’t going to go down for days or even weeks in the event of a hardware failure.

HC3 Under The Hood: Integrated Scale-Out Storage Pool – Data Mirroring and Striping

In this post, we will dive into the HC3 distributed storage layer and detail how data is stored across all the physical disks in the cluster, providing data redundancy should a disk drive fail and aggregating the I/O performance of all the drives in the system.

HC3 treats all storage in the cluster as a single logical pool for management and scalability purposes, but the real magic is in how data blocks are stored redundantly across the cluster to maximize availability as well as performance.
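
To make the mirroring and striping idea concrete, here is a small illustrative sketch, not the actual HC3 placement algorithm: every block gets two copies on disks that live on different nodes, and consecutive blocks are striped across the whole cluster, so no single disk failure loses data and every spindle contributes to I/O. The cluster layout below is invented for the example.

```python
# Illustrative sketch of mirror-and-stripe placement (not the actual HC3
# algorithm): each block is written to two disks on different nodes, and
# consecutive blocks are spread across the whole cluster so reads and
# writes aggregate the performance of every drive.
from itertools import combinations

# Hypothetical cluster: (node, disk) pairs.
DISKS = [("node1", "d0"), ("node1", "d1"),
         ("node2", "d0"), ("node2", "d1"),
         ("node3", "d0"), ("node3", "d1")]

# Every legal mirror pair spans two different nodes, so losing any single
# disk (or node) leaves a surviving copy of every block.
MIRROR_PAIRS = [p for p in combinations(DISKS, 2) if p[0][0] != p[1][0]]


def place(block_number):
    """Stripe blocks round-robin across the mirror pairs."""
    return MIRROR_PAIRS[block_number % len(MIRROR_PAIRS)]


if __name__ == "__main__":
    for block in range(4):
        primary, secondary = place(block)
        print(f"block {block}: copy on {primary}, copy on {secondary}")
```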

VMware is Dead

We recently presented at an analyst-centric conference in which the lead-in to our presentation was “VMware is dead. Storage is dead.”  We certainly drew some inquisitive looks from the audience. But as we explained HC3 and the underlying technology, the puzzled looks turned into nods of agreement.

Some of the latest buzz has centered around the “software-defined datacenter,” an extension of software-defined networking that has made its way into software-defined storage and software-defined servers, all three of which culminate in the software-defined datacenter. In the end, it’s all about the promise of making infrastructure easy to deploy and manage.