All posts by David Paquette

Turning Hyperconvergence up to 11

People seem to be asking me a lot lately about incorporating flash into their storage architecture.  You probably already know that flash storage is still a lot more expensive than spinning disk.  You probably don't need flash I/O performance for all of your workloads, and you don't need to pay for an all-flash storage system.  That is where hybrid storage comes in.

Hybrid storage solutions featuring a combination of solid state drives and spinning disks are not new to the market, but because of the cost per GB of flash compared to spinning disk, adoption and accessibility for most workloads have been low. Small and midsize businesses, in particular, may not know whether implementing a hybrid storage solution is right for them.

Hyperconverged infrastructure also provides the best of both worlds by combining virtualization with storage and compute resources. How hyperconvergence is defined as an architecture is still up for debate, and you will see implementations built on more traditional storage as well as those with truly integrated storage. Either way, hyperconverged infrastructure has begun making flash storage more ubiquitous throughout the datacenter, and HC3 hyperconverged clustering from Scale Computing is now making it even more accessible with our HEAT technology.

HEAT is HyperCore Enhanced Automated Tiering, the latest addition to the HyperCore hyperconvergence architecture. HEAT combines intelligent I/O mapping with the redundant, wide-striping storage pool in HyperCore to provide high levels of I/O performance, redundancy, and resiliency across both spinning and solid state disks. Individual virtual disks in the storage pool can be tuned for relative flash prioritization to optimize the data workloads on those disks. The intelligent HEAT I/O mapping makes the most efficient use of flash storage for the virtual disk following the guidelines of the flash prioritization configured by the administrator on a scale of 0-11. You read that right. Our flash prioritization goes to 11.
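To make the idea concrete, here is a minimal sketch of how a 0-11 flash priority per virtual disk could translate into a placement decision for hot blocks. This is an illustration only, not the HyperCore/HEAT implementation; the function names, the weighting formula, and the threshold are all assumptions for the sake of example.

```python
# Illustrative sketch only -- not the actual HyperCore/HEAT implementation.
# Assumes a per-virtual-disk flash priority from 0 (never use flash)
# to 11 (flash-first), mapped to a weight used when placing hot blocks.

def flash_weight(priority: int) -> float:
    """Map a 0-11 flash priority to a placement weight between 0.0 and 1.0."""
    if not 0 <= priority <= 11:
        raise ValueError("flash priority must be between 0 and 11")
    return priority / 11.0

def place_block(priority: int, block_heat: float, flash_free_ratio: float) -> str:
    """Decide (hypothetically) whether a block lands on SSD or spinning disk.

    block_heat: recent access frequency normalized to 0.0-1.0
    flash_free_ratio: fraction of the cluster's flash tier still free
    """
    # Hotter blocks on higher-priority virtual disks win flash first,
    # and placement backs off as the flash tier fills up.
    score = flash_weight(priority) * block_heat * flash_free_ratio
    return "ssd" if score > 0.25 else "hdd"

# Example: a database virtual disk tuned to 9 with a frequently accessed block
print(place_block(priority=9, block_heat=0.8, flash_free_ratio=0.6))  # -> "ssd"
```

The takeaway is simply that a single per-disk number can steer how aggressively each workload competes for the flash tier, which is the kind of tuning HEAT exposes to the administrator.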


HyperCore gives you high-performing storage whether you run spinning disk only or hybrid tiered storage, because it is designed to let each virtual disk take advantage of the speed and capacity of the whole storage infrastructure. The more resources you add to the cluster, the better the performance. HEAT takes that performance to the next level by giving you fine-tuning options for not only every workload, but every virtual disk in your cluster. Oh, and I should have mentioned it comes at a lower price than other hyperconverged solutions.

Watch this short video demo of HC3 HEAT:

If you still don’t know whether you need to start taking advantage of flash storage for your workloads, Scale Computing can help with free capacity planning tools to see whether your I/O needs require flash or whether spinning disks still suffice under advanced, software-defined storage pooling. That is one of the advantages of a hyperconverged solution like HC3: Scale Computing has already validated the infrastructure and provides the expertise and guidance you need.

New and Improved! – Snapshot Scheduling

Scale Computing is rolling out a new scheduling mechanism for HC3 VM snapshots, and it is something to take note of.  Scheduling snapshots is nothing new, but it is often burdensome: you either have to create custom snapshot schedules for each VM or you are restricted to canned schedules that don’t really fit.  Luckily, Scale created a scheduling mechanism that is extremely flexible yet easy to use, and I can honestly say I love this new feature.

The first thing I love about the new snapshot scheduling is that all schedules are template-based, meaning once you create a schedule, it can quickly and easily be applied to other VMs.  You don’t have to waste time recreating the schedule on each VM if one schedule will work for many VMs.  Just create the schedule once and apply at will.  The schedules are defined by both the snapshot intervals and the retention period for those scheduled snapshots.

The second thing I love about the snapshot scheduling in HC3 is that you can build a schedule from multiple simple recurrence rules.  This might sound redundant, but it lets you mix and match rules without making any single rule overly complex.  You can add as many or as few rules to a schedule as needed to meet your SLAs.

For example, you might want a snapshot every 2 hours kept for 24 hours and also a snapshot every day kept for 7 days.  Instead of mashing these together into a single, confusing rule, they exist as two simple rules: A: every 2 hours for 24 hours, and B: every 1 day for 7 days.  The granularity of the scheduling rules ranges from minutes to months, providing maximum flexibility when defining schedules to meet any need.
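As a rough sketch of how two such rules might be expressed and expanded into snapshot times with retention, here is a small example. The rule format and field names here are assumptions for illustration only, not HC3's actual scheduling syntax.

```python
# Illustrative sketch only -- the rule format is assumed, not HC3's actual syntax.
from datetime import datetime, timedelta

# Two simple recurrence rules, each with its own retention:
# A: every 2 hours, kept for 24 hours; B: every 1 day, kept for 7 days.
schedule = [
    {"every": timedelta(hours=2), "keep_for": timedelta(hours=24)},
    {"every": timedelta(days=1),  "keep_for": timedelta(days=7)},
]

def due_snapshots(rules, start, now):
    """Return (snapshot_time, expiry_time) pairs each rule has generated so far."""
    snaps = []
    for rule in rules:
        t = start
        while t <= now:
            snaps.append((t, t + rule["keep_for"]))
            t += rule["every"]
    return sorted(snaps)

start = datetime(2016, 4, 19, 0, 0)
for taken, expires in due_snapshots(schedule, start, start + timedelta(days=1)):
    print(f"snapshot at {taken}  expires {expires}")
```

The point is that each rule stays simple on its own, and the schedule as a whole is just the union of what the rules produce, which is what makes it easy to reuse the same schedule template across many VMs.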

What makes this scheduling feature even more useful is that it is directly tied into replication between clusters.  Replication between HC3 clusters is snapshot-based and snapshot schedules determine when data is replicated.  Replication can be scheduled as often as every 5 minutes or to whatever other schedule meets your disaster recovery SLAs for cluster-to-cluster failover.  This gives you nearly unlimited options for sizing replication traffic and frequency to meet your needs.

Watch this short video to see snapshot scheduling in action.

With the flexibility of this new scheduling mechanism, you will most likely be managing snapshots with just a short list of simple schedules that you can apply to your VMs quickly and quietly with no disruption to VM availability.  Snapshot scheduling is available now so check with ScaleCare support engineers for the update.
