All posts by David Paquette

4 Things You Lose with Scale Computing HC3

Choosing to convert to hyperconvergence is a big decision, and it is important to carefully consider the implications. For a small or midsize datacenter, these considerations are even more critical. Here are 4 important things you lose when switching to Scale Computing HC3 hyperconvergence.

1.  Management Consoles

When you implement an HC3 cluster, you no longer have multiple consoles to manage separate server, storage, and virtualization solutions. You are reduced to a single console from which to manage the infrastructure and perform all virtualization tasks, with only one view of all cluster nodes, VMs, and storage and compute resources. Only one console! Can you even imagine not having to manage storage subsystems in a separate console just to make the whole thing work? (Note: You may also begin losing vendor-specific knowledge of storage subsystems, since all storage is managed as a single pool alongside the hypervisor.)

2. Nights and Weekends in the Datacenter

Those many nights and weekends you’ve become accustomed to working, spent performing firmware, software, or even hardware updates to your infrastructure, will be lost. With HC3, you don’t have to take workloads offline to perform infrastructure updates, so you will simply do these during regular hours. No more endless cups of coffee and the whir of cooling fans to keep you awake on those late nights in the server room. Your relationship with the nightly cleaning staff at the office will undoubtedly suffer unless you can find application-layer projects to replace the nights and weekends you used to spend on infrastructure.

3. Hypervisor Licensing

You’ll no doubt feel this loss even while evaluating and purchasing a new HC3 cluster. There just isn’t any hypervisor licensing to be found, because the entire hypervisor stack is included without any third-party licensing required. There are no license keys, no licensing details, no licensing prices or options. The hypervisor is just there. Some of the other hyperconvergence vendors will still sell you hypervisor licensing, but it just won’t be found at Scale Computing.

4. Support Engineers

You’ve spent months and years developing close relationships with a circle of support engineers from your various server, storage, and hypervisor vendors, but those relationships simply can’t continue. No, you will only be contacting Scale Computing for all of your server, storage, virtualization, and even DR needs. You’ll no doubt miss the many calls and hours of finger-pointing spent with your former vendor support engineers troubleshooting even the simplest issues.

Change invariably leads to loss. We all deal with the loss of datacenter complexity in our own way, but be assured that ScaleCare support engineers are on call 24/7/365 to help you through your transition to a new HC3 cluster. My thoughts are with you.

New and Improved! – Real-Time Per VM Statistics

When we designed HC3 clusters, we made them fault-tolerant and highly available so that you did not need to sit around all day staring at the HC3 web interface in case something went wrong. We designed HC3 so you could rest easy knowing your workloads were on a reliable infrastructure that didn’t need a babysitter. Still, when you do need to manage your VM workloads on HC3, you need fast, reliable data to make management decisions. That’s why we have implemented some new statistics along with our new storage features.

If you haven’t already heard the news, we have integrated SSD flash storage into our already hyper-efficient software-defined storage layer. We knew this would make you even more curious about your per-VM IOPS, so we added that statistic both cluster-wide and per VM, refreshed continuously in real time.

Up until now, you have been used to at-a-glance monitoring of CPU utilization, RAM utilization, and storage utilization for the cluster; now you will see the cluster-wide IOPS statistic right alongside them. For individual VMs, you will now see real-time statistics for both storage utilization and IOPS, right on the main web interface view.
[Screenshot: real-time per-VM statistics in the HC3 web interface]

Why are we doing this now? The new flash storage integration and automated tiering architecture allow you to tune the priority of flash utilization on the individual virtual disks in your VMs. Monitoring the IOPS for each VM will help guide you as you tune the virtual disks for maximum performance. You’ll not only see the benefits of the flash storage more clearly in the web interface, but also the payoff of tuning specific workloads to make the best use of the flash in your cluster.
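To make that concrete, here is a minimal monitoring sketch. The REST endpoint, JSON shape, and IOPS threshold are all hypothetical stand-ins of my own for illustration; HC3 itself surfaces these statistics right in the web interface, so treat this only as one way you might watch for tuning candidates programmatically.

```python
# Hypothetical sketch: poll per-VM IOPS and flag the busiest VMs as candidates
# for a higher flash priority. The endpoint path, response shape, and
# threshold are assumptions for illustration, not a documented HC3 API.
import time

import requests

HC3_HOST = "https://hc3-cluster.example.com"  # hypothetical cluster address
IOPS_THRESHOLD = 2000                         # arbitrary example threshold


def watch_for_tuning_candidates(poll_seconds: int = 10) -> None:
    while True:
        # Assumed response shape: [{"name": "sql01", "iops": 3200}, ...]
        stats = requests.get(f"{HC3_HOST}/api/vm-stats", timeout=5).json()
        hot = [vm for vm in stats if vm["iops"] > IOPS_THRESHOLD]
        for vm in sorted(hot, key=lambda v: v["iops"], reverse=True):
            print(f"{vm['name']}: {vm['iops']} IOPS - consider more flash priority")
        time.sleep(poll_seconds)


if __name__ == "__main__":
    watch_for_tuning_candidates()
```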

Take advantage of these new statistics when you update your HyperCore software, and you’ll see the benefit of monitoring your storage utilization more granularly. Talk to your ScaleCare support engineers to learn how to get this latest update.

Turning Hyperconvergence up to 11

People seem to be asking me a lot lately about incorporating flash into their storage architecture. You probably already know that flash storage is still a lot more expensive than spinning disks. You are probably not going to need flash I/O performance for all of your workloads, nor do you need to pay for an all-flash storage system. That is where hybrid storage comes in.

Hybrid storage solutions featuring a combination of solid state drives and spinning disks are not new to the market, but because of the cost per GB of flash compared to spinning disk, adoption and accessibility for most workloads remain low. Small and midsize businesses, in particular, may not know if implementing a hybrid storage solution is right for them.

Hyperconverged infrastructure provides a similar best of both worlds by combining virtualization with storage and compute resources. How hyperconvergence is defined as an architecture is still up for debate; you will see implementations built on more traditional storage and others with truly integrated storage. Either way, hyperconverged infrastructure has begun making flash storage more ubiquitous throughout the datacenter, and HC3 hyperconverged clustering from Scale Computing is now making it even more accessible with our HEAT technology.

HEAT is HyperCore Enhanced Automated Tiering, the latest addition to the HyperCore hyperconvergence architecture. HEAT combines intelligent I/O mapping with the redundant, wide-striped storage pool in HyperCore to provide high levels of I/O performance, redundancy, and resiliency across both spinning and solid state disks. Individual virtual disks in the storage pool can be tuned for relative flash prioritization to optimize the data workloads on those disks. The intelligent HEAT I/O mapping makes the most efficient use of flash storage for each virtual disk, following the flash prioritization configured by the administrator on a scale of 0-11. You read that right. Our flash prioritization goes to 11.

[Screenshot: setting a virtual disk’s flash priority in the HC3 web interface]
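Scale has not published the internals of the HEAT I/O mapping, but one simplified way to picture the 0-11 setting is as a weight on each virtual disk’s share of the flash tier. The toy model below is my own illustration under that assumption, not the actual algorithm:

```python
# Toy model only: treat the 0-11 flash priority as a weight on each virtual
# disk's share of the flash tier. This is NOT Scale's published HEAT
# algorithm, just a simple picture of priority-weighted allocation.

def flash_shares(priorities: dict[str, int], flash_gb: float) -> dict[str, float]:
    """Split a flash pool across virtual disks in proportion to priority (0-11).

    A priority of 0 means the disk gets no flash at all.
    """
    total = sum(priorities.values())
    if total == 0:
        return {disk: 0.0 for disk in priorities}
    return {disk: flash_gb * p / total for disk, p in priorities.items()}


# Example: a database disk turned up to 11, a file share at 4, an archive at 0.
print(flash_shares({"sql-data": 11, "file-share": 4, "archive": 0}, flash_gb=960))
# {'sql-data': 704.0, 'file-share': 256.0, 'archive': 0.0}
```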

HyperCore gives you high-performing storage on both spinning-disk-only and hybrid tiered clusters because it is designed to let each virtual disk take advantage of the speed and capacity of the whole storage infrastructure. The more resources you add to the cluster, the better the performance. HEAT takes that performance to the next level by giving you fine-tuning options for not only every workload, but every virtual disk in your cluster. Oh, and I should have mentioned it comes at a lower price than other hyperconverged solutions.

Watch this short video demo of HC3 HEAT:

If you still don’t know whether you need to start taking advantage of flash storage for your workloads, Scale Computing can help with free capacity planning tools to see whether your I/O needs require flash or spinning disks still suffice under advanced, software-defined storage pooling. That is one of the advantages of a hyperconvergence solution like HC3: the guys at Scale Computing have already validated the infrastructure and provide the expertise and guidance you need.

New and Improved! – Snapshot Scheduling

Scale Computing is rolling out a new scheduling mechanism for HC3 VM snapshots, and it is something to take note of. Scheduling snapshots is nothing new, but it is often a burdensome task: either you create custom snapshot schedules for each VM or you are restricted by canned schedules that don’t really fit. Luckily, Scale created a scheduling mechanism that is extremely flexible and easy to use at the same time, and I can honestly say I love this new feature.

The first thing I love about the new snapshot scheduling is that all schedules are template-based, meaning once you create a schedule, it can quickly and easily be applied to other VMs. You don’t have to waste time recreating the schedule on each VM if one schedule will work for many VMs. Just create the schedule once and apply at will. The schedules are defined by both the snapshot intervals and the retention period for those scheduled snapshots.

The second thing I love about the snapshot scheduling in HC3 is that you can build a schedule from multiple simple recurrence rules. This might sound like unnecessary redundancy, but it lets you mix and match rule formulas without making any single rule overly complex. You can add as many or as few rules to a schedule as needed to meet SLAs.

For example, you might want a snapshot every 2 hours for 24 hours and also a snapshot every day for 7 days. Instead of mashing these together into a singularly confusing rule, they exist as two simple rules: A) every 2 hours for 24 hours, and B) every 1 day for 7 days. The granularity of the scheduling rules ranges from minutes to months to provide maximum flexibility when defining schedules to meet any needs.
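As a sketch of how such template-based schedules might be modeled (the class and field names below are my own stand-ins, not HC3’s actual data model), the two rules from the example stay separate, and the whole template can be applied to any number of VMs:

```python
# A sketch of template-based snapshot schedules built from simple recurrence
# rules. Class and field names are illustrative assumptions, not HC3's
# actual data model.
from dataclasses import dataclass


@dataclass
class RecurrenceRule:
    every_minutes: int  # snapshot interval
    keep_minutes: int   # how long snapshots from this rule are retained


@dataclass
class ScheduleTemplate:
    name: str
    rules: list[RecurrenceRule]


# "Every 2 hours for 24 hours" and "every 1 day for 7 days" remain two
# simple rules instead of one convoluted one.
standard_sla = ScheduleTemplate(
    name="standard-sla",
    rules=[
        RecurrenceRule(every_minutes=2 * 60, keep_minutes=24 * 60),
        RecurrenceRule(every_minutes=24 * 60, keep_minutes=7 * 24 * 60),
    ],
)

# Template-based: define once, then apply to as many VMs as you like.
vm_schedules = {vm: standard_sla for vm in ("sql01", "web01", "file01")}
```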

What makes this scheduling feature even more useful is that it is directly tied into replication between clusters. Replication between HC3 clusters is snapshot-based, and snapshot schedules determine when data is replicated. Replication can be scheduled as often as every 5 minutes, or to whatever other schedule meets your disaster recovery SLAs for cluster-to-cluster failover. This gives you nearly unlimited options for sizing replication traffic and frequency to meet your needs.
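As a rough way to reason about those options (my own back-of-envelope framing, not a Scale formula), the worst-case data loss window for snapshot-based replication is roughly the snapshot interval plus the time the replication transfer takes:

```python
# Back-of-envelope RPO estimate for snapshot-based replication. This framing
# is an assumption for illustration, not a Scale Computing formula.
def worst_case_rpo_minutes(interval_min: float, transfer_min: float) -> float:
    # Data written just after a snapshot waits a full interval for the next
    # snapshot, then must finish transferring before it is safe off-cluster.
    return interval_min + transfer_min


# Replicating every 5 minutes with ~2-minute transfers: about a 7-minute
# worst-case window.
print(worst_case_rpo_minutes(5.0, 2.0))  # 7.0
```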

Watch this short video to see snapshot scheduling in action.

With the flexibility of this new scheduling mechanism, you will most likely be managing snapshots with just a short list of simple schedules that you can apply to your VMs quickly and quietly, with no disruption to VM availability. Snapshot scheduling is available now, so check with ScaleCare support engineers for the update.
