All posts by David Paquette

Don’t Double Down on Infrastructure – Scale Out as Needed

There has long been a philosophy in IT infrastructure that whenever you add capacity, you should add plenty of room to grow into. This idea is rooted in traditional architecture that was complex, consisting of many disparate systems held together by the rigorous management of administrators. Scaling out capacity has been a treacherous process that takes weeks or months of stress-filled nights and weekends. These projects are so undesirable that administrators, and anyone else involved, would rather overspend on more capacity than they need in order to avoid scaling out again for as long as possible.

There are a number of reasons why IT departments may need to scale out. Hopefully it is because of growth of the business, which usually coincides with increased budgets. It could be that business needs have shifted to require more IT services, demanding more data, more computing, and thus more capacity. It could be that the current infrastructure was under-provisioned in the first place, creating more problems than it solved. Whatever the case, sooner or later, everyone needs to scale out.

The traditional planning process for scaling out involves first looking at where the capacity is bottlenecking. It could be storage, CPU, RAM, networking, or any level of caching or bussing in between. More than likely it is not just one of these but several, which causes many organizations to simply hit the reset button and replace everything, if they can afford it, that is. Then they implement the new infrastructure, only to go through the same process again a few years down the line. Very costly. Very inefficient.

Without replacing the whole infrastructure, administrators must look to the various pieces of their infrastructure that might need to be refreshed or upgraded. This process can seem like navigating a minefield of unforeseen consequences. Maybe you want to swap out the disks in the SAN for faster, larger disks. Can the storage controllers handle the increased speed and capacity? What about the network? Can it handle the increased I/O from faster and deeper storage? Can the CPUs handle it? Good administrators can identify at least some of these dependencies during planning, but it can often take a team of experts to fully understand the complexities, and sometimes only through testing and some trial and error.

Exhausted yet? Fortunately, this process of scaling out has been dramatically simplified with hyperconverged infrastructure.  With a clustered, appliance-based architecture, capacity can be added very quickly. For example, with HC3 from Scale Computing, a new appliance can be added to a cluster within minutes, with resources then immediately available, adding RAM, CPU, and storage capacity to the infrastructure.

HC3 even lets you mix and match different appliances in the cluster so that you can add just the capacity you need. Adding the new appliance to the cluster (where it is then called a “node”, of course) is as simple as racking and cabling it, assigning it network settings, and pointing it at the cluster. The capacity is automatically absorbed into the cluster and the storage added seamlessly to the overall storage pool.

This all means that with hyperconverged infrastructure, you do not need to buy capacity for the future right now. You can get just what you need now (with a little cushion of course), and scale out simply and quickly when you need to in the future. The traditional complexity of infrastructure architecture is now the real bottleneck of capacity scale out.  Hyperconverged Infrastructure is the solution.


7 Reasons Why I Work at Scale Computing

I came from a background in software that has spanned software testing, systems engineering, product marketing, and product management, and this year my career journey brought me to Scale Computing as the Product Marketing Manager. During the few months I have been with Scale, I’ve been amazed by the hard work and innovation embodied in this organization.  Here are some of the reasons I joined Scale and why I love working here.

1 – Our Founding Mission

Our founders are former IT administrators who understand the challenges faced by IT departments with limited budgets and staff. They wanted to reinvent IT infrastructure to solve those challenges and get IT focused on applications. That’s why they helped coin the term “hyperconverged infrastructure”.

2 – Focus on the Administrator

Our product family, HC3, was designed from the start to address the needs of datacenters managed by as few as one administrator, bringing the features and efficiency of enterprise solutions within reach of any budget. HC3 scales from small to enterprise because its roots are planted in the needs of the individual administrator focused on keeping applications available.

3 – Second to None Support

I have a firm belief that good support is the cornerstone of successful IT solutions. Our world-class support includes not only hardware replacement but 24/7/365 phone support from qualified experts. We don’t offer any other level of support because we believe every customer, no matter their size or budget, deserves the same level of support.

4 – 1500+ Customers, 5500+ Installs

We started in 2008, brought HC3 to market in 2012, and have since sold to customers in nearly every industry, including manufacturing, education, government, healthcare, finance, hotel/restaurant, and more. Customer success is our driving force. Our solution is driving that success.

5 – Innovative Technology

We designed the HC3 solution from the ground up.  Starting with the strengths of open source KVM virtualization, we developed our own operating system called HyperCore which includes our own block access, direct attached storage system with SSD tiering for maximum storage efficiency. We believe that if it is worth doing then it is worth doing the right way.

6 – Simplicity, Scalability, and Availability

These core ideas keep us focused on reducing costs and management when it comes to deployment, software and firmware updates, capacity scaling, and minimizing planned and unplanned downtime.  I believe in our goal to minimize the cost and management footprint of infrastructure to free up resources for application management and service delivery in IT.

7 – Disaster Recovery, VDI, and Distributed Enterprise

HC3 is more than just a simple infrastructure solution. It is an infrastructure platform that supports multiple use cases including disaster recovery sites, virtual desktop infrastructure, and remote office and branch office infrastructure. I love that the flexibility of HC3 allows it to be used in nearly every type of industry.

Scale Computing is more than just an employer; it is a new passion for me. I hope you keep following my blog posts to learn more about the awesome things we are doing here at Scale and I hope we can help you bring your datacenter into the new hyperconvergence era. If you have any questions or feedback about my blog posts, hyperconvergence, or Scale Computing, you can contact me at


Hyperconvergence for the Distributed Enterprise

IT departments face a variety of challenges but maybe none as challenging as managing multiple sites. Many organizations must provide IT services across dozens or even hundreds of small remote offices or facilities. One of the most common organizational structures for these distributed enterprises is a single large central datacenter where IT staff are located supporting multiple remote offices where personnel have little or no IT expertise.

These remote sites often need the same variety of application services and data services needed in the central office, but on a smaller scale. To run these applications, these sites need multiple servers, storage solutions, and disaster recovery. There is no IT staff on site so remote management is ideal to cut down on the productivity cost of sending IT staff to remote sites frequently to troubleshoot issues. This is where the turn key appliance approach of hyperconvergence shines.

A hyperconverged infrastructure solution combines server, storage, and virtualization software into a single appliance that can be clustered for scalability and high availability. It eliminates the complexity of having disparate server hardware, storage hardware, and virtualization software from multiple vendors and having to try to replicate the complexity of that piecemeal solution at every site.  Hyperconverged infrastructure provides a simple repeatable infrastructure out of the box.  This approach makes it easy to scale out infrastructure at sites on demand from a single vendor.

At Scale Computing, we offer the HC3 solution, which truly combines server, storage, virtualization, and even disaster recovery and high availability. We provide a wide range of hardware configurations to support everything from very small implementations all the way up to full enterprise datacenter infrastructure. And because these various node configurations can be mixed and matched, you can very quickly scale the infrastructure at a site with extra capacity and/or compute power as needed.

HC3 management is all web-based, so sites can easily be managed remotely. From provisioning new virtual machines to opening consoles for each VM for simple, direct management from the central datacenter, it’s all in the web browser. There is even a reverse SSH tunnel available for ScaleCare support to provide additional remote management of lower-level software features in the hypervisor and storage system. Redundant hardware components and self-healing mean that hardware failures can be absorbed while applications remain available until IT staff or local staff can replace the failed components.

With HC3, replication is built in to provide disaster recovery and high availability back to the central datacenter in the event of an entire site failure. Virtual machines and applications can be back up and running within minutes, allowing users at the remote site to reconnect as needed. You can achieve both simplified infrastructure and remote high availability in a single solution from a single vendor. One back to pat, or one throat to choke, as they say.

If you want to learn more about how hyperconvergence can make distributed enterprise simpler and easier, talk to one of our hyperconvergence experts.


4 Hidden Infrastructure Costs for the SMB

Infrastructure complexity is not unique to enterprise datacenters. Just because a business or organization is small does not mean it is exempt from the feature needs of big enterprise datacenters. Small and mid-size organizations require fault tolerance, high availability, mobility, and flexibility as much as anyone. Unfortunately, the complexity of traditional datacenter and virtualization architecture hits the SMB the hardest. Here are 4 of the hidden costs that can cripple the SMB IT budget.

1 – Training and Expertise

Setting up a standard virtualization infrastructure can be complex; it requires virtualization, networking, and storage expertise. In larger enterprises, expertise is often spread out across dozens of admins through new hires, formal training, or consulting. In the SMB datacenter, however, with limited budgets and only a handful of admins or even just one, expertise can be harder to come by. Self-led training and research can take costly hours out of every week, and admins may only have time to achieve the minimum level of expertise needed to maintain an infrastructure, without the ability to optimize it. That lack of expertise affects infrastructure performance and stability, reducing the return on infrastructure investment.

2 – Support Run-Around

A standard virtualization infrastructure has components from a number of different vendors, including the storage vendor, server vendor, and hypervisor vendor, to name just the basics. Problems arising in the infrastructure are not always easy to diagnose, and with multiple vendors and vendor support centers in the mix, this can lead to a lot of finger pointing. Admins can spend hours if not days calling various support engineers from different vendors to pinpoint the issue. Long troubleshooting times can mean long outages and lost productivity because of vendor support run-around.

3 – Admin Burn-Out

The complexity of standard virtualization environments, containing multiple vendor solutions and multiple layers of hardware and software, means long nights and weekends spent performing maintenance tasks such as firmware updates, hardware refreshes, capacity additions, and dealing with outages caused by non-optimized architecture. Not to mention, admins of complex architectures cannot detach long enough to enjoy personal time off because of the risk of an outage. Administrators who spend long nights and weekends dealing with infrastructure issues are less productive in their daily tasks and have less energy and focus for initiatives to improve process and performance.

4 – Brain Drain

Small IT shops are particularly susceptible to brain drain. The knowledge of all of the complex hardware configurations and application requirements is concentrated in a very small group, in some cases one administrator. While those individuals are around, there is no problem but when one leaves for whatever reason, there is a huge gap in knowledge which might never be replaced. There can be huge costs involved in rebuilding the knowledge or redesigning systems to match the expertise of the remaining or replacement staff.

Although complexity has hidden costs for all small, medium, and enterprise datacenters, the complexity designed for the enterprise and inherited down into the SMB makes those costs more acute. When choosing an infrastructure solution for a small or mid-size datacenter, it is important to weigh these hidden costs against the cost of investing in solutions that offer automation and management that mitigate the need for expertise, support run-around, and after hours administration. Modern hyperconverged infrastructures like HC3 from Scale Computing offer simplicity, availability, and scalability to eliminate hidden infrastructure costs.


Flash: The Right Way at the Right Price

As much as I wish I could, I’m not going to go into detail on how flash is implemented in HC3 because, frankly, the I/O heat mapping we use to move data between flash SSD and spinning HDD tiers is highly intelligent and probably more complex than what I can fit in a reasonable blog post. However, I will tell you why the way we implement flash is the right way and how we are able to offer it at an affordable price.

(Don’t worry, you can read about how our flash is implemented in detail in our Theory of Operations by clicking here.)

First, we are implementing flash within the simplicity of our cluster-wide storage pool, so the tasks of deploying a cluster, adding a cluster node, or creating a VM are just as simple as always. The real difference you will notice is the performance improvement. You would see the benefits of our flash storage even if you didn’t know it was there. Our storage architecture already provided direct block access to physical storage from each VM without inefficient protocol overhead, and our flash implementation uses this same architecture.

Second, we are not implementing flash storage as a cache like other solutions do. Many solutions require flash as a storage cache to make up for the deficiencies of their inefficient storage architectures and I/O pathing. With HC3, flash is implemented as a storage tier within the storage pool and adds to the overall storage capacity. We created our own enhanced, automated tiering technology to manage data across both SSD and HDD tiers, retaining the simplicity of the storage pool with the high performance of flash for the hottest blocks.

Finally, we are implementing flash with the most affordable high-performing SSD hardware we can find in our already affordable HC3 cluster nodes. Our focus on the SMB market makes us hypersensitive to the budget needs of small and midsize datacenters, and we are committed to providing the best products possible for your budget. This focus on the SMB is why we are not just slapping together solutions from multiple vendors into a chassis and calling it hyperconvergence; instead, we have developed our own operating system, our own storage system, and our own management interface, because small datacenters deserve solutions designed specifically for their needs.

Hopefully, I have helped you understand just how we are able to announce our HC1150 cluster starting at $24,500* for 3 nodes, delivering world class hyperconvergence with the simplicity of single server management and the high performance of hybrid flash storage. It wasn’t easy but we believe in doing it the right way for SMB.

Click here for the official press release.

*After discounts from qualified partners.

Disaster Recovery Made Easy… as a Service!

You probably already know about the built-in VM-level replication in your HC3 cluster, and you may have already weighed some options on deploying a cluster for disaster recovery (DR). It is my pleasure to announce a new option: ScaleCare Remote Recovery Service!

What is Remote Recovery Service, and why should you care? Well, simply put, it is secure remote replication to a secure datacenter, with failover and failback when you need it. You don’t need a co-lo, a second cluster, or any software agents. You only need your HC3 cluster, some bandwidth, and the ability to create a VPN to use this service.

This service is hosted in a secure SSAE 16 SOC 2 certified and PCI compliant datacenter and is available at a low monthly cost to protect your critical workloads from potential disaster. Once you have the proper VPN and bandwidth squared away, setting up replication could hardly be easier. You simply add in the network information for the remote HC3 cluster at LightBound, and a few clicks later you are replicating. HyperCore adds an additional layer of SSH encryption to secure your data across your VPN.


I should also mention that you can customize your replication schedule with granularity ranging from every 5 minutes to every hour, day, week, or even month. You can combine schedule rules to make the schedule as simple or complex as you need to meet your SLAs. Choose an RPO of 5 minutes and fail over within minutes if you need to, or any other model that meets your needs. Not only are you replicating the VM but all of its snapshots, so you have all your point-in-time recovery options after failover. Did I mention you will get a complete DR runbook to help plan your entire DR process?

We know DR is important to you and your customers, both internal and external. In fact, it could be the difference between the life and death of your business or organization. Keep your workloads protected with a service that is designed specifically for HC3 customers and HC3 workloads.

Remote Recovery Service is not free but it starts as low as $100/month per VM. Contact Scale to find out how you can fit DR into your budget without having to build out and manage your own DR site.

4 Things You Lose with Scale Computing HC3

Choosing to convert to hyperconvergence is a big decision and it is important to carefully consider the implications.  For a small or midsize datacenter, these considerations are even more critical.  Here are 4 important things that you lose when switching to Scale Computing HC3 hyperconvergence.

1.  Management Consoles

When you implement an HC3 cluster, you no longer have multiple consoles to manage separate server, storage, and virtualization solutions. You are reduced to a single console from which to manage the infrastructure and perform all virtualization tasks, and only one view to see all cluster nodes, VMs, and storage and compute resources. Only one console! Can you even imagine not having to manage storage subsystems in a separate console to make the whole thing work? (Note: You may also begin losing vendor specific knowledge of storage subsystems as all storage is managed as a single storage pool alongside the hypervisor.)

2. Nights and Weekends in the Datacenter

Those many nights and weekends you’ve become accustomed to working, spent performing firmware, software, or even hardware updates to your infrastructure, will be lost. You don’t have to take workloads offline with HC3 to perform infrastructure updates so you will just do these during regular hours. No more endless cups of coffee along with the whir of cooling fans to keep you awake on those late nights in the server rooms. Your relationship with the nightly cleaning staff at the office will undoubtedly suffer unless you can find application layer projects to replace the nights and weekends you used to spend on infrastructure.

3. Hypervisor Licensing

You’ll no doubt feel this loss even during the evaluation and purchase of a new HC3 cluster. There just isn’t any hypervisor licensing to be found, because the entire hypervisor stack is included without any 3rd-party licensing required. There are no license keys, no licensing details, no licensing prices or options. The hypervisor is just there. Some other hyperconvergence vendors still come with hypervisor licensing, but it just won’t be found at Scale Computing.

4. Support Engineers

You’ve spent many hours developing close relationships with a circle of support engineers from your various server, storage, and hypervisor vendors over months and years but those relationships simply can’t continue.  No, you will only be contacting Scale Computing for all of your server, storage, virtualization, and even DR needs.  You’ll no doubt miss the many calls and hours of finger pointing spent with your former vendor support engineers to troubleshoot even the simplest issues.

Change invariably leads to loss.  We all deal with loss of datacenter complexity in our own way, but be assured that ScaleCare support engineers are on call to help you deal with your transition to a new HC3 cluster 24/7/365.  My thoughts are with you.

New and Improved! – Real-Time Per VM Statistics

When we designed HC3 clusters, we made them fault-tolerant and highly available so that you did not need to sit around all day staring at the HC3 web interface in case something went wrong.  We designed HC3 so you could rest easy knowing your workloads were on a reliable infrastructure that didn’t need a babysitter.  But still, when you need to manage your VM workloads on HC3, you need fast reliable data to make management decisions.  That’s why we have implemented some new statistics along with our new storage features.

If you haven’t already heard the news (click here), we have integrated SSD flash storage into our already hyper-efficient software-defined storage layer. We knew this would make you even more curious about your per-VM IOPS, so we added that statistic both cluster-wide and per VM, refreshed continuously in real time.

Up until now, you have been used to at-a-glance monitoring of CPU utilization, RAM utilization, and storage utilization for the cluster; now you will see the cluster-wide IOPS statistic right alongside what you were already seeing. For individual VMs, you will now see real-time statistics for both storage utilization and IOPS, right on the main web interface view.

Why are we doing this now? The new flash storage integration and automated tiering architecture let you tune the priority of flash utilization on the individual virtual disks in your VMs. Monitoring IOPS for each VM will guide you as you tune those virtual disks for maximum performance. You’ll not only see the benefits of flash storage more clearly in the web interface, but also the benefits of tuning specific workloads to make the best use of the flash in your cluster.
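If you’re curious what a real-time IOPS figure actually represents, it is simple arithmetic: the change in a cumulative I/O-operation counter divided by the sampling interval. The sketch below is generic monitoring math, not HC3 internals; the function name and sample values are illustrative.

```python
def iops(prev_ops, curr_ops, interval_seconds):
    """Compute IOPS from two samples of a cumulative I/O-operation counter."""
    return (curr_ops - prev_ops) / interval_seconds

# Two samples of a VM's cumulative I/O counter, taken 5 seconds apart:
print(iops(prev_ops=120_000, curr_ops=126_500, interval_seconds=5))  # 1300.0
```

A dashboard that refreshes "continuously in real time" is effectively recomputing this quotient on every sampling tick.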

Take advantage of these new statistics when you update your HyperCore software and you’ll see the benefit of monitoring your storage utilization more granularly.  Talk to your ScaleCare support engineers to learn how to get this latest update.

Turning Hyperconvergence up to 11

People seem to be asking me a lot lately about incorporating flash into their storage architecture. You probably already know that flash storage is still a lot more expensive than spinning disk. You are probably not going to need flash I/O performance for all of your workloads, so you don’t need to pay for an all-flash storage system. That is where hybrid storage comes in.

Hybrid storage solutions featuring a combination of solid state drives and spinning disks are not new to the market, but because of the cost per GB of flash compared to spinning disk, adoption has been low for most workloads. Small and midsize businesses, in particular, may not know whether implementing a hybrid storage solution is right for them.

Hyperconverged infrastructure also provides the best of both worlds in terms of combining virtualization with storage and compute resources. How hyperconvergence is defined as an architecture is still up for debate and you will see various implementations with more traditional storage, and those with truly integrated storage. Either way, hyperconverged infrastructure has begun making flash storage more ubiquitous throughout the datacenter and HC3 hyperconverged clustering from Scale Computing is now making it even more accessible with our HEAT technology.

HEAT is HyperCore Enhanced Automated Tiering, the latest addition to the HyperCore hyperconvergence architecture. HEAT combines intelligent I/O mapping with the redundant, wide-striping storage pool in HyperCore to provide high levels of I/O performance, redundancy, and resiliency across both spinning and solid state disks. Individual virtual disks in the storage pool can be tuned for relative flash prioritization to optimize the data workloads on those disks. The intelligent HEAT I/O mapping makes the most efficient use of flash storage for the virtual disk following the guidelines of the flash prioritization configured by the administrator on a scale of 0-11. You read that right. Our flash prioritization goes to 11.
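To make the idea of priority-weighted tiering concrete, here is a deliberately simplified sketch. This is not Scale’s actual heat-mapping algorithm (which, as noted, is far more sophisticated); it is a hypothetical model where each virtual disk carries a 0–11 flash priority, block “heat” (recent I/O activity) is scaled by that priority, and the hottest weighted blocks land on SSD up to its capacity.

```python
# Hypothetical model of priority-weighted tiering (illustrative only):
# each block belongs to a virtual disk with a flash priority of 0-11.
def place_blocks(blocks, ssd_capacity):
    """blocks: list of (disk_priority, io_count, block_id) tuples.
    Returns the set of block ids placed on the SSD tier."""
    scored = [
        (priority * io_count, block_id)
        for priority, io_count, block_id in blocks
        if priority > 0               # priority 0 opts a disk out of flash
    ]
    scored.sort(reverse=True)         # hottest weighted blocks first
    return {block_id for _, block_id in scored[:ssd_capacity]}

blocks = [
    (11, 50, "db-log"),    # maximum priority, moderately hot
    (4, 200, "file-srv"),  # middling priority, very hot
    (0, 500, "backup"),    # hot, but opted out of flash entirely
    (11, 10, "db-data"),   # maximum priority, mostly idle
]
# With room for two blocks, the two hottest weighted blocks win:
print(place_blocks(blocks, ssd_capacity=2))
```

Note how the very hot “backup” disk never touches flash because its priority is 0, while cranking a disk to 11 amplifies even modest activity; that is the intuition behind per-virtual-disk tuning.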


HyperCore gives you high-performing storage on both spinning-disk-only and hybrid tiered storage, because it is designed to let each virtual disk take advantage of the speed and capacity of the whole storage infrastructure. The more resources that are added to the cluster, the better the performance. HEAT takes that performance to the next level by giving you fine-tuning options for not only every workload, but every virtual disk in your cluster. Oh, and I should mention it comes at a lower price than other hyperconverged solutions.

Watch this short video demo of HC3 HEAT:

If you still don’t know whether you need to start taking advantage of flash storage for your workloads, Scale Computing can help with free capacity planning tools to see if your I/O needs require flash or whether spinning disks still suffice under advanced, software-defined storage pooling. That is one of the advantages of a hyperconvergence solution like HC3; the guys at Scale Computing have already validated the infrastructure and provide the expertise and guidance you need.

New and Improved! – Snapshot Scheduling

Scale Computing is rolling out a new scheduling mechanism for HC3 VM snapshots, and it is something to take note of. Scheduling snapshots is nothing new, but it is often a burdensome task to either create custom snapshot schedules for VMs or be restricted by canned schedules that don’t really fit. Luckily, Scale has created a scheduling mechanism that is extremely flexible and easy to use at the same time, and I can honestly say I love this new feature.

The first thing I love about the new snapshot scheduling is that all schedules are template-based, meaning once you create a schedule, it can quickly and easily be applied to other VMs.  You don’t have to waste time recreating the schedule on each VM if one schedule will work for many VMs.  Just create the schedule once and apply at will.  The schedules are defined by both the snapshot intervals and the retention period for those scheduled snapshots.

The second thing I love about the snapshot scheduling in HC3 is that you can build the schedule with multiple simple recurrence rules.  This might sound like an unnecessary redundancy but what it provides is the ability to mix and match various rule formulas without making the rules overly complex.  You can add as many or as few rules to a schedule as needed to meet SLAs.

For example, you might want a snapshot every 2 hours for 24 hours and also a snapshot every day for 7 days.  Instead of mashing these together into a singularly confusing rule, they exist as two simple rules: A: Every 2 hours for 24 hours and B: Every 1 day for 7 days.  The granularity for the scheduling rules ranges from minutes to months to provide maximum flexibility when defining schedules to meet any needs.
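The example above can be modeled in a few lines. This is a hypothetical sketch of how interval-plus-retention rules combine, not the actual HC3 schedule format; the rule structure and function name are my own for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical model of schedule rules: each rule pairs a snapshot
# interval with a retention window (not the real HC3 API).
rules = [
    {"interval": timedelta(hours=2), "retention": timedelta(hours=24)},  # Rule A
    {"interval": timedelta(days=1),  "retention": timedelta(days=7)},    # Rule B
]

def snapshots_retained(rules, now):
    """Return the distinct snapshot timestamps currently retained."""
    kept = set()
    for rule in rules:
        t = now
        while now - t < rule["retention"]:
            kept.add(t)
            t -= rule["interval"]
    return sorted(kept)

now = datetime(2016, 7, 13, 12, 0)
kept = snapshots_retained(rules, now)
# Rule A retains 12 two-hourly snapshots; Rule B retains 7 daily ones,
# one of which (the "now" snapshot) overlaps with Rule A.
print(len(kept))  # 18
```

Because the rules stay independent, each one remains easy to read on its own, yet together they express a retention policy that would be confusing as a single mashed-together formula.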

What makes this scheduling feature even more useful is that it is directly tied into replication between clusters.  Replication between HC3 clusters is snapshot-based and snapshot schedules determine when data is replicated.  Replication can be scheduled as often as every 5 minutes or to whatever other schedule meets your disaster recovery SLAs for cluster-to-cluster failover.  This gives you nearly unlimited options for sizing replication traffic and frequency to meet your needs.

Watch this short video to see snapshot scheduling in action.

With the flexibility of this new scheduling mechanism, you will most likely be managing snapshots with just a short list of simple schedules that you can apply to your VMs quickly and quietly with no disruption to VM availability.  Snapshot scheduling is available now so check with ScaleCare support engineers for the update.
