All posts by David Paquette

It’s Not Easy Being Different


Thinking outside the box. Paradigm shift. Innovation. As tired as these words and phrases are, they are ideas we still strive to embody in technology development. But what happens when people or companies actually embrace these ideas in principle and create solutions that don’t fit the industry molds?

At Scale Computing, we challenged the traditional thinking on infrastructure design and architecture by putting forward a solution based on the idea that IT infrastructure should be so simple that anyone could manage it, with any level of experience. This idea goes against decades of IT managers being put through weeks of training and certification to manage servers, storage, and most recently, virtualization. Our idea is that infrastructure should not need multiple control panels filled with nerd knobs that must be monitored ad nauseam, but that the expertise of IT administrators should be focused on applications and business processes.

Our solution, the HC3 virtualization platform, targets midmarket customers because that is where the complexity of infrastructure architecture hits administrators the hardest. In larger enterprise IT departments, there can be multiple teams dedicated to the different silos of infrastructure maintenance and management, but in the midmarket there may be as few as one administrator in charge of infrastructure. Our founders are former IT administrators who understand the challenges and pains of managing infrastructure.

Very simply, we want to eliminate complexity from IT infrastructure where it causes the most disruption. Complexity adds a number of costs, including downtime due to failure, training and consulting fees for expertise, and extra labor costs for implementation, management, and maintenance. In order to maximize the benefit of simplicity for our target market, we specifically designed our solution without the extra complexity required by larger enterprise organizations. The result is a simple hardware and virtualization platform that can be implemented, with VMs running, in under an hour; is fully redundant and resilient against hardware failures; virtually eliminates both planned and unplanned downtime; and can be scaled out quickly and easily. Basically, you get a nearly turnkey infrastructure that you rarely have to manage.

This whole idea seems very straightforward to us here at Scale Computing, and our customers certainly seem to get it. As for the rest of the industry, including the analysts, our focus on the midmarket seems to be viewed as a liability. We purposefully do not have all of the features and complexity of solutions that target larger enterprise customers. The HC3 platform does not scale out to the same level as solutions aimed at the larger enterprise. We do not support VMware, with the specific goal of not burdening our customers with third-party hypervisor licensing. We include the hypervisor at no additional charge.

We are different for a reason, but to the analysts those differences do not line up with the checklists they have developed over years of looking at enterprise solutions. Because their view of IT infrastructure is enterprise-focused, the simplicity of HC3 does not register with analysts as visionary or forward thinking. We’ll probably never find favor in analyst reviews unless we sell into a Fortune 500 customer, and that just isn’t on our horizon. Instead we’ll keep focusing on the solution we provide for the midmarket to simplify infrastructure, disaster recovery, VDI, and distributed enterprise.

Are we an underdog in our market?  Maybe.  But you could probably say the same about the companies we target with our solutions. They aren’t the industry giants, but rather the smaller guys who are driving local economic growth, who are nimble and innovative like we are, and who appreciate the value of a solution that actually solves the real problems that have plagued IT departments for decades.  It’s not easy being different but no one said starting an IT revolution would be easy.


Back to School – Infrastructure 101 – Part 3

This is my third and final post in this series. I’ve covered SAN and server virtualization and now I’d like to share my thoughts on the challenges of SMB IT shops vs enterprise IT.

To start, I should probably give some context on the size of an SMB IT shop. Since we are talking about infrastructure, I am really referring to IT departments that have fewer than a handful of administrators assigned to infrastructure, with the most common IT shop allocating only one or two people to infrastructure. Since the makeup of businesses varies so much in terms of numbers of IT users, external services, and so on, the lines do get a little blurred. It is not a perfect science, but here’s hoping my points will be clear enough.

SMB

Small and medium businesses, sometimes referred to as small and midmarket, have some unique challenges compared to larger enterprise customers. One of those challenges is being a jack of all trades, master of none. Now, there are some very talented and dedicated administrators out there who can master many aspects of IT over time, but often the day-to-day tasks of keeping the IT ship afloat make it impossible for administrators to gain expertise in any particular area. There just isn’t the budget or the training time to have enough expertise on staff. Without a large team of people who bring together many types of expertise, administrators must rely on technology solutions that help them do more with less.

Complexity is the enemy of the small IT department during all phases of the solution lifecycle including implementation, management, and maintenance. Complex solutions that combine a number of different vendors and products can be more easily managed in the enterprise but become a burden on smaller IT shops that must stretch their limited knowledge and headcount. Projects then turn into long nights and weekends and administrators are still expected to manage normal business hour tasks. Some administrators use scripting to automate much of their IT management and end up with a highly customized environment that becomes hard to migrate away from when business needs evolve.

Then there is the issue of brain drain. Smaller IT shops cannot easily absorb the loss of key administrators who may be the only ones intimately familiar with how all of the systems interconnect and operate. When those administrators leave, sometimes suddenly, they leave a huge gap in knowledge that cannot easily be filled. This is much less of a problem in the enterprise, where an individual administrator is one of a team and has many others who can fill in that gap. The loss of a key administrator in the SMB can be devastating to IT operations going forward.

To combat brain drain, SMB IT shops benefit from having fewer vendors and products in the environment, which requires less specialized training and lets a new administrator come up to speed quickly on the technology in use. High levels of automation built into the vendor solution for common IT tasks, along with simple, unified management tools, help the transition from one administrator to the next.

For SMB, budgets can vary wildly from shoestring on up.  The idea of doing more with less is much more on the minds of SMB administrators.  SMBs are not as resilient to unexpected costs associated with IT disasters and other types of unexpected downtime. Support is one of the first lines of insurance for SMBs and dealing with multiple vendors and support run-around can be paralyzing at those critical moments, especially for SMBs who could not budget for the higher levels of support.  Having resilient, reliable infrastructure with responsive, premium support can make a huge difference in protecting SMBs from various types of failure and disaster that could be critical to business success.

OK, enough about the SMB. Time to discuss the big guys.

Enterprise

Both SMB and enterprise organizations have processes, although the reliance on process is much higher in the enterprise. An SMB organization can typically adapt its processes easily and quickly to match technology, whereas an enterprise organization is often much more fixed in its processes, and technology must be changed to match them. The enterprise therefore employs a large number of administrators, developers, consultants, and other experts to create complex systems to support its business processes.

The enterprise can withstand more complexity because they are able to have more experts on staff who can focus management efforts on single silos of infrastructure such as storage, servers, virtualization, security, etc.  With multiple administrators assigned to each silo, there is guaranteed management coverage to deal with any unexpected problems.  Effectively, the IT department (or departments) in the enterprise have a high combined level of expertise and manpower, or have the budget to bring in outside consultants and service providers to fill these gaps as a standard practice.

Unlike with SMB, simplicity is not necessarily a benefit to the enterprise, since the enterprise needs the flexibility to adapt to business process. Infrastructure can therefore be a patchwork of systems serving different needs such as high performance computing, data warehousing, data distribution, and disaster recovery. Solutions for these enterprise operations must be extensible and adaptable to the user process to meet the compliance and business needs of these organizations.

Enterprise organizations are usually big enough that they can tolerate different types of failures better than SMB, although as we have seen in recent news, even companies like Delta Airlines are not immune to near catastrophic failures.  Still, disk failures or server failures that could bring an SMB to a standstill might barely cause a ripple in a large enterprise given the size of their operations.

Summary

The SMB benefits from infrastructure simplicity because it helps eliminate a number of challenges and unplanned costs.  For the enterprise, the focus is more on flexibility, adaptability, and extensibility where business processes reign supreme. IT challenges can be more acute in the SMB simply because the budgets and resources are more limited in both headcount and expertise. Complex infrastructure designed for the enterprise is not always going to translate into effective or viable solutions for SMB. Solution providers need to be aware that the SMB may need more than just a scaled down version of an enterprise solution.


Back to School – Infrastructure 101 – Part 2

I covered SAN technology in my last Infrastructure 101 post, so for today I’m going to cover server virtualization and maybe delve into containers and cloud.

Server virtualization as we know it now is based on hypervisor technology. A hypervisor is an operating system that allows sharing of physical computing resources such as networking, CPU, RAM, and storage among multiple virtual machines (sometimes called virtual servers). Virtual machines replaced traditional physical servers that each had their own physical chassis with storage, RAM, networking, and CPU. To understand the importance of hypervisors, let’s look at a bit of history.

Early on, computing was primarily done on mainframes, which were monolithic machines designed to provide all of the computing necessary for an organization. They were designed to share resources among various parallel processes to accommodate multiple users. As computing needs grew, organizations began to move away from the monolithic architecture of the mainframe to hosting multiple physical servers that were less expensive and that would run one or more applications for multiple users. Physical servers could range in size and capacity from very large, rivaling mainframes, down to very small, resembling personal computers.

While mainframes never disappeared completely, the flexibility in cost and capacity of physical servers made them an infrastructure of choice across all industries. Unfortunately, as computing needs continued to grow, organizations began needing more and more servers, and more administrators to manage them. The size of server rooms, along with their power and cooling needs, was honestly becoming ridiculous.

A number of technologies emerged that resembled what we now call server virtualization, allowing the compute and storage resources of a single physical box to be divided among different virtualized servers, but they never became mainstream. Virtualization didn’t really take off until hypervisor technology for the x86 platform came around, which happened at the same time as other platforms were declining in the server market.

Initially, virtualization was not adopted for production servers but instead was used extensively for testing and development, because it lacked some of the performance and stability needed for production. The widespread use for test and dev eventually led to improvements that made administrators confident with its use on production servers. The combination of performance improvements and clustering to provide high availability for virtual machines opened the door to widespread adoption for production servers.

The transition to virtualization was dramatic, reducing server rooms that once housed dozens and dozens of server racks to only a handful of server racks for the host servers and storage on which all of the same workloads ran. It is now difficult to find an IT shop that is still using physical servers as their primary infrastructure.

While there were many hypervisors battling to become the de facto solution, a number of hypervisors were adopted, including Xen and KVM (both open source), Hyper-V, and VMware ESX/ESXi, which took the lion’s share of the market. Those hypervisors or their derivatives continue to battle for marketshare today, after more than a decade. Cloud platforms have risen, built over each of these hypervisors, adding to the mystery of whether a de facto hypervisor will emerge. But maybe it no longer matters.

Virtualization has now become a commodity technology. It may not seem so to VMware customers who are still weighing various licensing options, but server virtualization is pretty well baked and the innovations have shifted to hyperconvergence, cloud, and container technologies. The differences between hypervisors are few enough that the buying decisions are often based more on price and support than technology at this point.

This commoditization of server virtualization does not necessarily indicate any kind of decline in virtualization anytime soon, but rather a shift in thinking from traditional virtualization architectures. While cloud is driving innovation in multi-tenancy and self-service, hyperconvergence is fueling innovation in how hardware and storage can be designed and used more efficiently by virtual machines (as per my previous post about storage technologies).

IT departments are beginning to wonder whether the baggage of training and management infrastructure for server virtualization is still a requirement or whether, as a commodity, server virtualization should no longer be so complex. Is being a virtualization expert still a badge of honor, or is it now a default expectation for IT administrators? And with hyperconvergence and cloud technologies simplifying virtual machine management, what level of expertise is really still required?

I think the main takeaway from the commoditization of server virtualization is that as you move to hyperconvergence and cloud platforms, you shouldn’t need to know what the underlying hypervisor is, nor should you care, and you definitely shouldn’t have to worry about licensing it separately. They say you don’t understand something unless you can explain it to a 5 year old. It is time for server virtualization to be easy enough that a 5 year old can provision virtual machines instead of requiring a full-time, certified virtualization expert. Or maybe even a 4 year old.
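
To put that in perspective, here is a minimal sketch of what provisioning a VM looks like at the hypervisor level, assuming a Linux host running KVM and the libvirt-python bindings. The VM name and the stripped-down domain XML are purely illustrative (a real VM would also need disk and network devices); platforms like HC3 hide even this much plumbing from the administrator.

```python
# Minimal sketch: provision and list VMs on a KVM host via libvirt-python.
# The domain XML is deliberately bare; real VMs also need disk and NIC devices.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local KVM hypervisor
dom = conn.defineXML(DOMAIN_XML)        # register the VM definition
dom.create()                            # power it on

# List every VM the hypervisor knows about, with its vCPU and memory share.
for d in conn.listAllDomains():
    state, max_mem_kib, _, vcpus, _ = d.info()
    print(f"{d.name()}: {vcpus} vCPU, {max_mem_kib // 1024} MiB")

conn.close()
```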


HEAT Up I/O with a Flash Retrofit

If your HC3 workloads need better performance and faster I/O, you can soon take advantage of flash storage without having to replace your existing cluster nodes. Scale Computing is rolling out a service to help you retrofit your existing HC2000/2100 or HC4000/4100 nodes with flash solid state drives (SSDs) and update your HyperCore version to start using hybrid flash storage without any downtime. You can get the full benefits of HyperCore Enhanced Automated Tiering (HEAT) in HyperCore v7 when you retrofit with flash drives.

You can read more about HEAT technology in my blog post, Turning Hyperconvergence to 11.

Now, before you start ordering your new SSD drives for flash storage retrofit, let’s talk about the new storage architecture designed to include flash. You may already be wondering how much flash storage you need and how it can be divided among the workloads that need it, or even how it will affect your future plans to scale out with more HC3 nodes.

The HC3 storage system uses wide striping across all nodes in the cluster to provide maximum performance and availability in the form of redundancy across nodes.  With all spinning disks, any disk was a candidate for redundant writes from other nodes.  With the addition of flash, redundancy is intelligently segregated between flash and spinning disk storage to maximize flash performance.   

A write to a spinning disk will be redundantly written to a spinning disk on another node, and a write to an SSD will be redundantly written to an SSD on another node. Therefore, just as you need at least three nodes of storage and compute resources in an HC3 cluster, you need a minimum of three nodes with SSD drives to take advantage of flash storage.
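
As an illustration only (this is not HyperCore code, and the node names and tiers are made up), the tier-aware placement rule described above can be sketched like this: a replica always lands on a different node than the primary copy, but on the same storage tier.

```python
# Illustrative sketch of tier-aware redundancy: the replica goes to another
# node that offers the same tier (SSD to SSD, spinning disk to spinning disk).
import random

cluster = {
    "node1": {"ssd": True,  "hdd": True},
    "node2": {"ssd": True,  "hdd": True},
    "node3": {"ssd": True,  "hdd": True},
    "node4": {"ssd": False, "hdd": True},   # a node without flash
}

def place_replica(primary_node: str, tier: str) -> str:
    """Pick a node, other than the primary's, that offers the same tier."""
    candidates = [name for name, tiers in cluster.items()
                  if name != primary_node and tiers[tier]]
    if not candidates:
        raise RuntimeError(f"no other node offers the {tier} tier")
    return random.choice(candidates)

print(place_replica("node1", "ssd"))   # some other node with flash
print(place_replica("node4", "hdd"))   # spinning disk mirrored to spinning disk
```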

Consider also, with retrofitting, that you will be replacing an existing spinning disk drive with the new SSD. The new SSD may be of a different capacity than the disk it replaces, which might affect your overall storage pool capacity. You may already be in a position to add overall capacity, in which case larger SSD drives may be the right fit, or adding an additional flash storage node along with the retrofit may be the right choice. You can reach the three-node minimum of SSD nodes through any combination of retrofitting and adding new SSD-tiered nodes to the cluster.

Retrofitting existing clusters is offered as a service, with Scale Computing experts helping you assess your storage needs and determine the best plan for incorporating flash into your existing HC3 cluster. Whether you have a small, medium, or large cluster implementation, we will assist in both planning and implementation to avoid any downtime or disruption.

However you decide to retrofit and implement flash storage in your HC3 cluster, you will immediately begin seeing the benefits as new data is written to high performing flash and high I/O blocks from spinning disk are intelligently moved to flash storage for better performance. Furthermore, you have full control of how SSD is used on a per virtual disk basis. You’ll be able to adjust the level of SSD usage on a sliding scale to take advantage of both flash and spinning disk storage where you need each most. It’s the flash storage solution you’ve been waiting for.

Don’t hesitate to contact your Scale Computing representatives to ask for more information on HC3 flash storage today.


Back to School – Infrastructure 101

As a back to school theme, I thought I’d share my thoughts on infrastructure over a series of posts.  Today’s topic is SAN.

Storage Area Networking (SAN) is a technology that solved a real problem that existed a couple decades ago. SANs have been a foundational piece of IT infrastructure architecture for a long time and have helped drive major innovations in storage.  But how relevant are SANs today in the age of software-defined datacenters? Let’s talk about how we have arrived at modern storage architecture.

First, disk arrays were created to house more storage than could fit into a single server chassis. Storage needs were outpacing the capacity of individual disks and the limited disk slots available in servers. But adding more disk to a single server led to another issue: available storage capacity was trapped within each server. If Server A needed more storage and Server B had a surplus, the only way to redistribute was to physically remove a disk from Server B and add it to Server A. This was not always easy, because it might mean breaking up a RAID configuration, or there simply might not be controller capacity for the disk on Server A. It usually meant ending up with a lot of over-provisioned storage, ballooning the budget.

SANs solved this problem by making a pool of storage accessible to servers across a network. It was revolutionary because it allowed LUNs to be created and assigned more or less at will to servers across the network. The network was fibre channel in the beginning because ethernet LAN speeds were not quite up to snuff for disk I/O. It was expensive and you needed fibre channel cards in each server you needed connected to the SAN, but it still changed the way storage was planned in datacenters.

Alongside SAN, you had Network Attached Storage (NAS) which had even more flexibility than SAN but lacked the full storage protocol capabilities of SAN or Direct Attached Storage.  Still, NAS rose as a file sharing solution alongside SAN because it was less expensive and used ethernet.

The next major innovation was iSCSI, which debuted before its time. The iSCSI protocol allowed SANs to be used over standard ethernet connections. Unfortunately, ethernet networks took a little longer to become fast enough for iSCSI to take off, but eventually it started to replace fibre channel networks for SAN as 1Gb and 10Gb networks became accessible. With iSCSI, SANs became even more accessible to all IT shops.
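
For a sense of how simple SAN over ethernet became, here is a sketch of an iSCSI initiator attaching a LUN using the standard Linux open-iscsi tools; the portal address and target IQN are placeholders, not a real system.

```python
# Sketch: discover and log in to an iSCSI target with open-iscsi (iscsiadm).
import subprocess

PORTAL = "192.168.1.50:3260"                      # SAN iSCSI portal (placeholder)
TARGET = "iqn.2016-08.com.example:storage.lun1"   # target IQN (placeholder)

# Ask the portal which targets it exposes.
subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
    check=True,
)

# Log in to the target; the LUN then shows up as a local block device
# (e.g. /dev/sdX) that can be partitioned like any direct attached disk.
subprocess.run(
    ["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"],
    check=True,
)
```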

The next hurdle for SAN technology was self-inflicted. The problem was that an administrator might now be managing two or more SANs on top of NAS and server-side Direct Attached Storage (DAS), and these different components did not necessarily play well together. There were so many SAN and NAS vendors using proprietary protocols and management tools that storage was once again a burden on IT. Then along came virtualization.

The next innovation was virtual SAN technology. There were two virtualization paths that affected SANs. One path was trying to solve the storage management problem I had just mentioned, and the other path was trying to virtualize the SAN within hypervisors for server virtualization. These paths eventually crossed as virtualization became the standard.

Virtual SAN technology initially grew from outside SAN, not within, because SAN was big business and virtual SAN technology threatened traditional SAN. When it came to server virtualization, though, virtualizing storage was a do-or-die imperative for SAN vendors. Outside of the SAN vendors, software solutions saw the possibility of using iSCSI protocols to place a layer of virtualization over SAN, NAS, and DAS and create a single, virtual pool of storage. This was a huge step forward in storage accessibility, but it came at the cost of purchasing the virtual SAN technology on top of the existing SAN infrastructure, and at a cost in efficiency, because it effectively added one or, in some cases, multiple additional layers of I/O management and protocols to what already existed.

When SANs (and NAS) were integrated into server virtualization, it was primarily done with Virtual Storage Appliances (VSAs): virtual servers running the virtual SAN software on top of the underlying SAN architecture. With at least one of these VSAs per virtual host, the virtual SAN architecture consumed a lot of compute resources in the virtual infrastructure.

So virtual SANs were a mess. If it hadn’t been for faster CPUs with more cores, cheaper RAM, and flash storage, virtual SANs would have been a non-starter based on I/O efficiency. Virtual SANs seemed to be the way things were going but what about that inefficiency?  We are now seeing some interesting advances in software-defined storage that provide the same types of storage pooling as virtual SANs but without all of the layers of protocol and I/O management that make it so inefficient.

With DAS, servers have direct access to the hardware layer of the storage, providing the most efficient I/O path short of raw storage access. The direct attached methodology can be, and is being, used for storage pooling by storage technologies like HC3 from Scale Computing. All of the baggage that virtual SANs carried over from traditional SAN architecture, and the multiple layers of protocol and management they added, need not exist in a software-defined storage architecture that doesn’t rely on old SAN technology.

SAN was once a brilliant solution to a real problem and had a good run of innovation and enabling the early stages of server virtualization. However, SAN is not the storage technology of the future and with the rise of hyperconvergence and cloud technologies, SAN is probably seeing its sunset on the horizon.


Don’t Double Down on Infrastructure – Scale Out as Needed

There has long been a philosophy in IT infrastructure that whenever you add capacity, you add plenty of room to grow into. This idea is based on traditional architecture that was complex, consisting of many disparate systems held together by the rigorous management of the administrators. The process of scaling out capacity has been a treacherous one that takes weeks or months of stress-filled nights and weekends. These projects are so undesirable that administrators, and anyone else involved, would rather spend more than they like on more capacity than they need, just to put off scaling out again for as long as possible.

There are a number of reasons why IT departments may need to scale out. Hopefully it is because of growth of the business which usually coincides with increased budgets.  It could be that business needs have shifted to require more IT services, demanding more data, more computing, and thus more capacity. It could be that the current infrastructure was under-provisioned in the first place and creating more problems than solutions. Whatever the case, sooner or later, everyone needs to scale out.

The traditional planning process for scaling out involves first looking at where the capacity is bottlenecking. It could be storage, CPU, RAM, networking, or any level of caching or bussing in between. More than likely it is not just one of these but several, which causes many organizations to simply hit the reset button and replace everything, if they can afford it, that is. Then they implement the new infrastructure only to go through the same process a few years down the line. Very costly. Very inefficient.

Without replacing the whole infrastructure, administrators must look to the various pieces of their infrastructure that might need to be refreshed or upgraded. This process can seem like navigating a minefield of unforeseen consequences. Maybe you want to swap out disks in the SAN for faster, larger disks. Can the storage controllers handle the increased speed and capacity? What about the network? Can it handle the increased I/O from faster and deeper storage? Can the CPUs handle it? Good administrators can identify at least some of these dependencies during planning, but it can often take a team of experts to fully understand the complexities, and even then sometimes only through testing and some trial and error.

Exhausted yet? Fortunately, this process of scaling out has been dramatically simplified with hyperconverged infrastructure.  With a clustered, appliance-based architecture, capacity can be added very quickly. For example, with HC3 from Scale Computing, a new appliance can be added to a cluster within minutes, with resources then immediately available, adding RAM, CPU, and storage capacity to the infrastructure.

HC3 even lets you mix and match different appliances in the cluster so that you can add just the capacity you need. Adding the new appliance to the cluster (where it is then called a “node”, of course) is as simple as racking and cabling it, assigning it network settings, and pointing it at the cluster. The capacity is automatically absorbed into the cluster, and the storage is added seamlessly to the overall storage pool.
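
Purely as an illustration (this is not the HC3 API, and the node names and capacities are invented), the idea is that a joining node’s resources are simply folded into the cluster-wide pools that VMs draw from.

```python
# Sketch: a new node's resources are absorbed into the cluster-wide pools.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cores: int
    ram_gb: int
    storage_tb: float

cluster = [
    Node("node1", 16, 128, 8.0),
    Node("node2", 16, 128, 8.0),
    Node("node3", 16, 128, 8.0),
]

def cluster_capacity(nodes):
    """Total CPU cores, RAM, and storage available to the cluster."""
    return (sum(n.cores for n in nodes),
            sum(n.ram_gb for n in nodes),
            sum(n.storage_tb for n in nodes))

print(cluster_capacity(cluster))               # capacity before scaling out
cluster.append(Node("node4", 24, 256, 12.0))   # a mix-and-match node joins
print(cluster_capacity(cluster))               # capacity immediately reflects it
```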

This all means that with hyperconverged infrastructure, you do not need to buy capacity for the future right now. You can get just what you need now (with a little cushion of course), and scale out simply and quickly when you need to in the future. The traditional complexity of infrastructure architecture is now the real bottleneck of capacity scale out.  Hyperconverged Infrastructure is the solution.


7 Reasons Why I Work at Scale Computing

I came from a background in software that has spanned software testing, systems engineering, product marketing, and product management, and this year my career journey brought me to Scale Computing as the Product Marketing Manager. During the few months I have been with Scale, I’ve been amazed by the hard work and innovation embodied in this organization.  Here are some of the reasons I joined Scale and why I love working here.

1 – Our Founding Mission

Our founders are former IT administrators who understand the challenges faced by IT departments with limited budgets and staff. They wanted to reinvent IT infrastructure to solve those challenges and get IT focused on applications. That’s why they helped coin the term “hyperconverged infrastructure”.

2 – Focus on the Administrator

Our product family, HC3, was designed from the start to address the needs of datacenters managed by as few as one administrator, combining the features and efficiency of enterprise solutions at a price for any budget. HC3 scales from small to enterprise because its roots are planted in the needs of the individual administrator focused on keeping applications available.

3 – Second to None Support

I firmly believe that good support is the cornerstone of successful IT solutions. Our world class support includes not only hardware replacement but also 24/7/365 phone support from qualified experts. We don’t offer any other level of support because we believe every customer, no matter their size or budget, deserves the same level of support.

4 – 1500+ Customers, 5500+ Installs

Starting in 2008 and bringing HC3 to market in 2012, we’ve sold to customers in nearly every industry, including manufacturing, education, government, healthcare, finance, hotel/restaurant, and more. Customer success is our driving force. Our solution is driving that success.

5 – Innovative Technology

We designed the HC3 solution from the ground up.  Starting with the strengths of open source KVM virtualization, we developed our own operating system called HyperCore which includes our own block access, direct attached storage system with SSD tiering for maximum storage efficiency. We believe that if it is worth doing then it is worth doing the right way.

6 – Simplicity, Scalability, and Availability

These core ideas keep us focused on reducing costs and management when it comes to deployment, software and firmware updates, capacity scaling, and minimizing planned and unplanned downtime.  I believe in our goal to minimize the cost and management footprint of infrastructure to free up resources for application management and service delivery in IT.

7 – Disaster Recovery, VDI, and Distributed Enterprise

HC3 is more than just a simple infrastructure solution. It is an infrastructure platform that supports multiple use cases including disaster recovery sites, virtual desktop infrastructure, and remote office and branch office infrastructure. I love that the flexibility of HC3 allows it to be used in nearly every type of industry.

Scale Computing is more than just an employer; it is a new passion for me. I hope you keep following my blog posts to learn more about the awesome things we are doing here at Scale and I hope we can help you bring your datacenter into the new hyperconvergence era. If you have any questions or feedback about my blog posts, hyperconvergence, or Scale Computing, you can contact me at dpaquette@scalecomputing.com.


Hyperconvergence for the Distributed Enterprise

IT departments face a variety of challenges but maybe none as challenging as managing multiple sites. Many organizations must provide IT services across dozens or even hundreds of small remote offices or facilities. One of the most common organizational structures for these distributed enterprises is a single large central datacenter where IT staff are located supporting multiple remote offices where personnel have little or no IT expertise.

These remote sites often need the same variety of application and data services needed in the central office, but on a smaller scale. To run these applications, these sites need multiple servers, storage solutions, and disaster recovery. With no IT staff on site, remote management is ideal to cut down on the productivity cost of frequently sending IT staff to remote sites to troubleshoot issues. This is where the turnkey appliance approach of hyperconvergence shines.

A hyperconverged infrastructure solution combines server, storage, and virtualization software into a single appliance that can be clustered for scalability and high availability. It eliminates the complexity of having disparate server hardware, storage hardware, and virtualization software from multiple vendors and having to try to replicate the complexity of that piecemeal solution at every site.  Hyperconverged infrastructure provides a simple repeatable infrastructure out of the box.  This approach makes it easy to scale out infrastructure at sites on demand from a single vendor.

At Scale Computing, we offer the HC3 solution that truly combines server, storage, virtualization, and even disaster recovery and high availability. We provide a large range of hardware configurations to support very small implementations all the way up to full enterprise datacenter infrastructure. Also, because any of these node configurations can be mixed and matched, you can very quickly scale the infrastructure at a site with extra capacity and/or compute power as you need it.

HC3 management is all web-based, so sites can easily be managed remotely. From provisioning new virtual machines to opening consoles for each VM for simple and direct management from the central datacenter, it’s all in the web browser. There is even a reverse SSH tunnel available for ScaleCare support to provide additional remote management of lower level software features in the hypervisor and storage system. Redundant hardware components and self-healing mean that hardware failures can be absorbed while applications remain available, until IT or local staff can replace the failed components.
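
For readers curious how a reverse tunnel works in general (this is a generic illustration, not the ScaleCare mechanism, and the relay hostname is a placeholder): the appliance dials out to a relay host, which can then reach back into the appliance over the forwarded port.

```python
# Sketch: open a generic reverse SSH tunnel from an appliance to a relay host.
import subprocess

subprocess.run([
    "ssh",
    "-N",                         # no remote command, just keep the tunnel open
    "-R", "2222:localhost:22",    # relay's port 2222 forwards back to this host's SSH
    "support@relay.example.com",  # placeholder relay host
])
```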

With HC3, replication is built in to provide disaster recovery and high availability back to the central datacenter in the event of entire site failure. Virtual machines and applications can be back up and running within minutes to allow remote connectivity from the remote site as needed. You can achieve both simplified infrastructure and remote high availability in a single solution from a single vendor. One back to pat or one throat to choke, as they say.

If you want to learn more about how hyperconvergence can make distributed enterprise simpler and easier, talk to one of our hyperconvergence experts.



4 Hidden Infrastructure Costs for the SMB

Infrastructure complexity is not unique to enterprise datacenters. Just because a business or organization is small does not mean it is exempt from the feature needs of big enterprise datacenters. Small and mid-size organizations require fault tolerance, high availability, mobility, and flexibility as much as anyone. Unfortunately, the complexity of traditional datacenter and virtualization architecture hits the SMB the hardest. Here are 4 of the hidden costs that can cripple the SMB IT budget.

1 – Training and Expertise

Setting up a standard virtualization infrastructure can be complex; it requires virtualization, networking, and storage expertise. In larger enterprises, expertise is often spread across dozens of admins through new hires, formal training, or consulting. In the SMB datacenter, however, with only a handful of admins or even just one, and with limited budgets, expertise can be harder to come by. Self-led training and research can take costly hours out of every week, and admins may only have time to achieve the minimum level of expertise needed to maintain an infrastructure, without the ability to optimize it. Lack of expertise affects infrastructure performance and stability, limiting the return on infrastructure investment.

2 – Support Run-Around

A standard virtualization infrastructure has components from a number of different vendors including the storage vendor, server vendor, and hypervisor vendor to name just the basics.  Problems arising in the infrastructure are not always easy to diagnose and with multiple vendors and vendor support centers in the mix, this can lead to a lot of finger pointing.  Admins can spend hours if not days calling various support engineers from different vendors to pinpoint the issue.  Long troubleshooting times can correspond to long outages and lost productivity because of vendor support run-around.

3 – Admin Burn-Out

The complexity of standard virtualization environments, with multiple vendor solutions and multiple layers of hardware and software, means longer nights and weekends performing maintenance tasks such as firmware updates, refreshing hardware, adding capacity, and dealing with outages caused by non-optimized architecture. Not to mention, admins of these complex architectures cannot detach long enough to enjoy personal time off because of the risk of an outage. Administrators who have to spend long nights and weekends dealing with infrastructure issues are not as productive in daily tasks and have less energy and focus for initiatives to improve process and performance.

4 – Brain Drain

Small IT shops are particularly susceptible to brain drain. The knowledge of all of the complex hardware configurations and application requirements is concentrated in a very small group, in some cases one administrator. While those individuals are around, there is no problem, but when one leaves, for whatever reason, a huge gap in knowledge is left behind that might never be filled. There can be huge costs involved in rebuilding that knowledge or redesigning systems to match the expertise of the remaining or replacement staff.

Although complexity has hidden costs for all small, medium, and enterprise datacenters, the complexity designed for the enterprise and inherited down into the SMB makes those costs more acute. When choosing an infrastructure solution for a small or mid-size datacenter, it is important to weigh these hidden costs against the cost of investing in solutions that offer automation and management that mitigate the need for expertise, support run-around, and after hours administration. Modern hyperconverged infrastructures like HC3 from Scale Computing offer simplicity, availability, and scalability to eliminate hidden infrastructure costs.


Flash: The Right Way at the Right Price

As much as I wish I could, I’m not going to go into detail on how flash is implemented in HC3 because, frankly, the I/O heat mapping we use to move data between flash SSD and spinning HDD tiers is highly intelligent and probably more complex than what I can fit in a reasonable blog post. However, I will tell you why the way we implement flash is the right way and how we are able to offer it at an affordable price.

(Don’t worry, you can read about how our flash is implemented in detail in our Theory of Operations by clicking here.)

First, we are implementing flash into the simplicity of our cluster-wide storage pool so that the tasks of deploying a cluster, a cluster node, or creating a VM are just as simple as always. The real difference you will notice will be the performance improvement. You will see the benefits of our flash storage even if you didn’t know it was there. Our storage architecture already provided the benefit of direct block access to physical storage from each VM without inefficient protocol and our flash implementation uses this same architecture.

Second, we are not implementing flash storage as a cache like other solutions. Many solutions require flash as a storage cache to make up for the deficiencies of their inefficient storage architectures and I/O pathing. With HC3, flash is implemented as a storage tier within the storage pool and adds to the overall storage capacity. We created our own enhanced, automated tiering technology to manage data across both SSD and HDD tiers, retaining the simplicity of the storage pool with the high performance of flash for the hottest blocks.
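
As a toy illustration only (the real HEAT heat mapping is far more sophisticated than this), tiering driven by an I/O heat map might look something like the following: blocks gain heat as they are read and written, heat decays over time, and the hottest blocks are promoted to the SSD tier while cold blocks settle back to spinning disk.

```python
# Toy heat-map tiering: hot blocks are promoted to SSD, cold blocks demoted to HDD.
DECAY = 0.9          # how quickly heat fades each interval
PROMOTE_AT = 5.0     # heat score above which a block moves to SSD
DEMOTE_AT = 1.0      # heat score below which a block moves back to HDD

heat = {}            # block id -> heat score
tier = {}            # block id -> "ssd" or "hdd"

def record_io(block: int, weight: float = 1.0) -> None:
    """Bump a block's heat score on every read or write."""
    heat[block] = heat.get(block, 0.0) + weight
    tier.setdefault(block, "hdd")

def rebalance() -> None:
    """Decay heat and move blocks between tiers based on their score."""
    for block in list(heat):
        heat[block] *= DECAY
        if tier[block] == "hdd" and heat[block] >= PROMOTE_AT:
            tier[block] = "ssd"      # hot block: promote to flash
        elif tier[block] == "ssd" and heat[block] <= DEMOTE_AT:
            tier[block] = "hdd"      # cold block: demote to spinning disk

# Simulate a frequently accessed block and a rarely accessed one.
for _ in range(10):
    record_io(block=42)
record_io(block=7)
rebalance()
print(tier)   # block 42 ends up on "ssd", block 7 stays on "hdd"
```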

Finally, we are implementing flash with the most affordable high performing SSD hardware we can find in our already affordable HC3 cluster nodes. Our focus on the SMB market makes us hypersensitive to the budget needs of small and midsize datacenters, and it is our commitment to provide the best products possible for your budgets. This focus on SMB is why we are not just slapping together solutions from multiple vendors into a chassis and calling it hyperconvergence. Instead, we have developed our own operating system, our own storage system, and our own management interface, because small datacenters deserve solutions designed specifically for their needs.

Hopefully, I have helped you understand just how we are able to announce our HC1150 cluster starting at $24,500* for 3 nodes, delivering world class hyperconvergence with the simplicity of single server management and the high performance of hybrid flash storage. It wasn’t easy but we believe in doing it the right way for SMB.

Click here for the official press release.

*After discounts from qualified partners.
