
My Experience at MES16

I recently had the pleasure of working my first trade show with Scale Computing at the Midsize Enterprise Summit (MES) in Austin, TX. I’ve worked and attended many trade shows in the past, but I was unsure of what to expect because A) I was with a new company and new coworkers and B) MES has a boardroom format I hadn’t seen before. Let me give you a preview of my summary: it was amazing.

As a premier sponsor of the event, we had the opportunity to present our solution to all 13 of the boardrooms at the show. MES is a show Scale Computing has attended regularly for years because of our midmarket focus. As we went from boardroom to boardroom, wheeling our live, running HC3 cluster, we encountered a great mix of attendees: ardent fans and friends, familiar faces, and new faces.

If you’ve been following along with Scale Computing, you know we’ve had a big year in terms of product releases and partnerships. We’ve rolled out, among other things, hybrid storage with flash tiering, DRaaS, and VDI with Workspot, and at MES we announced a new partnership with Information Builders for Scale Analytics. We were fortunate enough to have Ty Wang from Workspot with us to promote our joint VDI solution.


As Jeff Ready, our founder and CEO, presented in each boardroom, it was clear that the managers of midmarket IT were understanding our message. There was a definite sense that, like us, our peers working in IT administration were seeing the need for simple infrastructure that delivers solutions like virtualization, VDI, DR, and analytics as an alternative to traditional virtualization and vendors like VMware.

In the evenings, when the attendees were able to visit our booth, it was encouraging to hear from so many IT directors and managers that they are fed up with exactly the problems our HC3 solution solves, and that the prices we displayed in our booth exceeded their expectations. It is really a testament to our entire team that our product and message resonated so strongly.

I will also note that there was another vendor, whom I will not name, at the show who offers what they call a hyperconverged infrastructure solution. That vendor really brought their “A” game with a much higher level of sponsorship than Scale Computing. This being my first show, I expected us to be overshadowed by their efforts. I couldn’t have been more wrong. When the attendee voting was tallied at the awards ceremony, we walked away with three awards including Best of Show.

Scenes from the 2016 Midsize Enterprise Summit

It was only one amazing trade show in the grand scheme of things, but it has really cemented in my mind that Scale Computing is changing IT for the midmarket with simplicity, scalability, and availability at the forefront of thought.


Cloud Computing vs. Hyperconvergence

As IT departments look to move beyond traditional virtualization into cloud and hyperconverged infrastructure (HCI) platforms, they have a lot to consider. There are many types of organizations with different IT needs, and it is important to determine whether those needs align more with cloud or with HCI. Before I dig into the differences, let me go over the similarities.

Both cloud and HCI tend to offer a similar user experience highlighted by ease of use and simplicity. One of the key features of both is simplifying the creation of VMs by automatically managing the pools of resources.  With cloud, the infrastructure is all but transparent as the actual physical host where the VM is running is far removed from the user. With live migration capabilities and auto provisioning of resources, HCI can provide nearly the same experience.

As for storage, software-defined storage pooling has made storage management practically as transparent in HCI as it is in cloud. In many ways, HCI is nearly a private cloud. Without the complexity of traditional underlying virtualization architecture, HCI makes infrastructure management turnkey and lets administrators focus on workloads and applications, just like the cloud, while keeping everything on prem rather than managed by a third party.

Still, there are definite differences between cloud and HCI, so let’s get to those. I like to approach these with a series of questions to help guide the choice between cloud and on-prem HCI.

Is your business seasonal?

  • If your business is seasonal, the pay-as-you-go Opex pricing model and bursting ability of cloud might make more sense. If you need lots of computing power only during short periods of the year, cloud might be best. If your business follows a more typical schedule of steady business throughout the year with some seasonal bumps, then an on-prem Capex investment in HCI might be the best option.

Do you already have IT staff?

  • If you already have IT staff managing an existing infrastructure that you are looking to replace, an HCI solution will be easy to implement and will allow your existing staff to shift focus from infrastructure management to implementing better applications, services, and processes. If you are currently unstaffed for IT, cloud might be the way to go since you can get a number of cloud-based application services for users with very little IT administration needed. You may need some resources to help make a variety of these services work together for your business, but it will likely be less than with an on-prem solution.

Do you need to meet regulatory compliance on data?

  • If so, you are going to need to look into the implications of having your data and services hosted and managed off site by a third party. You will be reliant on the cloud provider to deliver the security levels that meet compliance. With HCI, you have complete control and can implement any level of security because the solution is on prem.

Do you favor Capex or Opex?

  • Pretty simple here. Cloud is Opex. HCI can be Capex and is usually available as Opex as well through leasing options. Cloud Opex is going to be less predictable because many of the costs are based on dynamic usage, whereas the Opex with HCI should be completely predictable with a monthly leasing fee. Considering further, the Opex for HCI is usually in the form of lease-to-own, so it drops off dramatically once the lease period ends, as opposed to cloud Opex, which is perpetual. A rough comparison is sketched below.
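
To make that Capex vs. Opex shape concrete, here is a back-of-the-envelope sketch in Python. Every dollar figure is a made-up placeholder, not Scale Computing or cloud provider pricing; the point is only the shape of the curves: cloud Opex runs forever, while an HCI lease-to-own payment drops to a smaller support fee once the lease term ends.

```python
# Hypothetical cost model -- every number here is a placeholder, not real pricing.

def cloud_cost(months, monthly_fee=2500):
    """Cloud Opex: a recurring fee that never ends (and varies with usage)."""
    return monthly_fee * months

def hci_lease_cost(months, lease_fee=3000, lease_term=36, support_fee=500):
    """HCI lease-to-own: fixed payments during the lease term,
    then only a smaller support/maintenance fee afterward."""
    lease_months = min(months, lease_term)
    remaining = max(0, months - lease_term)
    return lease_fee * lease_months + support_fee * remaining

for years in (1, 3, 5, 7):
    months = years * 12
    print(f"{years} yr:  cloud ${cloud_cost(months):>9,}   hci ${hci_lease_cost(months):>9,}")
```

Run over a long enough horizon, the perpetual cloud fee eventually crosses the lease-to-own line; where exactly that happens depends entirely on your real quotes and usage.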

Can you rely on your internet connection?

  • Cloud is 100% dependent on internet connectivity, so if your internet connection is down, all of your cloud computing is unavailable. The internet connection becomes a single point of failure for cloud. With HCI, an internet outage will not affect local access to applications and services.

Do you trust third party services?

  • If something goes wrong with cloud, you are dependent on the cloud provider to correct the issue. What if your small or medium sized cloud provider suddenly goes out of business? Whatever happens, you are left helpless, like an airline passenger waiting on the tarmac for a last minute repair. With HCI, the solution is under your control and you can take action to get systems back online.

Let me condense these into a little cheat sheet for you.

Question                                             Cloud   HCI
Is your business seasonal?                           Yes     No
Do you have IT staff?                                No      Yes
Do you need to meet regulatory compliance on data?   No      Yes
Do you favor Capex or Opex?                          Opex    Capex/Opex
Can you rely on your internet connection?            Yes     No
Do you trust third party services?                   Yes     No

One last consideration that I don’t like to put into the question category is the ability to escape the cloud if it doesn’t work out. Why don’t I like to make it a question? Maybe I just haven’t found the right way to ask it without making cloud sound like some kind of death trap for your data, and I’m not trying to throw cloud under the bus here. Cloud is a good solution where it fits. That being said, it is still a valid consideration.

Most cloud providers have great onboarding services to get your data to the cloud more efficiently but they don’t have any equivalent to move you off.  It is not in their best interest. Dragging all of your data back out of the cloud over your internet connection is not a project anyone would look forward to. If all of your critical data resides in the cloud, it might take a while to get it back on prem. With HCI it is already on prem so you can do whatever you like with it at local network speeds.
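
To put that “dragging your data back out of the cloud” problem in perspective, here is a small, hedged calculation. The data size and link speeds are illustrative assumptions, and the estimate ignores protocol overhead, throttling, and egress fees, all of which make real life worse; plug in your own numbers.

```python
# Rough transfer-time estimate -- the 20 TB figure and link speeds are assumptions.

def transfer_days(data_tb, link_mbps):
    bits = data_tb * 8 * 10**12           # decimal terabytes -> bits
    seconds = bits / (link_mbps * 10**6)  # megabits per second -> seconds
    return seconds / 86400

data_tb = 20  # hypothetical amount of critical data sitting in the cloud
for label, mbps in [("100 Mbps internet link", 100),
                    ("1 Gbps internet link", 1000),
                    ("10 GbE local network (on-prem HCI)", 10000)]:
    print(f"{label:>36}: {transfer_days(data_tb, mbps):5.1f} days")
```

Even at a full gigabit, repatriating that much data is a project measured in days; at local network speeds it is measured in hours.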

I hope that helps those who have been considering a choice between cloud and HCI for their IT infrastructure. Until next time.


It’s Not Easy Being Different


Thinking outside the box. Paradigm shift. Innovation. As tired as these words and phrases are, they are ideas we still strive to embody in technology development. But what happens when people or companies actually embrace these ideas in principle and create solutions that don’t fit the industry molds?

At Scale Computing, we challenged the traditional thinking on infrastructure design and architecture by putting forward a solution based on the idea that IT infrastructure should be so simple that anyone could manage it, with any level of experience. This idea goes against decades of IT managers being put through weeks of training and certification to manage servers, storage, and most recently, virtualization. Our idea is that infrastructure should not need multiple control panels filled with nerd knobs that must be monitored ad nauseam, but that the expertise of IT administrators should be focused on applications and business processes.

Our solution, the HC3 virtualization platform, targets midmarket customers because that is where the complexity of infrastructure architecture hits administrators the hardest. In larger enterprise IT departments, there can be multiple teams dedicated to the different silos of infrastructure maintenance and management, but in the midmarket there may be as few as one administrator in charge of infrastructure. Our founders are former IT administrators who understand the challenges and pains of managing infrastructure.

Very simply, we want to eliminate complexity from infrastructure in IT where it causes the most disruption. Complexity adds a number of costs including  downtime due to failure, training and consulting fees for expertise, and extra labor costs for implementation, management, and maintenance. In order to maximize the benefit of simplicity for our target market, we specifically designed our solution without the extra complexity required by larger enterprise organizations.  The result is a simple hardware and virtualization platform that can be implemented with running VMs in under an hour, is fully redundant and resilient against hardware failures, virtually eliminates both planned and unplanned downtime, and can be scaled out quickly and easily. Basically, you get a nearly turnkey infrastructure that you rarely have to manage.

This whole idea seems very straightforward to us here at Scale Computing, and our customers certainly seem to get it. As for the rest of the industry, including the analysts, our focus on the midmarket seems to be viewed as a liability. We purposefully do not have all of the features and complexity of solutions that target larger enterprise customers. The HC3 platform does not scale out to the same level as other solutions that target the larger enterprise. We deliberately do not support VMware so that our customers are not burdened with third-party hypervisor licensing; we include the hypervisor at no additional charge.

We are different for a reason, but to the analysts those differences do not line up with the checklists they have developed over years of looking at enterprise solutions. Because of their enterprise focus on IT infrastructure, analysts do not equate the simplicity of HC3 with being visionary and forward thinking. We’ll probably never find favor in analyst reviews unless we sell into a Fortune 500 customer, and that just isn’t on our horizon. Instead we’ll keep focusing on the solution we provide for the midmarket to simplify infrastructure, disaster recovery, VDI, and distributed enterprise.

Are we an underdog in our market?  Maybe.  But you could probably say the same about the companies we target with our solutions. They aren’t the industry giants, but rather the smaller guys who are driving local economic growth, who are nimble and innovative like we are, and who appreciate the value of a solution that actually solves the real problems that have plagued IT departments for decades.  It’s not easy being different but no one said starting an IT revolution would be easy.


Back to School – Infrastructure 101 – Part 3

This is my third and final post in this series. I’ve covered SAN and server virtualization and now I’d like to share my thoughts on the challenges of SMB IT shops vs enterprise IT.

To start, I should probably give some context on the size of an SMB IT shop. Since we are talking about infrastructure, I am really referring to IT departments that assign less than a handful of administrators to infrastructure, with the most common IT shop allocating only one or two people to it. Since the makeup of businesses varies so much in terms of numbers of IT users, external services, and so on, all of the lines do get a little blurred. It is not a perfect science, but here’s hoping my points will be clear enough.

SMB

Small and medium businesses, sometimes referred to as small and midmarket, have some unique challenges compared to larger enterprise customers. One of those challenges is being a jack of all trades, master of none. Now, there are some very talented and dedicated administrators out there who can master many aspects of IT over time, but often the day-to-day tasks of keeping the IT ship afloat make it impossible for administrators to gain expertise in any particular area. There just isn’t the budget or training time to have enough expertise on staff. Without a large team bringing together many types of expertise, administrators must make use of technology solutions that help them do more with less.

Complexity is the enemy of the small IT department during all phases of the solution lifecycle including implementation, management, and maintenance. Complex solutions that combine a number of different vendors and products can be more easily managed in the enterprise but become a burden on smaller IT shops that must stretch their limited knowledge and headcount. Projects then turn into long nights and weekends and administrators are still expected to manage normal business hour tasks. Some administrators use scripting to automate much of their IT management and end up with a highly customized environment that becomes hard to migrate away from when business needs evolve.

Then there is the issue of brain drain. Smaller IT shops cannot easily absorb the loss of key administrators who may be the only ones intimately familiar with how all of the systems interconnect and operate. When those administrators leave, sometimes suddenly, they leave a huge gap in knowledge that cannot easily be filled. This is much less of a problem in the enterprise, where an individual administrator is one of a team and has many others who can fill that gap. The loss of a key administrator in the SMB can be devastating to IT operations going forward.

To combat brain drain, SMB IT shops benefit from having fewer vendors and products: a simpler IT environment requires less specialized training and lets a new administrator come up to speed quickly on the technology in use. High levels of automation built into the vendor solution for common IT tasks, along with simple, unified management tools, help the transition from one administrator to the next.

For SMB, budgets can vary wildly, from shoestring on up. The idea of doing more with less is much more on the minds of SMB administrators. SMBs are not as resilient to unexpected costs associated with IT disasters and other types of unexpected downtime. Support is one of the first lines of insurance for SMBs, and dealing with multiple vendors and support run-arounds can be paralyzing at those critical moments, especially for SMBs that could not budget for higher levels of support. Having resilient, reliable infrastructure with responsive, premium support can make a huge difference in protecting SMBs from the types of failure and disaster that could be critical to business success.

Ok, enough about the SMB, time to  discuss the big guys.

Enterprise

Both SMB and enterprise organizations have processes, although the level of reliance on process is much higher in the enterprise. An SMB organization can typically adapt process easily and quickly to match technology, whereas an enterprise organization can be much more fixed in process, and technology must be changed to match the process. The enterprise therefore employs a large number of administrators, developers, consultants, and other experts to create complex systems to support their business processes.

The enterprise can withstand more complexity because they are able to have more experts on staff who can focus management efforts on single silos of infrastructure such as storage, servers, virtualization, security, etc.  With multiple administrators assigned to each silo, there is guaranteed management coverage to deal with any unexpected problems.  Effectively, the IT department (or departments) in the enterprise have a high combined level of expertise and manpower, or have the budget to bring in outside consultants and service providers to fill these gaps as a standard practice.

Unlike with SMB, simplicity is not necessarily a benefit to the enterprise, since they need the flexibility to adapt to business process. Infrastructure can therefore be a patchwork of systems serving different needs such as high performance computing, data warehousing, data distribution, and disaster recovery. Solutions for these enterprise operations must be extensible and adaptable to the user’s processes to meet the compliance and business needs of these organizations.

Enterprise organizations are usually big enough that they can tolerate different types of failures better than SMB, although as we have seen in recent news, even companies like Delta Airlines are not immune to near catastrophic failures.  Still, disk failures or server failures that could bring an SMB to a standstill might barely cause a ripple in a large enterprise given the size of their operations.

Summary

The SMB benefits from infrastructure simplicity because it helps eliminate a number of challenges and unplanned costs.  For the enterprise, the focus is more on flexibility, adaptability, and extensibility where business processes reign supreme. IT challenges can be more acute in the SMB simply because the budgets and resources are more limited in both headcount and expertise. Complex infrastructure designed for the enterprise is not always going to translate into effective or viable solutions for SMB. Solution providers need to be aware that the SMB may need more than just a scaled down version of an enterprise solution.


Back to School – Infrastructure 101 – Part 2

I covered SAN technology in my last Infrastructure 101 post, so for today I’m going to cover server virtualization and maybe delve into containers and cloud.

Server virtualization as we know it now is based on hypervisor technology. A hypervisor is an operating system that allows sharing of physical computing resources such as networking, CPU, RAM, and storage among multiple virtual machines (sometimes called virtual servers). Virtual machines replaced traditional physical servers that each had their own physical chassis with storage, RAM, networking, and CPU. To understand the importance of hypervisors, let’s look at a bit of history.

Early on, computing was primarily done on mainframes, which were monolithic machines designed to provide all of the computing necessary for an organization. They were designed to share resources among various parallel processes to accommodate multiple users. As computing needs grew, organizations began to move away from the monolithic architecture of the mainframe to hosting multiple physical servers that were less expensive and that would run one or more applications for multiple users. Physical servers could range in size and capacity from very large, rivaling mainframes, down to very small, resembling personal computers.

While mainframes never disappeared completely, the flexibility in cost and capacity of physical servers made them an infrastructure of choice across all industries. Unfortunately, as computing needs continued to grow, organizations began needing more and more servers, and more administrators to manage the servers. The size of server rooms, along with the power and cooling needs were honestly becoming ridiculous.

A number of technologies emerged resembling what we now call server virtualization, allowing the compute and storage resources of a single physical box to be divided among different virtualized servers, but those never became mainstream. Virtualization didn’t really take off until hypervisor technology for the x86 platform came around, which happened at the same time as other platforms were declining in the server market.

Initially, virtualization was not adopted for production servers but instead was used extensively for testing and development, because it lacked some of the performance and stability needed for production. The widespread use for test and dev eventually led to improvements that made administrators confident with its use on production servers. The combination of performance improvements and clustering to provide high availability for virtual machines opened the door to widespread adoption for production servers.

The transition to virtualization was dramatic, reducing server rooms that once housed dozens and dozens of server racks to only a handful of server racks for the host servers and storage on which all of the same workloads ran. It is now difficult to find an IT shop that is still using physical servers as their primary infrastructure.

While many hypervisors battled to become the de facto solution, several were widely adopted, including Xen and KVM (both open source), Hyper-V, and VMware ESX/ESXi, which took the lion’s share of the market. Those hypervisors or their derivatives continue to battle for market share today, after more than a decade. Cloud platforms have risen, built over each of these hypervisors, adding to the mystery of whether a de facto hypervisor will ever emerge. But maybe it no longer matters.

Virtualization has now become a commodity technology. It may not seem so to VMware customers who are still weighing various licensing options, but server virtualization is pretty well baked and the innovations have shifted to hyperconvergence, cloud, and container technologies. The differences between hypervisors are few enough that the buying decisions are often based more on price and support than technology at this point.

This commoditization of server virtualization does not necessarily indicate any kind of decline in virtualization anytime soon, but rather a shift in thinking from traditional virtualization architectures. While cloud is driving innovation in multi-tenancy and self-service, hyperconvergence is fueling innovation in how hardware and storage can be designed and used more efficiently by virtual machines (as per my previous post about storage technologies).

IT departments are beginning to wonder whether the baggage of training and management infrastructure for server virtualization is still a requirement or whether, as a commodity, server virtualization should no longer be so complex. Is being a virtualization expert still a badge of honor, or is it now a default expectation for IT administrators? And with hyperconvergence and cloud technologies simplifying virtual machine management, what level of expertise is really still required?

I think the main takeaway from the commoditization of server virtualization is that as you move to hyperconvergence and cloud platforms, you shouldn’t need to know what the underlying hypervisor is, nor should you care, and you definitely shouldn’t have to worry about licensing it separately. They say you don’t understand something unless you can explain it to a 5 year old. It is time for server virtualization to be easy enough that a 5 year old can provision virtual machines instead of requiring a full time, certified virtualization expert. Or maybe even a 4 year old.
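
As a concrete illustration of how commoditized the plumbing has become, here is a minimal sketch using the open-source libvirt Python bindings against a plain KVM host. To be clear, this is a generic KVM example and not HC3’s interface (HC3 hides all of this behind its web UI), and the domain XML is deliberately bare-bones, with no disks or NICs.

```python
import libvirt  # open-source libvirt bindings for KVM (pip install libvirt-python)

# Connect to the local KVM hypervisor.
conn = libvirt.open("qemu:///system")

# List the virtual machines that already exist on this host.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "stopped"
    print(f"{dom.name():24} {state}")

# A deliberately minimal VM definition: 1 vCPU, 1 GiB RAM, no disks or NICs.
# A real definition would add <disk> and <interface> elements.
demo_xml = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='GiB'>1</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

dom = conn.defineXML(demo_xml)  # register the VM persistently
dom.create()                    # power it on
print(f"Started {dom.name()}")

conn.close()
```

Even this “simple” example assumes a working libvirt install and hand-written XML, which is exactly the kind of detail a hyperconverged platform should absorb for you.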


HEAT Up I/O with a Flash Retrofit

If your HC3 workloads need better performance and faster I/O, you can soon take advantage of flash storage without having to replace your existing cluster nodes. Scale Computing is rolling out a service to help you retrofit your existing HC2000/2100 or HC4000/4100 nodes with flash solid state drives (SSDs) and update your HyperCore version so you can start using hybrid flash storage without any downtime. You get the full benefits of HyperCore Enhanced Automated Tiering (HEAT) in HyperCore v7 when you retrofit with flash drives.

You can read more about HEAT technology in my blog post, Turning Hyperconvergence to 11.

Now, before you start ordering your new SSD drives for flash storage retrofit, let’s talk about the new storage architecture designed to include flash. You may already be wondering how much flash storage you need and how it can be divided among the workloads that need it, or even how it will affect your future plans to scale out with more HC3 nodes.

The HC3 storage system uses wide striping across all nodes in the cluster to provide maximum performance and availability in the form of redundancy across nodes.  With all spinning disks, any disk was a candidate for redundant writes from other nodes.  With the addition of flash, redundancy is intelligently segregated between flash and spinning disk storage to maximize flash performance.   

A write to a spinning disk will be redundantly written to a spinning disk on another node, and a write to an SSD will be redundantly written to an SSD on another node. Therefore, just as you need at least three nodes of storage and compute resources in an HC3 cluster, you need a minimum of three nodes with SSD drives to take advantage of flash storage.
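
To picture that tier-matched redundancy, here is a toy sketch. It is emphatically not Scale’s actual HyperCore code, just an illustration of the rule described above: an SSD write is mirrored to an SSD on a different node, and a spinning-disk write to a spinning disk on a different node.

```python
# Toy model of tier-matched replica placement -- illustrative only.

CLUSTER = {
    "node1": {"ssd", "hdd"},
    "node2": {"ssd", "hdd"},
    "node3": {"ssd", "hdd"},
    "node4": {"hdd"},          # a node that has not been retrofitted with SSD
}

def pick_replica_node(primary_node, tier):
    """Choose a different node that offers the same storage tier."""
    candidates = [name for name, tiers in CLUSTER.items()
                  if name != primary_node and tier in tiers]
    if not candidates:
        raise RuntimeError(f"no other node offers tier '{tier}'")
    return min(candidates)  # stand-in for a real load-balancing choice

print(pick_replica_node("node1", "ssd"))  # an SSD write lands on another SSD node
print(pick_replica_node("node4", "hdd"))  # an HDD write lands on another HDD node
```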

Consider also, with retrofitting, that you will be replacing an existing spinning disk drive with the new SSD. The new SSD may be of a different capacity than the disk it is replacing, which might affect your overall storage pool capacity. You may already be in a position to add overall capacity, where larger SSD drives are the right fit, or adding an additional flash storage node along with the retrofit may be the right choice. You can get to the three node minimum of SSD nodes through any combination of retrofitting and adding new SSD-tiered nodes to the cluster.

Retrofitting existing clusters is being provided as a service which will include our Scale Computing experts helping you assess your storage needs to determine the best plan for you to incorporate flash into your existing HC3 cluster. Whether you have a small, medium, or large cluster implementation, we will assist you in both planning and implementation to avoid any downtime or disruption.

However you decide to retrofit and implement flash storage in your HC3 cluster, you will immediately begin seeing the benefits as new data is written to high performing flash and high I/O blocks from spinning disk are intelligently moved to flash storage for better performance. Furthermore, you have full control of how SSD is used on a per virtual disk basis. You’ll be able to adjust the level of SSD usage on a sliding scale to take advantage of both flash and spinning disk storage where you need each most. It’s the flash storage solution you’ve been waiting for.
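
The following is a deliberately simplified sketch of what heat-based tiering looks like in principle: count block accesses, weight them by a per-virtual-disk SSD priority (the sliding scale mentioned above), and keep the hottest blocks on flash. It illustrates the concept only; it is not Scale’s actual HEAT algorithm.

```python
from collections import Counter

# Conceptual illustration only -- not Scale's HEAT implementation.

class TieringModel:
    def __init__(self, ssd_capacity_blocks):
        self.ssd_capacity = ssd_capacity_blocks
        self.heat = Counter()   # (vdisk, block) -> weighted access score
        self.priority = {}      # vdisk -> SSD priority slider, 0.0 .. 1.0

    def set_priority(self, vdisk, level):
        """Per-virtual-disk control over how aggressively it uses flash."""
        self.priority[vdisk] = level

    def record_io(self, vdisk, block):
        # Hotter blocks on higher-priority virtual disks score higher.
        self.heat[(vdisk, block)] += self.priority.get(vdisk, 0.5)

    def blocks_on_ssd(self):
        """The highest-scoring blocks, up to SSD capacity, live on flash."""
        ranked = sorted(self.heat, key=self.heat.get, reverse=True)
        return set(ranked[:self.ssd_capacity])

model = TieringModel(ssd_capacity_blocks=2)
model.set_priority("sql-data", 1.0)   # I/O-hungry database disk
model.set_priority("archive", 0.1)    # cold archive disk
for _ in range(10):
    model.record_io("sql-data", block=7)
model.record_io("archive", block=3)
print(model.blocks_on_ssd())          # the busy sql-data block earns a flash slot
```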

Don’t hesitate to contact your Scale Computing representatives to ask for more information on HC3 flash storage today.


Back to School – Infrastructure 101

As a back to school theme, I thought I’d share my thoughts on infrastructure over a series of posts.  Today’s topic is SAN.

Storage Area Networking (SAN) is a technology that solved a real problem that existed a couple decades ago. SANs have been a foundational piece of IT infrastructure architecture for a long time and have helped drive major innovations in storage.  But how relevant are SANs today in the age of software-defined datacenters? Let’s talk about how we have arrived at modern storage architecture.

First, disk arrays were created to house more storage than could fit into a single server chassis. Storage needs were outpacing the capacity of individual disks and the limited disk slots available in servers. But adding more disk to a single server led to another issue: available storage capacity was trapped within each server. If Server A needed more storage and Server B had a surplus, the only way to redistribute was to physically remove a disk from Server B and add it to Server A. This was not always so easy, because it might mean breaking up a RAID configuration, or there simply might not be the controller capacity for the disk on Server A. It usually meant ending up with a lot of over-provisioned storage, ballooning the budget.

SANs solved this problem by making a pool of storage accessible to servers across a network. It was revolutionary because it allowed LUNs to be created and assigned more or less at will to servers across the network. The network was fibre channel in the beginning because ethernet LAN speeds were not quite up to snuff for disk I/O. It was expensive and you needed fibre channel cards in each server you needed connected to the SAN, but it still changed the way storage was planned in datacenters.

Alongside SAN, you had Network Attached Storage (NAS) which had even more flexibility than SAN but lacked the full storage protocol capabilities of SAN or Direct Attached Storage.  Still, NAS rose as a file sharing solution alongside SAN because it was less expensive and used ethernet.

The next major innovation was iSCSI, which originally debuted before its time. The iSCSI protocol allowed SANs to be used over standard ethernet connections. Unfortunately, ethernet networks took a little longer to become fast enough for iSCSI to take off, but eventually it started to replace fibre channel networks for SAN as 1Gb and 10Gb networks became accessible. With iSCSI, SANs became even more accessible to all IT shops.

The next hurdle for SAN technology was self-inflicted. The problem was that an administrator might now be managing 2 or more SANs on top of NAS and server-side Direct Attached Storage (DAS), and these different components did not necessarily play well together. There were so many SAN and NAS vendors using proprietary protocols and management tools that storage was once again a burden on IT. Then along came virtualization.

The next innovation was virtual SAN technology. There were two virtualization paths that affected SANs. One path was trying to solve the storage management problem I had just mentioned, and the other path was trying to virtualize the SAN within hypervisors for server virtualization. These paths eventually crossed as virtualization became the standard.

Virtual SAN technology initially grew from outside SAN, not within, because SAN was big business and virtual SAN technology threatened traditional SAN. When approaching server virtualization, though, virtualizing storage was a do-or-die imperative for SAN vendors. Outside of the SAN vendors, software solutions saw the possibility of using iSCSI protocols to place a layer of virtualization over SAN, NAS, and DAS and create a single, virtual pool of storage. This was a huge step forward in the accessibility of storage, but it came at the cost of purchasing the virtual SAN technology on top of the existing SAN infrastructure, and at a cost in efficiency, because it effectively added one or, in some cases, several more layers of I/O management and protocols to what already existed.

When SANs (and NAS) were integrated into server virtualization, it was primarily done with Virtual Storage Appliances that were virtual servers running the virtual SAN software on top of the underlying SAN architecture.  With at least one of these VSAs per virtual host, the virtual SAN architecture was consuming a lot of compute resources in the virtual infrastructure.

So virtual SANs were a mess. If it hadn’t been for faster CPUs with more cores, cheaper RAM, and flash storage, virtual SANs would have been a non-starter based on I/O efficiency. Virtual SANs seemed to be the way things were going but what about that inefficiency?  We are now seeing some interesting advances in software-defined storage that provide the same types of storage pooling as virtual SANs but without all of the layers of protocol and I/O management that make it so inefficient.

With DAS, servers have direct access to the hardware layer of the storage, providing the most efficient I/O path outside of raw storage access. The direct attached methodology can be and is being used in storage pooling by some storage technologies like HC3 from Scale Computing. All of the baggage that virtual SANs brought from traditional SAN architecture, and the multiple layers of protocol and management they added, doesn’t need to exist in a software-defined storage architecture that doesn’t rely on old SAN technology.

SAN was once a brilliant solution to a real problem and had a good run of innovation and enabling the early stages of server virtualization. However, SAN is not the storage technology of the future and with the rise of hyperconvergence and cloud technologies, SAN is probably seeing its sunset on the horizon.


Don’t Double Down on Infrastructure – Scale Out as Needed

There has long been a philosophy in IT infrastructure that whenever you add capacity, you add plenty of room to grow into. This idea is based on traditional architecture that was complex, consisting of many disparate systems held together by the rigorous management of the administrators. The process of scaling out capacity has been a treacherous one that takes weeks or months of stress-filled nights and weekends. These projects are so undesirable that administrators, and anyone else involved, would rather spend more than they would like, buying more capacity than they need, in order to put off scaling out again for as long as possible.

There are a number of reasons why IT departments may need to scale out. Hopefully it is because of growth of the business which usually coincides with increased budgets.  It could be that business needs have shifted to require more IT services, demanding more data, more computing, and thus more capacity. It could be that the current infrastructure was under-provisioned in the first place and creating more problems than solutions. Whatever the case, sooner or later, everyone needs to scale out.

The traditional planning process for scaling out involves first looking at where the capacity is bottlenecking. It could be storage, CPU, RAM, networking, or any level of caching or bussing in between. More than likely it is not just one of these but several, which causes many organizations to simply hit the reset button and replace everything, if they can afford it, that is. Then they implement the new infrastructure only to go through the same process a few years down the line. Very costly. Very inefficient.

Without replacing the whole infrastructure, administrators must look to the various pieces of their infrastructure that might need to be refreshed or upgraded. This process can seem like navigating a minefield of unforeseen consequences. Maybe you want to swap out disks in the SAN for faster, larger disks. Can the storage controllers handle the increased speed and capacity? What about the network? Can it handle the increased I/O from faster and deeper storage? Can the CPUs handle it? Good administrators can identify at least some of these dependencies during planning, but it can often take a team of experts to fully understand the complexities, and then sometimes only through testing and some trial and error.

Exhausted yet? Fortunately, this process of scaling out has been dramatically simplified with hyperconverged infrastructure.  With a clustered, appliance-based architecture, capacity can be added very quickly. For example, with HC3 from Scale Computing, a new appliance can be added to a cluster within minutes, with resources then immediately available, adding RAM, CPU, and storage capacity to the infrastructure.

HC3 even lets you mix and match different appliances in the cluster so that you can add just the capacity you need. Adding the new appliance to the cluster (where it is then called a “node”, of course) is as simple as racking and cabling it and then assigning it with network settings and pointing it at the cluster. The capacity is automatically absorbed into the cluster and the storage added seamlessly to the overall storage pool.

This all means that with hyperconverged infrastructure, you do not need to buy capacity for the future right now. You can get just what you need now (with a little cushion of course), and scale out simply and quickly when you need to in the future. The traditional complexity of infrastructure architecture is now the real bottleneck of capacity scale out.  Hyperconverged Infrastructure is the solution.
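
As a rough illustration of the economics, here is a hypothetical comparison of buying three years of projected capacity up front versus adding a node per year only as growth actually happens. All prices and node counts are invented for the example, not HC3 pricing.

```python
# Hypothetical numbers -- not actual HC3 or competitor pricing.

NODE_PRICE = 15_000             # assumed cost of one hyperconverged node
NODES_NEEDED_NOW = 3            # minimum cluster today
NODES_PROJECTED_IN_3_YEARS = 6  # a projection that may never materialize

# Traditional approach: buy for the three-year projection on day one.
upfront_spend = NODES_PROJECTED_IN_3_YEARS * NODE_PRICE

# Scale-out approach: start with what you need, add one node per year
# only if growth actually demands it.
year_zero_spend = NODES_NEEDED_NOW * NODE_PRICE
worst_case_total = year_zero_spend + 3 * NODE_PRICE

print(f"Up-front buy:                  ${upfront_spend:,}")
print(f"Scale-out, year 0:             ${year_zero_spend:,}")
print(f"Scale-out, worst case (3 yrs): ${worst_case_total:,}")
```

Even in the worst case the totals converge, but the scale-out path defers more than half the spend, and if the projected growth never arrives, that deferred capacity is never bought at all.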


7 Reasons Why I Work at Scale Computing

I came from a background in software that has spanned software testing, systems engineering, product marketing, and product management, and this year my career journey brought me to Scale Computing as the Product Marketing Manager. During the few months I have been with Scale, I’ve been amazed by the hard work and innovation embodied in this organization.  Here are some of the reasons I joined Scale and why I love working here.

1 – Our Founding Mission

Our founders are former IT administrators who understand the challenges faced by IT departments with limited budgets and staff. They wanted to reinvent IT infrastructure to solve those challenges and get IT focused on applications. That’s why they helped coin the term “hyperconverged infrastructure”.

2 – Focus on the Administrator

Our product family, HC3, was always designed to address the needs of datacenters managed by as few as one administrator by combining features and efficiency achieved in enterprise solutions for any budget.  HC3 scales from small to enterprise because its roots are planted in the needs of the individual administrator focused on keeping applications available.

3 – Second to None Support

I have a firm belief that good support is the cornerstone of successful IT solutions. Our world class support not only includes hardware replacement but 24/7/365 phone support from only qualified experts.  We don’t offer any other level of support because we believe every customer, no matter their size or budget, deserves the same level of support.

4 – 1500+ Customers, 5500+ Installs

Starting in 2008 and bringing HC3 to market in 2012, we’ve sold to customers in nearly every industry, including manufacturing, education, government, healthcare, finance, hotel/restaurant, and more. Customer success is our driving force, and our solution is driving that success.

5 – Innovative Technology

We designed the HC3 solution from the ground up.  Starting with the strengths of open source KVM virtualization, we developed our own operating system called HyperCore which includes our own block access, direct attached storage system with SSD tiering for maximum storage efficiency. We believe that if it is worth doing then it is worth doing the right way.

6 – Simplicity, Scalability, and Availability

These core ideas keep us focused on reducing costs and management when it comes to deployment, software and firmware updates, capacity scaling, and minimizing planned and unplanned downtime.  I believe in our goal to minimize the cost and management footprint of infrastructure to free up resources for application management and service delivery in IT.

7 – Disaster Recovery, VDI, and Distributed Enterprise

HC3 is more than just a simple infrastructure solution. It is an infrastructure platform that supports multiple use cases including disaster recovery sites, virtual desktop infrastructure, and remote office and branch office infrastructure. I love that the flexibility of HC3 allows it to be used in nearly every type of industry.

Scale Computing is more than just an employer; it is a new passion for me. I hope you keep following my blog posts to learn more about the awesome things we are doing here at Scale and I hope we can help you bring your datacenter into the new hyperconvergence era. If you have any questions or feedback about my blog posts, hyperconvergence, or Scale Computing, you can contact me at dpaquette@scalecomputing.com.


Hyperconvergence for the Distributed Enterprise

IT departments face a variety of challenges but maybe none as challenging as managing multiple sites. Many organizations must provide IT services across dozens or even hundreds of small remote offices or facilities. One of the most common organizational structures for these distributed enterprises is a single large central datacenter where IT staff are located supporting multiple remote offices where personnel have little or no IT expertise.

These remote sites often need the same variety of application and data services needed in the central office, but on a smaller scale. To run these applications, these sites need multiple servers, storage solutions, and disaster recovery. With no IT staff on site, remote management is ideal to cut down on the productivity cost of frequently sending IT staff to remote sites to troubleshoot issues. This is where the turnkey appliance approach of hyperconvergence shines.

A hyperconverged infrastructure solution combines server, storage, and virtualization software into a single appliance that can be clustered for scalability and high availability. It eliminates the complexity of having disparate server hardware, storage hardware, and virtualization software from multiple vendors and having to try to replicate the complexity of that piecemeal solution at every site.  Hyperconverged infrastructure provides a simple repeatable infrastructure out of the box.  This approach makes it easy to scale out infrastructure at sites on demand from a single vendor.

At Scale Computing, we offer the HC3 solution, which truly combines server, storage, virtualization, and even disaster recovery and high availability. We provide a large range of hardware configurations to support everything from very small implementations up to full enterprise datacenter infrastructure. And because these various node configurations can be mixed and matched, you can very quickly scale the infrastructure at a site with extra capacity and/or compute power as you need it.

HC3 management is all web-based so sites can easily be managed remotely. From provisioning new virtual machines to opening consoles for each VM for simple and direct management from the central datacenter, it’s all in the web browser. There is even a reverse SSH tunnel available for ScaleCare support to provide additional remote management of lower level software features in the hypervisor and storage system. Redundant hardware components and self healing mean that hardware failures can be absorbed while applications remain available until IT staff or local staff can replace hardware components.  

With HC3, replication is built in to provide disaster recovery and high availability back to the central datacenter in the event of entire site failure. Virtual machines and applications can be back up and running within minutes to allow remote connectivity from the remote site as needed. You can achieve both simplified infrastructure and remote high availability in a single solution from a single vendor. One back to pat or one throat to choke, as they say.

If you want to learn more about how hyperconvergence can make distributed enterprise simpler and easier, talk to one of our hyperconvergence experts.
