What Do DDoS Attacks Mean for Cloud Users?

Last Friday, a DDoS attack disrupted major parts of the internet in both North America and Europe. The attack seems to have largely targeted DNS provider Dyn, disrupting access to major services such as Level 3, Zendesk, Okta, GitHub, PayPal, and more, according to sources like Gizmodo. This kind of botnet-driven DDoS attack is a harbinger of future attacks made possible by an increasingly connected world of Internet of Things (IoT) devices, many of them poorly secured.


This disruption highlights a particular vulnerability for businesses that have chosen to rely on cloud-based services like IaaS, SaaS, or PaaS. The ability to connect to these services is critical to business operations; even if the service itself is running, it counts as downtime when users cannot connect. What is particularly scary about these attacks, especially for small and midmarket organizations, is that they can become victims of circumstance from attacks directed at larger targets.

As the IoT becomes more of a reality, with more and more devices of questionable security joining the internet, the potential frequency and severity of these attacks will only increase. I recently wrote about how to compare cloud computing and on-prem hyperconverged infrastructure (HCI) solutions, and one of the decision points was reliance on the internet. It is not only a matter of ensuring a stable internet provider, but also of the stability of the internet in general when attacks can target any number of different services.

Organizations running services on-prem were not affected by this attack because it did not affect any internal network environments. Choosing to run infrastructure and services internally definitely mitigates the risk of outage from external forces like collateral damage from attacks on service providers. Many organizations that choose cloud services do so for simplicity and convenience because traditional IT infrastructure, even with virtualization, is complex and can be difficult to implement, particularly for small and midsize organizations. It has only been recently that hyperconverged infrastructure has made on-prem infrastructure as simple to use as the cloud.

It is still uncertain how organizations will ultimately balance their IT infrastructure between on-prem and cloud in what is loosely called hybrid cloud. Likely it will simply continue to evolve as new technology emerges. At the moment, however, organizations can choose easy-to-use hyperconverged infrastructure for increased security and stability, or cloud providers for completely hands-off management at the cost of third-party reliance.

As I mentioned in my cloud vs. HCI article, there are valid reasons to go with either, and the right solution may well be a combination of the two. Organizations should be aware that on-prem IT infrastructure no longer needs to be a complicated mess of server vendors, storage vendors, hypervisor vendors, and DR solution vendors. Hyperconverged infrastructure is a viable option for organizations of any size to keep services on-prem, stable, and secure against collateral DDoS damage.


IT Infrastructure: Deploy. Integrate. Repeat.

Have you ever wondered if you are stuck in an IT infrastructure loop, continuously deploying the same types of components and integrating them into an overall infrastructure architecture? Servers for CPU and RAM, storage appliances, hypervisor software, and disaster recovery software/appliances are just some of the different components that you’ve put together from different vendors to create your IT infrastructure.

This model of infrastructure design, combining components from different vendors, has been around for at least a couple of decades. Virtualization has reduced the hardware footprint, but it added one more component, the hypervisor, to the overall mix. As component technologies like compute and storage have evolved alongside the rise of virtualization, they have been modified to work together but have not necessarily been optimized for efficiency.

Take storage, for example. SANs were an obvious fit for virtualization early on. However, the layers of storage protocols and virtual storage appliances used to combine the SAN with virtualization were never efficient. If not for SSD storage, the performance of these systems would be unacceptable at best. But IT continues to implement these architectures because it has been done this way for so long, regardless of the inherent inefficiencies. Luckily, the next generation of infrastructure has arrived in the form of hyperconvergence to break this routine.

Hyperconverged infrastructure (HCI) combines compute, storage, virtualization, and even disaster recovery into a single appliance that can be clustered for high availability.  No more purchasing all of the components separately from different vendors, no more making sure all of the components are compatible, and no more dealing with support and maintenance from multiple vendors on different schedules.

Not all HCI systems are equal, though, as some still rely on separate components. Some use third-party hypervisors that require separate licensing. Some still adhere to SAN architectures built on virtual storage appliances (VSAs) or other inefficient storage designs that impose excessive resource overhead and depend on SSD caching to overcome their inefficiencies.

HCI not only reduces vendor management and complexity but, when done correctly, embeds storage in the hypervisor and offers it to VM workloads as a direct-attached, block-access storage system. This significantly improves storage I/O performance for virtualization. The architecture provides excellent performance even on spinning disk, so when SSD is added as a second storage tier, performance improves further still. And because the storage is included in the appliance, there is no separate SAN appliance to manage.

HCI goes even further in simplifying IT infrastructure by allowing management of the whole system from a single interface. Because the architecture is managed as a single, prevalidated unit, no effort is spent making sure the various components work together. When the system is truly hyperconverged, including the hypervisor, there is greater control over automation, so software and firmware updates can be applied without disrupting running VMs. For scaling out, new appliances can likewise be added to a cluster without disruption.
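To make the non-disruptive update idea a little more concrete, here is a minimal sketch of the general rolling-update pattern a hyperconverged cluster can automate: live-migrate VMs off a node, update it, and move on. The cluster, node, and VM objects and their method names are hypothetical illustrations, not any vendor’s actual API.

```python
# Hypothetical sketch of a non-disruptive rolling update across a cluster.
# The cluster/node/VM objects and method names are illustrative only.

def rolling_update(cluster, update_image):
    for node in cluster.nodes:
        # Live-migrate every running VM to another node so workloads
        # never see downtime while this node is updated.
        for vm in node.running_vms():
            target = cluster.pick_target_node(exclude=node)
            vm.live_migrate(to=target)

        # With the node drained, apply firmware/software and bring it back.
        node.enter_maintenance_mode()
        node.apply_update(update_image)
        node.reboot_and_wait_until_healthy()
        node.exit_maintenance_mode()

        # Optionally spread VMs back out before moving to the next node.
        cluster.rebalance()
```

The point is not the specific calls but that the cluster, rather than the administrator, sequences the drain, update, and rebalance steps.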

The result of these simplifications and improvements is an infrastructure that can be deployed quickly, scaled easily, and managed with very little effort. It embodies many of the benefits of the cloud, where the infrastructure is virtually transparent. Instead of spending time managing infrastructure, administrators can focus on apps and processes rather than hardware.

Infrastructure should no longer require certified storage experts, virtualization experts, or any kind of hardware experts. Administrators should no longer need entire weekends or month-long projects to deploy and integrate infrastructure, or spend sleepless nights dealing with failures. Hyperconvergence breaks the cycle of assembling infrastructure from a patchwork of vendors and components. Instead, it makes infrastructure a simple, available, and trusted commodity.


Scale Computing Keeps Storage Simple and Efficient

Hyperconvergence is the combination of storage, compute, and virtualization. In a traditional virtualization architecture, combining these three components from different vendors can be complex and unwieldy without the right number of experts and administrators. When hyperconverged into a single solution, the complexity can be eliminated, if done correctly.

At Scale Computing we looked at the traditional architecture to identify the complexity we wanted to eliminate. The storage architecture that used SAN or NAS storage for virtualization turned out to be very complex. In order to translate storage from the SAN or NAS to a virtual machine, we counted 7 layers of object files, file systems, and protocols that I/O had to traverse to go from the VM to the hardware. Why was this the case?

Because the storage system and the hypervisor were from different vendors, and not designed specifically to work with each other, they needed these layers of protocol translation to integrate. The solution at Scale Computing for our HC3 was to own the hypervisor (HyperCore OS) and the storage system (SCRIBE) so we could eliminate these extra layers and make storage work with VMs just like direct attached storage works with a traditional server. I call it a Block Access, Direct Attached Storage System because I like the acronym.
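As a toy illustration of the difference in path length (the layer names are generic examples, not Scale’s actual internals or any vendor’s exact stack), the sketch below simply counts the layers an I/O might cross in a VSA-style architecture versus a hypervisor-embedded, block-access design:

```python
# Toy comparison of I/O path length; layer names are generic examples,
# not any vendor's actual stack.

VSA_STYLE_PATH = [
    "guest filesystem in the VM",
    "virtual disk object (VM disk file)",
    "virtual storage appliance (VSA) filesystem",
    "storage protocol (iSCSI/NFS)",
    "virtual switch / network stack",
    "SAN/NAS volume and LUN mapping",
    "physical disk",
]

EMBEDDED_BLOCK_PATH = [
    "guest filesystem in the VM",
    "hypervisor-embedded block storage layer",
    "physical disk",
]

for name, path in (("VSA-style", VSA_STYLE_PATH),
                   ("Hypervisor-embedded", EMBEDDED_BLOCK_PATH)):
    print(f"{name}: {len(path)} layers")
    for layer in path:
        print("  ->", layer)
```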

Why didn’t other “hyperconverged” vendors do the same? Primarily because they are not really hyperconverged and they don’t own the hypervisor. As with traditional virtualization architectures, the storage and hypervisor come from different vendors, which prevents efficiently integrated storage for VMs. These are storage systems designed to support one or more third-party hypervisors. These solutions generally use virtual storage appliances (VSAs) with more or less the same storage architecture as the traditional virtualization I mentioned earlier.

VSAs not only add to the inefficiency but they consume CPU and RAM resources that could otherwise be used by VM workloads.  To overcome these inefficiencies, these solutions use flash storage for caching to avoid performance issues. In some cases, these solutions have added extra processing cards to their hardware nodes to offload processing. Without being able to provide efficient storage on commodity hardware, they just can’t compete with the low price AND storage efficiency of the HC3.

The efficiency of design for HC3 performance and low price is only part of the story. We also designed the storage to combine all of the disks in a cluster into a single pool that is wide striped across the cluster for redundancy and high availability.  This pooling also allows for complete flexibility of storage usage across all nodes.  The storage pool can contain both SSD and HDD tiers and both tiers are wide striped, highly available, and accessible across the entire virtualization cluster, even on nodes that may have no physical SSD drives.

To keep the tiering both simple and efficient, we designed our own automated tiering mechanism to automatically utilize the SSD storage tier for the blocks of data with the highest I/O.  By default, the storage will optimize the SSD tier for the best overall storage efficiency without anything to manage. We wanted to eliminate the idea that someone would need a degree or certification in storage to use virtualization.

We did recognize that users might occasionally need some control over storage performance, so we implemented a simple tuning mechanism that gives each disk in a cluster a relative level of SSD utilization priority within the cluster. This means you can tune a disk up or down, on the fly, if you know that disk requires less or more I/O and SSD than other disks. You don’t need to know how much SSD it needs, only that it needs less or more than other disks in the cluster, and the automation takes care of the rest. We included 12 levels of prioritization, from 0 (no SSD) up to 11 (keep all of that disk’s data on SSD, if available).
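To show roughly how priority-weighted tiering could behave, here is a simplified sketch. It is not the actual HyperCore/SCRIBE algorithm; the data structures and the weighting are illustrative assumptions. The idea is to weight each block’s observed I/O heat by its disk’s 0-11 priority and keep the hottest weighted blocks on SSD:

```python
# Simplified sketch of priority-weighted SSD tiering.
# Not the actual SCRIBE implementation; data structures are illustrative.

def place_blocks(blocks, ssd_capacity_blocks):
    """
    blocks: list of dicts like
        {"id": ..., "io_heat": recent I/O count, "disk_priority": 0-11}
    Returns the set of block ids that should live on the SSD tier.
    """
    def weighted_heat(block):
        # Priority 0 means "never use SSD"; 11 strongly favors SSD.
        if block["disk_priority"] == 0:
            return -1
        return block["io_heat"] * block["disk_priority"]

    ranked = sorted(blocks, key=weighted_heat, reverse=True)
    return {b["id"] for b in ranked[:ssd_capacity_blocks] if weighted_heat(b) > 0}

# Example: a disk tuned to priority 8 wins SSD space over a priority-2 disk
# with similar I/O heat, and a priority-0 disk never lands on SSD.
blocks = [
    {"id": "a1", "io_heat": 500, "disk_priority": 8},
    {"id": "b1", "io_heat": 520, "disk_priority": 2},
    {"id": "c1", "io_heat": 900, "disk_priority": 0},
]
print(place_blocks(blocks, ssd_capacity_blocks=1))  # {'a1'}
```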


The result of all of the design considerations for HC3 at Scale Computing is simplicity for efficiency, ease of use, and low cost. We’re different and we want to be. It’s as simple as that.


VDI with Workspot

One of the questions we often get for our HC3 platform is, “Can it be used for virtual desktop infrastructure (VDI)?” Yes, of course, it can. In addition to solutions we support like RDS or Citrix, we are very excited about our partnership with Workspot and their VDI 2.0 solution.  But first, I want to explain a bit about why we think VDI on HC3 makes so much sense.

VDI greatly benefits from simplicity in infrastructure. The idea behind VDI is to reduce both infrastructure management and cost by moving workloads from the front end to the back-end infrastructure. This makes it much easier to control resource utilization and manage images. HC3 provides that simple infrastructure: from unboxing to running VMs takes less than an hour. The entire firmware and software stack, including the hypervisor, can be updated, and capacity scaled out, without downtime. Your desktops will never be more highly available than on HC3. Simple, scalable, and available are the ideas HC3 is built on.

So why Workspot on HC3? Workspot brought together some of the original creators of VDI to reinvent it as a next-generation solution. The CTO of Workspot was one of the founding engineers who coded the VMware View VDI product! What makes it innovative, though? By leveraging cloud management infrastructure, Workspot simplifies VDI management for the IT generalist while supporting BYOD for the modern workplace. Workspot on HC3 can be deployed in under an hour, making it possible to deploy a full VDI solution in less than a day.


We did validation testing with Workspot on HC3 and were able to run 175 desktop VMs on a 3-node HC1150 cluster using Login VSI as a benchmark for performance. We also validated a 3-node HC4150 cluster with 360 desktops with similar results. You can see a more detailed description of the reference architecture here. By adding more nodes, and even additional clusters, the capacity can be expanded almost indefinitely, but more importantly, just as much as you need, when you need it. We think these results speak for themselves in positioning this solution as a perfect fit for the midmarket, where HC3 already shines.
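For back-of-the-napkin sizing only (this assumes linear scaling and the same desktop workload profile as the validation tests; real deployments should be sized against the reference architecture), the validated densities work out to roughly 58 desktops per HC1150 node and 120 per HC4150 node:

```python
# Back-of-the-napkin VDI sizing based on the validated densities above.
# Assumes linear scaling and similar desktop workloads; treat as a rough guide.

VALIDATED_DESKTOPS_PER_NODE = {
    "HC1150": 175 / 3,   # ~58 desktops per node (175 desktops on a 3-node cluster)
    "HC4150": 360 / 3,   # 120 desktops per node (360 desktops on a 3-node cluster)
}

def estimate_desktops(model, node_count, headroom=0.0):
    """Estimate desktop capacity, optionally reserving headroom (0.2 = 20%)."""
    per_node = VALIDATED_DESKTOPS_PER_NODE[model]
    return int(per_node * node_count * (1 - headroom))

print(estimate_desktops("HC1150", 4))                 # ~233 desktops
print(estimate_desktops("HC4150", 4, headroom=0.2))   # 384 desktops with 20% headroom
```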

Maybe you’ve been considering VDI but have been hesitant because of the added complexity of having to create even more traditional virtualization infrastructure in your datacenter.  It doesn’t have to be that way.  Workspot and Scale Computing are both in the business of reducing complexity and cost to make these solutions more accessible and more affordable.  Just take a look and you’ll see why we continue to do things differently than everyone else.

Click here for the press release.


What is Real Hyperconverged Infrastructure?


You’ve probably heard a multitude of things around hyperconvergence or hyperconverged infrastructure as these are becoming hot new industry buzzwords, but what do these terms really mean? Are vendors that say they have hyperconverged infrastructure really living up to the promises of true hyperconvergence or is it just marketing hype?

The terms “hyperconvergence” and “hyperconverged infrastructure” originated in a meeting between Jeff Ready and Jason Collier of Scale Computing and Arun Taneja of the Taneja Group. According to these three, the term was coined to mean the inclusion of a virtualization hypervisor into a complete infrastructure solution that included storage, compute, and virtualization. Some have thought that hyperconverged is synonymous with terms like ultraconverged or superconverged but that was not the intention.

If we hold this intended definition of hyperconvergence from its creators as the standard, what does it mean to be a real hyperconverged solution? Many solutions that call themselves hyperconverged rely on third party hypervisors such as VMware or Hyper-V for virtualization. The hypervisor software in that case is developed and licensed from a completely different vendor. That doesn’t seem to fit the definition of hyperconvergence at all.

Many vendors that use the hyperconverged label are merely peddling traditional virtualization architecture designed around traditional servers and SAN storage. This 3-2-1 architecture, which built a platform for virtualization from a minimum of 3 servers, 2 switches, and 1 SAN appliance, has been repackaged by some as a single-vendor solution without any real convergence at all, and certainly without an included hypervisor. It is important to differentiate these traditional architectures from the next-generation architectures that the term hyperconvergence was intended for.

Before hyperconvergence there was already a concept of converged infrastructure that combined server compute with storage as a single hardware platform onto which virtualization could be added. If a solution is not providing the hypervisor directly but relying on a third party hypervisor, it seems to fall back into the converged category, but not hyperconverged.

One of the key benefits of hyperconvergence is not having to rely on third-party virtualization solutions, and being independent of the costs, complexity, and management of these third parties. The idea was no more hypervisor software licensing to pay for and one less vendor to deal with for support and maintenance. Hyperconvergence should be a complete virtualization and infrastructure solution from a single vendor. Maybe using a third-party hypervisor was a necessity for some vendors to get their solutions to market within their funding constraints, but those solutions are not fulfilling the promise of true hyperconvergence for their customers.

Hypervisors have been around long enough that they are now a commodity. The idea of having to license and pay for a hypervisor as a separate entity should give IT solution purchasers pause as they look to implement new solutions. Cloud providers are not requiring customers to license hypervisors, so why would so-called hyperconvergence vendors do this? We are hearing more and more from IT managers about their displeasure over the “Virtualization Tax” they’ve been paying for too long. The hype cycle for virtualization is over, and users are ready to stop opening their checkbooks for an operating environment that should be included at no extra charge.



My Experience at MES16

I recently had the pleasure of working my first trade show with Scale Computing at the Midsize Enterprise Summit (MES) in Austin, TX. I’ve worked and attended many trade shows in the past but I was unsure of what to expect because A) new company and coworkers and B) MES has a boardroom format I hadn’t seen before. Let me give you a preview of my summary: It was amazing.

As a premier sponsor of the event, we had the opportunity to present our solution to all 13 of the boardrooms at the show. MES is a show that Scale Computing has attended regularly for years because of our midmarket focus. As we went from boardroom to boardroom, wheeling our live, running HC3 cluster, we encountered a great mix of attendees, from ardent fans and friends to familiar faces and new ones.

If you’ve been following along with Scale Computing, you know we’ve had a big year in terms of product releases and partnerships. We’ve rolled out, among other things, hybrid storage with flash tiering, DRaaS, VDI with Workspot, and we were able to announce a new partnership at MES with Information Builders for Scale Analytics. We were fortunate enough to have Ty Wang from Workspot with us to promote our joint VDI solution.


As Jeff Ready, our founder and CEO, presented in each boardroom, it was clear that the managers of midmarket IT understood our message. There was a definite sense that, like us, our peers working in IT administration saw the need for simple infrastructure delivering solutions like virtualization, VDI, DR, and analytics as an alternative to traditional virtualization and vendors like VMware.

In the evenings, when the attendees were able to visit our booth, it was encouraging to hear from so many IT directors and managers that they’re fed up with exactly the problems our HC3 solution solves, and that the prices we displayed in our booth exceeded their expectations. It is really a testament to our entire team that our product and message seemed to resonate so strongly.

I will also note that there was another vendor, whom I will not name, at the show who offers what they call a hyperconverged infrastructure solution. That vendor really brought their “A” game with a much higher level of sponsorship than Scale Computing. This being my first show, I expected us to be overshadowed by their efforts. I couldn’t have been more wrong. When the attendee voting was tallied at the awards ceremony, we walked away with three awards including Best of Show.

Scenes from the 2016 Midsize Enterprise Summit

It was only one amazing trade show in the grand scheme of things, but it has really cemented in my mind that Scale Computing is changing IT for the midmarket with simplicity, scalability, and availability at the forefront of thought.


Cloud Computing vs. Hyperconvergence

As IT departments look to move beyond traditional virtualization into cloud and hyperconverged infrastructure (HCI) platforms, they have a lot to consider. There are many types of organizations with different IT needs, and it is important to determine whether those needs align more with cloud or with HCI. Before I dig into the differences, let me go over the similarities.

Both cloud and HCI tend to offer a similar user experience highlighted by ease of use and simplicity. One of the key features of both is simplifying the creation of VMs by automatically managing the pools of resources.  With cloud, the infrastructure is all but transparent as the actual physical host where the VM is running is far removed from the user. With live migration capabilities and auto provisioning of resources, HCI can provide nearly the same experience.

As for storage, software-defined storage pooling has made storage management practically as transparent in HCI as it is in cloud. In many ways, HCI is nearly a private cloud. Without the complexity of a traditional underlying virtualization architecture, HCI makes infrastructure management turnkey and lets administrators focus on workloads and applications, just like the cloud, while keeping everything on-prem rather than managed by a third party.

Still, there are definite differences between cloud and HCI, so let’s get to those. I like to approach these with a series of questions to help guide the choice between cloud and on-prem HCI.

Is your business seasonal?

  • If your business is seasonal, the pay-as-you-go Opex pricing model of cloud might make more sense, as might the bursting ability of cloud. If you need lots of computing power but only during short periods of the year, cloud might be best. If your business follows a more typical schedule of steady business throughout the year with some seasonal bumps, then an on-prem Capex investment in HCI might be the best option.

Do you already have IT staff?

  • If you already have IT staff managing an existing infrastructure that you are looking to replace, an HCI solution will be both easy to implement and will allow your existing staff to change focus from infrastructure management to implementing better applications, services, and processes. If you are currently unstaffed for IT, cloud might be the way to go since you can get a number of cloud based application services for users with very little IT administration needed.  You may need some resources to help make a variety of these services work together for your business, but it will likely be less than with an on prem solution.

Do you need to meet regulatory compliance on data?

  • If so, you are going to need to look into the implications of your data and services hosted and managed off site by a third party. You will be reliant on the cloud provider to provide the necessary security levels that meet compliance. With HCI, you have complete control and can implement any level of security because the solution is on prem.

Do you favor Capex or Opex?

  • Pretty simple here. Cloud is Opex. HCI can be Capex and is usually available as Opex as well through leasing options. The cloud Opex is going to be less predictable because many of the costs are based on dynamic usage, whereas the Opex with HCI should be completely predictable with a monthly leasing fee. Consider further that the Opex for HCI is usually in the form of lease-to-own, so it drops off dramatically once the lease period ends, as opposed to cloud Opex, which is perpetual (see the toy cost sketch after the cheat sheet below).

Can you rely on your internet connection?

  • Cloud is 100% dependent on internet connectivity so if your internet connection is down, all of your cloud computing is unavailable. The internet connection becomes a single point of failure for cloud. With HCI, internet connection will not affect local access to applications and services.

Do you trust third party services?

  • If something goes wrong with cloud, you are dependent on the cloud provider to correct the issue. What if your small or medium sized cloud provider suddenly goes out of business? Whatever happens, you are helpless, waiting, like an airline passenger waiting on the tarmac for a last minute repair. With HCI, the solution is under your control and you can take action to get systems back online.

Let me condense these into a little cheat sheet for you.

Question                                              Cloud    HCI
Is your business seasonal?                            Yes      No
Do you have IT staff?                                 No       Yes
Do you need to meet regulatory compliance on data?    No       Yes
Do you favor Capex or Opex?                           Opex     Capex/Opex
Can you rely on your internet connection?             Yes      No
Do you trust third party services?                    Yes      No
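
To make the Capex/Opex question a bit more concrete, here is a toy multi-year cost sketch. Every dollar figure and the lease term below are made-up assumptions for illustration; substitute your own quotes.

```python
# Toy cost comparison: perpetual cloud Opex vs. lease-to-own HCI Opex.
# Every number below is a made-up assumption; plug in real quotes.

CLOUD_MONTHLY = 4000        # assumed average cloud bill (usage-based, so it varies)
HCI_LEASE_MONTHLY = 5000    # assumed HCI lease payment
HCI_LEASE_TERM_MONTHS = 36  # lease-to-own term; cost drops off after this
HCI_SUPPORT_MONTHLY = 500   # assumed ongoing support/maintenance after the lease

def cumulative_cost(months):
    cloud = CLOUD_MONTHLY * months
    hci = (HCI_LEASE_MONTHLY * min(months, HCI_LEASE_TERM_MONTHS)
           + HCI_SUPPORT_MONTHLY * max(0, months - HCI_LEASE_TERM_MONTHS))
    return cloud, hci

for years in (1, 3, 5):
    cloud, hci = cumulative_cost(years * 12)
    print(f"{years} yr: cloud ${cloud:,}  vs  HCI ${hci:,}")
```

With these made-up numbers, cloud is cheaper in year one, while HCI pulls ahead once the lease-to-own period ends; your own quotes determine where the crossover falls.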

One last consideration that I don’t like to put into the question category is the ability to escape the cloud if it doesn’t work out. Why don’t I like to make it a question? Maybe I just haven’t found the right way to ask it without making cloud sound like some kind of death trap for your data, and I’m not trying to throw cloud under the bus here. Cloud is a good solution where it fits. That being said, it is still a valid consideration.

Most cloud providers have great onboarding services to get your data to the cloud more efficiently but they don’t have any equivalent to move you off.  It is not in their best interest. Dragging all of your data back out of the cloud over your internet connection is not a project anyone would look forward to. If all of your critical data resides in the cloud, it might take a while to get it back on prem. With HCI it is already on prem so you can do whatever you like with it at local network speeds.

I hope that helps those who have been considering a choice between cloud and HCI for their IT infrastructure. Until next time.


It’s Not Easy Being Different


Thinking outside the box. Paradigm shift. Innovation. As tired as these words and phrases are, they are ideas that we still strive to embody in technology development. But what happens when people or companies actually embrace these ideas in principle and create solutions that don’t fit into the industry molds?

At Scale Computing, we challenged the traditional thinking on infrastructure design and architecture by putting forward a solution based on the idea that IT infrastructure should be so simple that anyone could manage it, with any level of experience. This idea goes against decades of IT managers being put through weeks of training and certification to manage servers, storage, and most recently, virtualization. Our idea is that infrastructure should not need multiple control panels filled with nerd knobs that must be monitored ad nauseam, but that the expertise of IT administrators should be focused on applications and business processes.

Our solution, the HC3 virtualization platform, targets midmarket customers because that is where the complexity of infrastructure architecture hits administrators the hardest. In larger enterprise IT departments, there can be multiple teams dedicated to the different silos of infrastructure maintenance and management, but in the midmarket there may be as few as one administrator in charge of infrastructure. Our founders are former IT administrators who understand the challenges and pains of managing infrastructure.

Very simply, we want to eliminate complexity from infrastructure in IT where it causes the most disruption. Complexity adds a number of costs including  downtime due to failure, training and consulting fees for expertise, and extra labor costs for implementation, management, and maintenance. In order to maximize the benefit of simplicity for our target market, we specifically designed our solution without the extra complexity required by larger enterprise organizations.  The result is a simple hardware and virtualization platform that can be implemented with running VMs in under an hour, is fully redundant and resilient against hardware failures, virtually eliminates both planned and unplanned downtime, and can be scaled out quickly and easily. Basically, you get a nearly turnkey infrastructure that you rarely have to manage.

This whole idea seems very straightforward to us here at Scale Computing and our customers certainly seem to get it. As for the rest of the industry, including the analysts, our focus on the midmarket seems to be viewed as a liability. We purposefully do not have all of the features and complexity of solutions that target the larger enterprise customers. The HC3 platform does not scale out to the same level as other solutions that target the larger enterprise. We deliberately do not support VMware, with the specific goal of not burdening our customers with third-party hypervisor licensing. We include the hypervisor at no additional charge.

We are different for a reason, but to the analysts those differences do not line up with the checklists they have developed over years of looking at enterprise solutions. Because of that enterprise focus, the simplicity of HC3 does not register with analysts as visionary or forward thinking. We’ll probably never find favor in analyst reviews unless we sell into a Fortune 500 customer, and that just isn’t on our horizon. Instead we’ll keep focusing on the solution we provide for the midmarket to simplify infrastructure, disaster recovery, VDI, and distributed enterprise.

Are we an underdog in our market?  Maybe.  But you could probably say the same about the companies we target with our solutions. They aren’t the industry giants, but rather the smaller guys who are driving local economic growth, who are nimble and innovative like we are, and who appreciate the value of a solution that actually solves the real problems that have plagued IT departments for decades.  It’s not easy being different but no one said starting an IT revolution would be easy.


Back to School – Infrastructure 101 – Part 3

This is my third and final post in this series. I’ve covered SAN and server virtualization and now I’d like to share my thoughts on the challenges of SMB IT shops vs enterprise IT.

To start, I should probably give some context on the size of an SMB IT shop. Since we are talking about infrastructure, I am really referring to IT departments that have fewer than a handful of administrators assigned to infrastructure, with the most common IT shop allocating only one or two people to it. Since the makeup of businesses varies so much in terms of numbers of IT users, external services, and so on, the lines do get a little blurred. It is not a perfect science, but here’s hoping my points will be clear enough.


Small and medium businesses, sometimes referred to as small and midmarket, have some unique challenges compared to larger enterprise customers. One of those challenges is being a jack of all trades, master of none. Now, there are some very talented and dedicated administrators out there who can master many aspects of IT over time, but often the day-to-day tasks of keeping the IT ship afloat make it impossible for administrators to gain expertise in any particular area. There just isn’t the budget or training time to have enough expertise on staff. Without a large team bringing together many types of expertise, administrators must make use of technology solutions that help them do more with less.

Complexity is the enemy of the small IT department during all phases of the solution lifecycle including implementation, management, and maintenance. Complex solutions that combine a number of different vendors and products can be more easily managed in the enterprise but become a burden on smaller IT shops that must stretch their limited knowledge and headcount. Projects then turn into long nights and weekends and administrators are still expected to manage normal business hour tasks. Some administrators use scripting to automate much of their IT management and end up with a highly customized environment that becomes hard to migrate away from when business needs evolve.

Then there is the issue of brain drain. Smaller IT shops cannot easily absorb the loss of key administrators who may be the only ones intimately familiar with how all of the systems interconnect and operate. When those administrators leave, sometimes suddenly, they leave a huge gap in knowledge that cannot easily be filled. This is much less of a problem in the enterprise, where an individual administrator is one of a team and has many others who can fill in that gap. The loss of a key administrator in the SMB can be devastating to IT operations going forward.

To combat brain drain, SMB IT shops benefit from fewer vendors and products to simplify the IT environment, requiring less specialized training and letting a new administrator come up to speed quickly on the technology in use. High levels of automation built into the vendor solution for common IT tasks, along with simple, unified management tools, help the transition from one administrator to the next.

For SMB, budgets can vary wildly from shoestring on up. The idea of doing more with less is much more on the minds of SMB administrators. SMBs are not as resilient to unexpected costs associated with IT disasters and other types of unexpected downtime. Support is one of the first lines of insurance for SMBs, and dealing with multiple vendors and the support run-around between them can be paralyzing at critical moments, especially for SMBs that could not budget for higher levels of support. Having resilient, reliable infrastructure with responsive, premium support can make a huge difference in protecting SMBs from the types of failure and disaster that could be critical to business success.

Ok, enough about the SMB, time to  discuss the big guys.


Both SMB and enterprise organizations have processes, although the level of reliance on process is much higher in the enterprise. An SMB organization can typically adapt its processes easily and quickly to match technology, whereas an enterprise organization can be much more fixed in its processes, so technology must be changed to match the process. The enterprise therefore employs a large number of administrators, developers, consultants, and other experts to create complex systems to support its business processes.

The enterprise can withstand more complexity because they are able to have more experts on staff who can focus management efforts on single silos of infrastructure such as storage, servers, virtualization, security, etc.  With multiple administrators assigned to each silo, there is guaranteed management coverage to deal with any unexpected problems.  Effectively, the IT department (or departments) in the enterprise have a high combined level of expertise and manpower, or have the budget to bring in outside consultants and service providers to fill these gaps as a standard practice.

Unlike with SMB, simplicity is not necessarily a benefit to the enterprise, since they need the flexibility to adapt to business process. Infrastructure can therefore be a patchwork of systems serving different needs, from high-performance computing and data warehousing to data distribution and disaster recovery. Solutions for these enterprise operations must be extensible and adaptable to the user’s processes to meet the compliance and business needs of these organizations.

Enterprise organizations are usually big enough that they can tolerate different types of failures better than SMB, although as we have seen in recent news, even companies like Delta Airlines are not immune to near catastrophic failures.  Still, disk failures or server failures that could bring an SMB to a standstill might barely cause a ripple in a large enterprise given the size of their operations.


The SMB benefits from infrastructure simplicity because it helps eliminate a number of challenges and unplanned costs.  For the enterprise, the focus is more on flexibility, adaptability, and extensibility where business processes reign supreme. IT challenges can be more acute in the SMB simply because the budgets and resources are more limited in both headcount and expertise. Complex infrastructure designed for the enterprise is not always going to translate into effective or viable solutions for SMB. Solution providers need to be aware that the SMB may need more than just a scaled down version of an enterprise solution.


Back to School – Infrastructure 101 – Part 2

I covered SAN technology in my last Infrastructure 101 post, so for today I’m going to cover server virtualization and maybe delve into containers and cloud.

Server virtualization as we know it now is based on hypervisor technology. A hypervisor is an operating system that allows sharing of physical computing resources such as networking, CPU, RAM, and storage among multiple virtual machines (sometimes called virtual servers). Virtual machines replaced traditional physical servers that each had their own physical chassis with storage, RAM, networking, and CPU. To understand the importance of hypervisors, let’s look at a bit of history.

Early on, computing was primarily done on mainframes, which were monolithic machines designed to provide all of the computing necessary for an organization. They were designed to share resources among various parallel processes to accommodate multiple users. As computing needs grew, organizations began to move away from the monolithic architecture of the mainframe toward hosting multiple physical servers that were less expensive and that would each run one or more applications for multiple users. Physical servers could range in size and capacity from very large, rivaling mainframes, down to very small, resembling personal computers.

While mainframes never disappeared completely, the flexibility in cost and capacity of physical servers made them the infrastructure of choice across all industries. Unfortunately, as computing needs continued to grow, organizations began needing more and more servers, and more administrators to manage them. The size of server rooms, along with their power and cooling needs, was honestly becoming ridiculous.

A number of technologies emerged resembling what we now call server virtualization, allowing the compute and storage resources of a single physical box to be divided among different virtualized servers, but those never became mainstream. Virtualization didn’t really take off until hypervisor technology for the x86 platform came around, which happened at the same time as other platforms were declining in the server market.

Initially, virtualization was not adopted for production servers but was instead used extensively for testing and development, because it lacked some of the performance and stability needed for production. The widespread use for test and dev eventually led to improvements that made administrators confident in its use on production servers. The combination of performance improvements and clustering to provide high availability for virtual machines opened the door to widespread adoption for production servers.

The transition to virtualization was dramatic, reducing server rooms that once housed dozens and dozens of server racks to only a handful of server racks for the host servers and storage on which all of the same workloads ran. It is now difficult to find an IT shop that is still using physical servers as their primary infrastructure.

While there were many hypervisors battling to become the de facto solution, a number of them were widely adopted, including Xen and KVM (both open source), Hyper-V, and VMware ESX/ESXi, which took the lion’s share of the market. Those hypervisors and their derivatives continue to battle for market share today, after more than a decade. Cloud platforms have risen, built over each of these hypervisors, adding to the mystery of whether a de facto hypervisor will ever emerge. But maybe it no longer matters.

Virtualization has now become a commodity technology. It may not seem so to VMware customers who are still weighing various licensing options, but server virtualization is pretty well baked and the innovations have shifted to hyperconvergence, cloud, and container technologies. The differences between hypervisors are few enough that the buying decisions are often based more on price and support than technology at this point.

This commoditization of server virtualization does not necessarily indicate any kind of decline in virtualization anytime soon, but rather a shift in thinking from traditional virtualization architectures. While cloud is driving innovation in multi-tenancy and self-service, hyperconvergence is fueling innovation in how hardware and storage can be designed and used more efficiently by virtual machines (as per my previous post about storage technologies).

IT departments are beginning to wonder if the baggage of training and management infrastructure for server virtualization is still a requirement or if, as a commodity, server virtualization should no longer be so complex. Is being a virtualization expert still a badge of honor, or is it now a default expectation for IT administrators? And with hyperconvergence and cloud technologies simplifying virtual machine management, what level of expertise is really still required?

I think the main takeaway from the commoditization of server virtualization is that as you move to hyperconvergence and cloud platforms, you shouldn’t need to know what the underlying hypervisor is, nor should you care, and you definitely shouldn’t have to worry about licensing it separately. They say you don’t understand something unless you can explain it to a 5 year old. It is time for server virtualization to be easy enough that a 5 year old can provision virtual machines instead of requiring a full-time, certified virtualization expert. Or maybe even a 4 year old.
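To put a little code behind that claim, here is what provisioning could look like when the hypervisor is treated as an invisible commodity. This is a hypothetical wrapper for illustration, not any real vendor’s or library’s API:

```python
# Hypothetical illustration of how simple VM provisioning could be when the
# hypervisor is an invisible commodity. This is not a real vendor API.

def create_vm(name, cpus, ram_gb, disk_gb, template=None):
    """One call: the platform would pick a node, carve storage from the pool,
    and boot the VM. No hypervisor expertise required of the caller."""
    spec = {"name": name, "cpus": cpus, "ram_gb": ram_gb,
            "disk_gb": disk_gb, "template": template}
    print(f"Provisioning VM: {spec}")  # stand-in for the platform doing the work
    return spec

create_vm("file-server", cpus=4, ram_gb=16, disk_gb=500,
          template="windows-server-2016")
```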
