
Back to School – Infrastructure 101 – Part 2

I covered SAN technology in my last Infrastructure 101 post, so for today I’m going to cover server virtualization and maybe delve into containers and cloud.

Server virtualization as we know it now is based on hypervisor technology. A hypervisor is a software layer that allows physical computing resources such as networking, CPU, RAM, and storage to be shared among multiple virtual machines (sometimes called virtual servers). Virtual machines replaced traditional physical servers, each of which had its own physical chassis with storage, RAM, networking, and CPU. To understand the importance of hypervisors, let’s look at a bit of history.
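
To make the idea of resource sharing a little more concrete, here is a minimal sketch (with made-up host and VM sizes, not tied to any particular hypervisor) of a host's physical resources being divided among virtual machines:

    # Minimal sketch (hypothetical values): a host's physical resources carved
    # into virtual machines. Real hypervisors schedule and overcommit these
    # resources dynamically; this only illustrates the idea of sharing.

    host = {"cpu_cores": 32, "ram_gb": 256, "storage_gb": 4000}

    vms = [
        {"name": "web01",  "vcpu": 4, "ram_gb": 16, "disk_gb": 100},
        {"name": "db01",   "vcpu": 8, "ram_gb": 64, "disk_gb": 500},
        {"name": "file01", "vcpu": 2, "ram_gb": 8,  "disk_gb": 1000},
    ]

    def remaining(host, vms):
        """Resources left on the host after all VM allocations."""
        return {
            "cpu_cores": host["cpu_cores"] - sum(v["vcpu"] for v in vms),
            "ram_gb": host["ram_gb"] - sum(v["ram_gb"] for v in vms),
            "storage_gb": host["storage_gb"] - sum(v["disk_gb"] for v in vms),
        }

    print(remaining(host, vms))  # {'cpu_cores': 18, 'ram_gb': 168, 'storage_gb': 2400}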

Early on, computing was primarily done on mainframes, which were monolithic machines designed to provide all of the computing necessary for an organization. They were designed to share resources among various parallel processes to accommodate multiple users. As computing needs grew, organizations began to move away from the monolithic architecture of the mainframe to hosting multiple physical servers that were less expensive and that would run one or more applications for multiple users. Physical servers could range in size and capacity from very large, rivaling mainframes, down to very small, resembling personal computers.

While mainframes never disappeared completely, the flexibility in cost and capacity of physical servers made them an infrastructure of choice across all industries. Unfortunately, as computing needs continued to grow, organizations began needing more and more servers, and more administrators to manage them. The size of server rooms, along with their power and cooling needs, was honestly becoming ridiculous.

A number of technologies emerged that resembled what we now call server virtualization, allowing the compute and storage resources of a single physical box to be divided among different virtualized servers, but those never became mainstream. Virtualization didn’t really take off until hypervisor technology for the x86 platform came around, which happened at the same time as other platforms were declining in the server market.

Initially, virtualization was not adopted for production servers but instead was used extensively for testing and development, because it lacked some of the performance and stability production workloads demand. The widespread use for test and dev eventually led to improvements that made administrators confident with its use on production servers. The combination of performance improvements along with clustering to provide high availability for virtual machines opened the door for widespread adoption for production servers.

The transition to virtualization was dramatic, reducing server rooms that once housed dozens and dozens of server racks to only a handful of server racks for the host servers and storage on which all of the same workloads ran. It is now difficult to find an IT shop that is still using physical servers as their primary infrastructure.

While many hypervisors battled to become the de facto solution, a handful were widely adopted, including Xen and KVM (both open source), Hyper-V, and VMware ESX/ESXi, which took the lion’s share of the market. Those hypervisors or their derivatives continue to battle for market share today, after more than a decade. Cloud platforms have risen, built over each of these hypervisors, adding to the question of whether a de facto hypervisor will ever emerge. But maybe it no longer matters.

Virtualization has now become a commodity technology. It may not seem so to VMware customers who are still weighing various licensing options, but server virtualization is pretty well baked and the innovations have shifted to hyperconvergence, cloud, and container technologies. The differences between hypervisors are few enough that the buying decisions are often based more on price and support than technology at this point.

This commoditization of server virtualization does not necessarily indicate any kind of decline in virtualization anytime soon, but rather a shift in thinking from traditional virtualization architectures. While cloud is driving innovation in multi-tenancy and self-service, hyperconvergence is fueling innovation in how hardware and storage can be designed and used more efficiently by virtual machines (as per my previous post about storage technologies).

IT departments are beginning to wonder if the baggage of training and management infrastructure for server virtualization is still a requirement or if, as a commodity, server virtualization should no longer be so complex. Is being a virtualization expert still a badge of honor, or is it now a default expectation for IT administrators? And with hyperconvergence and cloud technologies simplifying virtual machine management, what level of expertise is really still required?

I think the main takeaway from the commoditization of server virtualization is that as you move to hyperconvergence and cloud platforms, you shouldn’t need to know what the underlying hypervisor is, nor should you care, and you definitely shouldn’t have to worry about licensing it separately. They say you don’t understand something unless you can explain it to a five-year-old. It is time for server virtualization to be easy enough that a five-year-old can provision virtual machines instead of requiring a full-time, certified virtualization expert. Or maybe even a four-year-old.


HEAT Up I/O with a Flash Retrofit

If your HC3 workloads need better performance and faster I/O, you can soon take advantage of flash storage without having to replace your existing cluster nodes. Scale Computing is rolling out a service to help you retrofit your existing HC2000/2100 or HC4000/4100 nodes with flash solid state drives (SSDs) and update your HyperCore version to start using hybrid flash storage without any downtime. You can get the full benefits of HyperCore Enhanced Automated Tiering (HEAT) in HyperCore v7 when you retrofit with flash drives.

You can read more about HEAT technology in my blog post, “Turning Hyperconvergence to 11.”

Now, before you start ordering your new SSD drives for flash storage retrofit, let’s talk about the new storage architecture designed to include flash. You may already be wondering how much flash storage you need and how it can be divided among the workloads that need it, or even how it will affect your future plans to scale out with more HC3 nodes.

The HC3 storage system uses wide striping across all nodes in the cluster to provide maximum performance and availability in the form of redundancy across nodes.  With all spinning disks, any disk was a candidate for redundant writes from other nodes.  With the addition of flash, redundancy is intelligently segregated between flash and spinning disk storage to maximize flash performance.   

A write to a spinning disk will be redundantly written to a spinning disk on another node, and a write to an SSD will be redundantly written to an SSD on another node. Therefore, just as you need at least three nodes of storage and compute resources in an HC3 cluster, you need a minimum of three nodes with SSD drives to take advantage of flash storage.
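
As a rough sketch of that tier-aware placement rule (hypothetical objects, not HyperCore code), the redundant copy of a write simply has to land on the same tier on a different node:

    # Minimal sketch (not HyperCore code): pick a redundancy target for a write
    # on the same storage tier, but on a different node, as described above.

    import random

    class Disk:
        def __init__(self, node, tier):
            self.node = node   # which cluster node owns this disk
            self.tier = tier   # "ssd" or "hdd"

    def pick_replica_target(primary, cluster_disks):
        """Pick a disk on another node in the same tier for the redundant write."""
        candidates = [d for d in cluster_disks
                      if d.tier == primary.tier and d.node != primary.node]
        if not candidates:
            raise RuntimeError("no same-tier disk available on another node")
        return random.choice(candidates)

    # Hypothetical three-node cluster where every node has one SSD and one HDD
    # (HC3 itself requires a minimum of three SSD nodes for flash storage).
    disks = [Disk(1, "ssd"), Disk(1, "hdd"),
             Disk(2, "ssd"), Disk(2, "hdd"),
             Disk(3, "ssd"), Disk(3, "hdd")]

    target = pick_replica_target(disks[0], disks)   # primary write lands on node 1's SSD
    print(target.tier, target.node)                 # always "ssd", and never node 1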

Consider also, with retrofitting, that you will be replacing an existing spinning disk drive with the new SSD. The new SSD may be a different capacity than the disk it is replacing, which might affect your overall storage pool capacity. You may already be in a position to add overall capacity, in which case larger SSD drives may be the right fit, or adding an additional flash storage node along with the retrofit may be the right choice. You can reach the three-node minimum of SSD nodes through any combination of retrofitting and adding new SSD-tiered nodes to the cluster.

Retrofitting existing clusters is offered as a service that includes our Scale Computing experts helping you assess your storage needs and determine the best plan for incorporating flash into your existing HC3 cluster. Whether you have a small, medium, or large cluster implementation, we will assist you in both planning and implementation to avoid any downtime or disruption.

However you decide to retrofit and implement flash storage in your HC3 cluster, you will immediately begin seeing the benefits as new data is written to high performing flash and high I/O blocks from spinning disk are intelligently moved to flash storage for better performance. Furthermore, you have full control of how SSD is used on a per virtual disk basis. You’ll be able to adjust the level of SSD usage on a sliding scale to take advantage of both flash and spinning disk storage where you need each most. It’s the flash storage solution you’ve been waiting for.
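
To illustrate the per-virtual-disk control (the names, numbers, and weighting math here are invented for illustration; the real HEAT tuning is exposed as a simple slider per virtual disk), a priority value per disk might translate into how much of the flash tier its hot blocks can occupy:

    # Illustrative sketch only (hypothetical names and scale): a per-virtual-disk
    # flash priority weight deciding how aggressively each disk's hot blocks are
    # placed on SSD. The weighting math here is invented for illustration.

    flash_priority = {          # 0 = avoid SSD, higher = favor SSD
        "sql-data.vdisk": 8,
        "file-share.vdisk": 2,
        "archive.vdisk": 0,
    }

    def ssd_share(vdisk, total_ssd_gb=1000):
        """Split a hypothetical SSD budget proportionally to each disk's priority."""
        total_weight = sum(flash_priority.values()) or 1
        return total_ssd_gb * flash_priority[vdisk] / total_weight

    for name in flash_priority:
        print(name, round(ssd_share(name)), "GB of SSD headroom")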

Don’t hesitate to contact your Scale Computing representatives to ask for more information on HC3 flash storage today.


Back to School – Infrastructure 101

As a back to school theme, I thought I’d share my thoughts on infrastructure over a series of posts.  Today’s topic is SAN.

Storage Area Networking (SAN) is a technology that solved a real problem that existed a couple decades ago. SANs have been a foundational piece of IT infrastructure architecture for a long time and have helped drive major innovations in storage.  But how relevant are SANs today in the age of software-defined datacenters? Let’s talk about how we have arrived at modern storage architecture.

First, disk arrays were created to house more storage than could fit into a single server chassis. Storage needs were outpacing the capacity of individual disks and the limited disk slots available in servers. But adding more disk to a single server led to another issue: available storage capacity was trapped within each server. If Server A needed more storage and Server B had a surplus, the only way to redistribute was to physically remove a disk from Server B and add it to Server A. This was not always easy, because it might mean breaking up a RAID configuration, or there simply might not be controller capacity for the disk on Server A. It usually meant ending up with a lot of over-provisioned storage, ballooning the budget.

SANs solved this problem by making a pool of storage accessible to servers across a network. It was revolutionary because it allowed LUNs to be created and assigned more or less at will to servers across the network. The network was fibre channel in the beginning because ethernet LAN speeds were not quite up to snuff for disk I/O. It was expensive and you needed fibre channel cards in each server you needed connected to the SAN, but it still changed the way storage was planned in datacenters.

Alongside SAN, you had Network Attached Storage (NAS), which offered even more flexibility than SAN but lacked the full block storage protocol capabilities of SAN or Direct Attached Storage. Still, NAS rose as a file sharing solution next to SAN because it was less expensive and used ethernet.

The next major innovation was iSCSI, which originally debuted before its time. The iSCSI protocol allowed SANs to be used over standard ethernet connections. Unfortunately, ethernet networks took a little longer to become fast enough for iSCSI to take off, but it eventually started to replace fibre channel networks for SAN as 1Gb and 10Gb networks became accessible. With iSCSI, SANs became even more accessible to all IT shops.

The next hurdle for SAN technology was self-inflicted. The problem was that an administrator might now be managing two or more SANs on top of NAS and server-side Direct Attached Storage (DAS), and these different components did not necessarily play well together. There were so many SAN and NAS vendors using proprietary protocols and management tools that storage was once again a burden on IT. Then along came virtualization.

The next innovation was virtual SAN technology. There were two virtualization paths that affected SANs. One path was trying to solve the storage management problem I had just mentioned, and the other path was trying to virtualize the SAN within hypervisors for server virtualization. These paths eventually crossed as virtualization became the standard.

Virtual SAN technology initially grew from outside SAN, not within, because SAN was big business and virtual SAN technology threatened traditional SAN. When it came to server virtualization, though, virtualizing storage was a do-or-die imperative for SAN vendors. Outside the SAN vendors, software makers saw the possibility of using iSCSI to place a layer of virtualization over SAN, NAS, and DAS and create a single, virtual pool of storage. This was a huge step forward in storage accessibility, but it came at a cost: the virtual SAN technology had to be purchased on top of the existing SAN infrastructure, and efficiency suffered because it effectively added one or, in some cases, several more layers of I/O management and protocols to what already existed.

When SANs (and NAS) were integrated into server virtualization, it was primarily done with Virtual Storage Appliances (VSAs): virtual servers running the virtual SAN software on top of the underlying SAN architecture. With at least one of these VSAs per virtualization host, the virtual SAN architecture consumed a lot of compute resources in the virtual infrastructure.

So virtual SANs were a mess. If it hadn’t been for faster CPUs with more cores, cheaper RAM, and flash storage, virtual SANs would have been a non-starter based on I/O efficiency. Virtual SANs seemed to be the way things were going but what about that inefficiency?  We are now seeing some interesting advances in software-defined storage that provide the same types of storage pooling as virtual SANs but without all of the layers of protocol and I/O management that make it so inefficient.

With DAS, servers have direct access to the hardware layer of the storage, providing the most efficient I/O path outside of raw storage access. The direct attached methodology can be, and is being, used for storage pooling by some storage technologies like HC3 from Scale Computing. All of the baggage that virtual SANs brought from traditional SAN architecture, and the multiple layers of protocol and management they added, doesn’t need to exist in a software-defined storage architecture that doesn’t rely on old SAN technology.

SAN was once a brilliant solution to a real problem and had a good run of innovation and enabling the early stages of server virtualization. However, SAN is not the storage technology of the future and with the rise of hyperconvergence and cloud technologies, SAN is probably seeing its sunset on the horizon.


Don’t Double Down on Infrastructure – Scale Out as Needed

There has long been a philosophy in IT infrastructure that whenever you add capacity, you add plenty of room to grow into. This idea is based on traditional architecture that was complex, consisting of many disparate systems held together by the rigorous management of the administrators. The process of scaling out capacity has been a treacherous one that takes weeks or months of stress-filled nights and weekends. These projects are so undesirable that administrators, and anyone else involved, would rather spend more than they would like, buying more capacity than they need, to put off scaling out again for as long as possible.

There are a number of reasons why IT departments may need to scale out. Hopefully it is because the business is growing, which usually coincides with increased budgets. It could be that business needs have shifted to require more IT services, demanding more data, more computing, and thus more capacity. It could be that the current infrastructure was under-provisioned in the first place and creating more problems than solutions. Whatever the case, sooner or later, everyone needs to scale out.

The traditional planning process for scaling out involves first looking at where the capacity is bottlenecking. It could be storage, CPU, RAM, networking, or any level of caching or bussing in between. More than likely it is not just one of these but several, which causes many organizations to simply hit the reset button and replace everything, if they can afford it, that is. Then they implement the new infrastructure only to go through the same process again a few years down the line. Very costly. Very inefficient.

Without replacing the whole infrastructure, administrators must look to the various pieces of their infrastructure that might need to be refreshed or upgraded. This process can seem like navigating a minefield of unforeseen consequences. Maybe you want to swap out disks in the SAN for faster, larger disks. Can the storage controllers handle the increased speed and capacity? What about the network? Can it handle the increased I/O from faster and deeper storage? Can the CPUs handle it? Good administrators can identify at least some of these dependencies during planning, but it can often take a team of experts to fully understand the complexities, and sometimes only through testing and some trial and error.

Exhausted yet? Fortunately, this process of scaling out has been dramatically simplified with hyperconverged infrastructure.  With a clustered, appliance-based architecture, capacity can be added very quickly. For example, with HC3 from Scale Computing, a new appliance can be added to a cluster within minutes, with resources then immediately available, adding RAM, CPU, and storage capacity to the infrastructure.

HC3 even lets you mix and match different appliances in the cluster so that you can add just the capacity you need. Adding the new appliance to the cluster (where it is then called a “node”, of course) is as simple as racking and cabling it and then assigning it with network settings and pointing it at the cluster. The capacity is automatically absorbed into the cluster and the storage added seamlessly to the overall storage pool.
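
A loose sketch of what that absorption looks like conceptually (hypothetical objects and capacities, not the HC3 API) is a cluster whose resource pool is simply the sum of its nodes:

    # Minimal sketch (hypothetical objects, not the HC3 API): adding a node to a
    # cluster and absorbing its resources into the shared pool.

    class Node:
        def __init__(self, ip, cpu_cores, ram_gb, storage_tb):
            self.ip, self.cpu_cores, self.ram_gb, self.storage_tb = ip, cpu_cores, ram_gb, storage_tb

    class Cluster:
        def __init__(self):
            self.nodes = []

        def add_node(self, node):
            """Join a node; its capacity immediately counts toward the pool."""
            self.nodes.append(node)

        def pool(self):
            return {
                "cpu_cores": sum(n.cpu_cores for n in self.nodes),
                "ram_gb": sum(n.ram_gb for n in self.nodes),
                "storage_tb": sum(n.storage_tb for n in self.nodes),
            }

    cluster = Cluster()
    for ip in ("10.0.0.11", "10.0.0.12", "10.0.0.13"):
        cluster.add_node(Node(ip, cpu_cores=16, ram_gb=128, storage_tb=8))

    cluster.add_node(Node("10.0.0.14", cpu_cores=24, ram_gb=256, storage_tb=12))  # scale out
    print(cluster.pool())  # {'cpu_cores': 72, 'ram_gb': 640, 'storage_tb': 36}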

This all means that with hyperconverged infrastructure, you do not need to buy capacity for the future right now. You can get just what you need now (with a little cushion of course), and scale out simply and quickly when you need to in the future. The traditional complexity of infrastructure architecture is now the real bottleneck of capacity scale out.  Hyperconverged Infrastructure is the solution.


7 Reasons Why I Work at Scale Computing

I came from a background in software that has spanned software testing, systems engineering, product marketing, and product management, and this year my career journey brought me to Scale Computing as the Product Marketing Manager. During the few months I have been with Scale, I’ve been amazed by the hard work and innovation embodied in this organization.  Here are some of the reasons I joined Scale and why I love working here.

1 – Our Founding Mission

Our founders are former IT administrators who understand the challenges faced by IT departments with limited budgets and staff. They wanted to reinvent IT infrastructure to solve those challenges and get IT focused on applications. That’s why they helped coin the term “hyperconverged infrastructure”.

2 – Focus on the Administrator

Our product family, HC3, was always designed to address the needs of datacenters managed by as few as one administrator, delivering the features and efficiency of enterprise solutions on any budget. HC3 scales from small to enterprise because its roots are planted in the needs of the individual administrator focused on keeping applications available.

3 – Second to None Support

I have a firm belief that good support is the cornerstone of successful IT solutions. Our world-class support includes not only hardware replacement but also 24/7/365 phone support from qualified experts. We don’t offer any other level of support because we believe every customer, no matter their size or budget, deserves the same level of support.

4 – 1500+ Customers, 5500+ Installs

Starting in 2008 and bringing HC3 to market in 2012, we’ve sold to customers in nearly every industry, including manufacturing, education, government, healthcare, finance, hotel/restaurant, and more. Customer success is our driving force. Our solution is driving that success.

5 – Innovative Technology

We designed the HC3 solution from the ground up.  Starting with the strengths of open source KVM virtualization, we developed our own operating system called HyperCore which includes our own block access, direct attached storage system with SSD tiering for maximum storage efficiency. We believe that if it is worth doing then it is worth doing the right way.

6 – Simplicity, Scalability, and Availability

These core ideas keep us focused on reducing costs and management when it comes to deployment, software and firmware updates, capacity scaling, and minimizing planned and unplanned downtime.  I believe in our goal to minimize the cost and management footprint of infrastructure to free up resources for application management and service delivery in IT.

7 – Disaster Recovery, VDI, and Distributed Enterprise

HC3 is more than just a simple infrastructure solution. It is an infrastructure platform that supports multiple use cases including disaster recovery sites, virtual desktop infrastructure, and remote office and branch office infrastructure. I love that the flexibility of HC3 allows it to be used in nearly every type of industry.

Scale Computing is more than just an employer; it is a new passion for me. I hope you keep following my blog posts to learn more about the awesome things we are doing here at Scale and I hope we can help you bring your datacenter into the new hyperconvergence era. If you have any questions or feedback about my blog posts, hyperconvergence, or Scale Computing, you can contact me at dpaquette@scalecomputing.com.


Hyperconvergence for the Distributed Enterprise

IT departments face a variety of challenges but maybe none as challenging as managing multiple sites. Many organizations must provide IT services across dozens or even hundreds of small remote offices or facilities. One of the most common organizational structures for these distributed enterprises is a single large central datacenter where IT staff are located supporting multiple remote offices where personnel have little or no IT expertise.

These remote sites often need the same variety of application services and data services needed in the central office, but on a smaller scale. To run these applications, these sites need multiple servers, storage solutions, and disaster recovery. With no IT staff on site, remote management is ideal to cut down on the productivity cost of frequently sending IT staff to remote sites to troubleshoot issues. This is where the turnkey appliance approach of hyperconvergence shines.

A hyperconverged infrastructure solution combines server, storage, and virtualization software into a single appliance that can be clustered for scalability and high availability. It eliminates the complexity of having disparate server hardware, storage hardware, and virtualization software from multiple vendors and having to try to replicate the complexity of that piecemeal solution at every site.  Hyperconverged infrastructure provides a simple repeatable infrastructure out of the box.  This approach makes it easy to scale out infrastructure at sites on demand from a single vendor.

At Scale Computing, we offer the HC3 solution that truly combines server, storage, virtualization, and even disaster recovery and high availability. We provide a large range of hardware configurations to support very small implementations all the way up to full enterprise datacenter infrastructure. Also, because any of these various node configurations can be mixed and matched with other nodes, you can very quickly scale the infrastructure at a site with extra capacity and/or compute power as you need it.

HC3 management is all web-based, so sites can easily be managed remotely. From provisioning new virtual machines to opening consoles for each VM for simple and direct management from the central datacenter, it’s all in the web browser. There is even a reverse SSH tunnel available for ScaleCare support to provide additional remote management of lower-level software features in the hypervisor and storage system. Redundant hardware components and self-healing mean that hardware failures can be absorbed while applications remain available until IT or local staff can replace the failed components.

With HC3, replication is built in to provide disaster recovery and high availability back to the central datacenter in the event of entire site failure. Virtual machines and applications can be back up and running within minutes to allow remote connectivity from the remote site as needed. You can achieve both simplified infrastructure and remote high availability in a single solution from a single vendor. One back to pat or one throat to choke, as they say.

If you want to learn more about how hyperconvergence can make distributed enterprise simpler and easier, talk to one of our hyperconvergence experts.



4 Hidden Infrastructure Costs for the SMB

Infrastructure complexity is not unique to enterprise datacenters. Just because a business or organization is small does not mean it is exempt from the feature needs of big enterprise datacenters. Small and mid-size organizations require fault tolerance, high availability, mobility, and flexibility as much as anyone. Unfortunately, the complexity of traditional datacenter and virtualization architecture hits the SMB the hardest. Here are 4 of the hidden costs that can cripple the SMB IT budget.

1 – Training and Expertise

Setting up a standard virtualization infrastructure can be complex; it requires virtualization, networking, and storage expertise. In larger enterprises, expertise is often spread out across dozens of admins through new hires, formal training, or consulting. However, in the SMB datacenter, with only a handful of admins or even just one and a limited budget, expertise can be harder to come by. Self-led training and research can take costly hours out of every week, and admins may only have time to achieve the minimum level of expertise needed to maintain an infrastructure without being able to optimize it. Lack of expertise affects infrastructure performance and stability, limiting the return on infrastructure investment.

2 – Support Run-Around

A standard virtualization infrastructure has components from a number of different vendors, including the storage vendor, server vendor, and hypervisor vendor, to name just the basics. Problems arising in the infrastructure are not always easy to diagnose, and with multiple vendors and vendor support centers in the mix, this can lead to a lot of finger pointing. Admins can spend hours if not days calling various support engineers from different vendors to pinpoint the issue. Long troubleshooting times can mean long outages and lost productivity because of vendor support run-around.

3 – Admin Burn-Out

The complexity of standard virtualization environments, containing multiple vendor solutions and multiple layers of hardware and software, means longer nights and weekends performing maintenance tasks such as firmware updates, refreshing hardware, adding capacity, and dealing with outages caused by non-optimized architecture. Not to mention, admins of these complex architectures cannot detach long enough to enjoy personal time off because of the risk of an outage. Administrators who have to spend long nights and weekends dealing with infrastructure issues are not as productive in daily tasks and have less energy and focus for initiatives to improve process and performance.

4 – Brain Drain

Small IT shops are particularly susceptible to brain drain. The knowledge of all of the complex hardware configurations and application requirements is concentrated in a very small group, in some cases one administrator. While those individuals are around, there is no problem, but when one leaves for whatever reason, there is a huge gap in knowledge that might never be filled. There can be huge costs involved in rebuilding the knowledge or redesigning systems to match the expertise of the remaining or replacement staff.

Although complexity has hidden costs for all small, medium, and enterprise datacenters, the complexity designed for the enterprise and inherited down into the SMB makes those costs more acute. When choosing an infrastructure solution for a small or mid-size datacenter, it is important to weigh these hidden costs against the cost of investing in solutions that offer automation and management that mitigate the need for expertise, support run-around, and after hours administration. Modern hyperconverged infrastructures like HC3 from Scale Computing offer simplicity, availability, and scalability to eliminate hidden infrastructure costs.


The VSA is the Ugly Result of Legacy Vendor Lock-Out

VMware and Hyper-V with the traditional Servers+Switches+SAN architecture – widely adopted by the enterprise and the large mid-market – works. It works relatively well, but it is complex (many moving parts, usually from different vendors), necessitates multiple layers of management (server, switch, SAN, hypervisor), and requires the use of storage protocols to be functional at all. Historically speaking, this has led either to needing many people from several different IT disciplines to effectively virtualize and manage a VMware/Hyper-V based environment, or to smaller companies taking a pass on virtualization, as the soft and hard costs associated with it put HA virtualization out of reach.


With the advent of hyperconvergence in the modern datacenter, HCI vendors had a limited set of options when it came to the shared storage part of the equation. Lacking access to the VMkernel and NTOS kernel, they could either virtualize the entire SAN and run instances of it as a VM on each node in the HCI architecture (horribly inefficient), or move to hypervisors that aren’t from VMware or Microsoft. The first choice is what most took, even though it has a very high cost in terms of resource efficiency and IO path complexity, as well as nearly doubling the hardware requirements of the architecture. They did this for the sole reason that it was the only way to continue providing their solutions based on the legacy vendors, given their lock-out and lack of access. Likewise, they found this approach (known as VSA, or Virtual SAN Appliance) to be easier than tackling the truly difficult job of building an entire architecture from the ground up, clean-sheet style.

The VSA approach virtualizes the SAN and its controllers, also known as pulling the SAN into the servers. The VSA, or Virtual SAN Appliance, approach was developed to move the SAN up into the host servers through the use of a virtual machine on each box. This did in fact simplify things like implementation and management by eliminating the separate physical SAN (but not its resource requirements, storage protocols, or overhead; in actuality, it duplicates those bits of overhead on every node, turning one SAN into 3 or 4 or more). However, it didn’t do much to simplify the data path. In fact, quite the opposite. It complicated the path to disk by turning the IO path from:

application -> RAM -> disk

into:

application -> RAM -> hypervisor -> RAM -> SAN controller VM -> RAM -> hypervisor -> RAM -> write-cache SSD -> erasure code (SW R5/6) -> disk -> network to next node -> RAM -> hypervisor -> RAM -> SAN controller VM -> RAM -> hypervisor -> RAM -> write-cache SSD -> erasure code (SW R5/6) -> disk

This approach uses so many resources that one could run an entire SMB to mid-market datacenter on just the CPU and RAM being allocated to these VSAs.


This “stack dependent” approach did, in fact, speed up time to market for the HCI vendors that implemented it, but due to the extra hardware requirements, the extra burden on the IO path, and the use of SSD/flash primarily as a caching mechanism for the now tortured IO path, this approach still brought solutions in at a price point and complexity level out of reach of the modern SMB.

HCI done the right way – HES

The right way to do an HCI architecture is to take the exact opposite path from the VSA-based vendors. From a design perspective, the goal of eliminating the dedicated servers, storage protocol overhead, resources consumed, and associated gear is met by moving the hypervisor directly into the OS of a clustered platform that runs storage directly in userspace adjacent to the kernel (known as HES, or in-kernel). This leverages direct I/O, thereby simplifying the architecture dramatically while regaining the efficiency originally promised by virtualization.


This approach turns the IO path back into:

application -> RAM -> disk -> backplane -> disk
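
To make the difference concrete, here is a trivial count of the steps in the two paths exactly as written above; the step names are copied from the chains, nothing more:

    # Count the I/O path steps quoted above for the VSA approach versus the
    # in-kernel (HES) approach. Purely illustrative; the step names are taken
    # directly from the two chains in the text.

    vsa_path = [
        "application", "RAM", "hypervisor", "RAM", "SAN controller VM", "RAM",
        "hypervisor", "RAM", "write-cache SSD", "erasure code (SW R5/6)", "disk",
        "network to next node", "RAM", "hypervisor", "RAM", "SAN controller VM",
        "RAM", "hypervisor", "RAM", "write-cache SSD", "erasure code (SW R5/6)",
        "disk",
    ]

    hes_path = ["application", "RAM", "disk", "backplane", "disk"]

    print(len(vsa_path), "steps via a VSA vs", len(hes_path), "steps in-kernel")
    # 22 steps via a VSA vs 5 steps in-kernel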

This complete stack-owner approach, in addition to regaining the efficiency promised by HCI, allows features and functionality that historically had to be provided by third parties in the legacy and VSA approaches to be built directly into the platform, allowing true single-vendor solutions and radically simplifying the SMB/SME datacenter at all levels: lower cost of acquisition and lower TCO. This makes HCI affordable and approachable for the SMB and mid-market. It eliminates the extra hardware requirements, the overhead of SAN, and the overhead of storage protocols and re-serialization of IO. It returns efficiency to the datacenter.

When the IO paths are compared side by side, the differences in overhead and efficiency become obvious, and the penalties and pain caused by legacy vendor lock-in really stand out, with VSA-based approaches (in a basic 3-node implementation) using as much as 24 vCores and up to 300GB of RAM (depending on the vendor) just to power the VSAs and boot themselves, versus HES using a fraction of a core per node and 6GB of RAM total. Efficiency matters.


Flash: The Right Way at the Right Price

As much as I wish I could, I’m not going to go into detail on how flash is implemented in HC3 because, frankly, the I/O heat mapping we use to move data between flash SSD and spinning HDD tiers is highly intelligent and probably more complex than what I can fit in a reasonable blog post. However, I will tell you why the way we implement flash is the right way and how we are able to offer it in an affordable way.

(Don’t worry, you can read about how our flash is implemented in detail in our Theory of Operations by clicking here.)

First, we are implementing flash within the simplicity of our cluster-wide storage pool, so the tasks of deploying a cluster, adding a cluster node, or creating a VM are just as simple as always. The real difference you will notice is the performance improvement. You would see the benefits of our flash storage even if you didn’t know it was there. Our storage architecture already provided direct block access to physical storage from each VM without inefficient protocols, and our flash implementation uses this same architecture.

Second, we are not implementing flash storage as a cache like other solutions do. Many solutions require flash as a storage cache to make up for the deficiencies of their inefficient storage architectures and I/O pathing. With HC3, flash is implemented as a storage tier within the storage pool and adds to the overall storage capacity. We created our own enhanced, automated tiering technology to manage data across both SSD and HDD tiers, retaining the simplicity of the storage pool with the high performance of flash for the hottest blocks.
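
As a purely illustrative sketch of tier versus cache (this is not the HEAT algorithm, just the general idea), the hottest blocks live on SSD while everything else stays on HDD, and both tiers hold real data rather than cached copies:

    # Illustrative only -- not the HEAT algorithm. Track per-block access counts
    # and keep the hottest blocks on the SSD tier; both tiers hold real data, so
    # SSD adds capacity instead of acting as a cache copy.

    from collections import Counter

    access_counts = Counter()      # block id -> recent access count
    ssd_capacity_blocks = 4        # tiny hypothetical SSD tier

    def record_io(block_id):
        access_counts[block_id] += 1

    def place_blocks(all_blocks):
        """Return which blocks live on SSD vs HDD for the next interval."""
        hottest = [b for b, _ in access_counts.most_common(ssd_capacity_blocks)]
        ssd = [b for b in all_blocks if b in hottest]
        hdd = [b for b in all_blocks if b not in hottest]
        return ssd, hdd

    blocks = list(range(10))
    for b in [3, 3, 3, 7, 7, 1, 9, 9, 9, 9]:
        record_io(b)

    ssd_tier, hdd_tier = place_blocks(blocks)
    print("SSD:", ssd_tier)   # the four hottest blocks: [1, 3, 7, 9]
    print("HDD:", hdd_tier)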

Finally, we are implementing flash with the most affordable high-performing SSD hardware we can find in our already affordable HC3 cluster nodes. Our focus on the SMB market makes us hypersensitive to the budget needs of small and midsize datacenters, and it is our commitment to provide the best products possible for your budgets. This focus on SMB is why we are not just slapping together solutions from multiple vendors into a chassis and calling it hyperconvergence; instead, we have developed our own operating system, our own storage system, and our own management interface, because small datacenters deserve solutions designed specifically for their needs.

Hopefully, I have helped you understand just how we are able to announce our HC1150 cluster starting at $24,500* for 3 nodes, delivering world class hyperconvergence with the simplicity of single server management and the high performance of hybrid flash storage. It wasn’t easy but we believe in doing it the right way for SMB.

Click here for the official press release.

*After discounts from qualified partners.

Disaster Recovery Made Easy… as a Service!

You probably already know about the built-in VM-level replication in your HC3 cluster, and you may have already weighed some options on deploying a cluster for disaster recovery (DR). It is my pleasure to announce a new option: ScaleCare Remote Recovery Service!

What is Remote Recovery Service and why should you care? Well, simply put, it is secure remote replication to a secure datacenter for failover and failback when you need it. You don’t need a co-lo, a second cluster, or software agents to install. You only need your HC3 cluster, some bandwidth, and the ability to create a VPN to use this service.

This service is hosted in a secure SSAE 16 SOC 2 certified and PCI compliant datacenter and is available at a low monthly cost to protect your critical workloads from potential disaster. Once you have the proper VPN and bandwidth squared away, setting up replication could hardly be easier. You simply add in the network information for the remote HC3 cluster at LightBound, and a few clicks later you are replicating. HyperCore adds an additional layer of SSH encryption to secure your data across your VPN.


I should also mention that you can customize your replication schedule with granularity ranging from every 5 minutes to every hour, day, week, or even month. You can combine schedule rules to make it as simple or complex as you need to meet your SLAs. Choose an RPO of 5 minutes with failover within minutes if you need it, or any other model that meets your needs. Not only are you replicating the VM but all of its snapshots, so you have all your point-in-time recovery options after failover. Did I mention you will get a complete DR runbook to help plan your entire DR process?
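
One hypothetical way to picture combined schedule rules (this is not the HC3 configuration format, just an illustration of mixing intervals to meet an RPO) is a list of rules, each firing when its interval has elapsed:

    # A hypothetical representation of combined replication schedule rules -- not
    # the HC3 configuration format, just an illustration of mixing intervals.

    from datetime import datetime, timedelta

    rules = [
        {"name": "critical RPO",  "every": timedelta(minutes=5)},
        {"name": "hourly",        "every": timedelta(hours=1)},
        {"name": "daily offsite", "every": timedelta(days=1)},
    ]

    def due_rules(last_run, now, rules):
        """Which schedule rules should fire, given when each last ran."""
        return [r["name"] for r in rules if now - last_run[r["name"]] >= r["every"]]

    now = datetime(2016, 7, 13, 9, 34)
    last_run = {
        "critical RPO":  now - timedelta(minutes=6),
        "hourly":        now - timedelta(minutes=30),
        "daily offsite": now - timedelta(hours=25),
    }
    print(due_rules(last_run, now, rules))  # ['critical RPO', 'daily offsite']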

We know DR is important to you and your customers, both internal and external. In fact, it could be the difference between the life and death of your business or organization. Keep your workloads protected with a service that is designed specifically for HC3 customers and HC3 workloads.

Remote Recovery Service is not free but it starts as low as $100/month per VM. Contact Scale to find out how you can fit DR into your budget without having to build out and manage your own DR site.