Tag Archives: scale computing

Scale Computing – A Year in Review 2016

It’s that time of the year again. December is winding to a close and the new year is almost here. Let me first say that we here at Scale Computing hope 2016 was a positive year for you and we want to wish you a wonderful 2017. Now, though, I’d like to reflect back on 2016 and why it has been such an outstanding year for Scale Computing.

“And the award goes to…”

Scale Computing was recognized a number of times this year for technology innovation and great products and solutions, particularly in the midmarket. We won awards at both the Midsize Enterprise Summit and the Midmarket CIO Forum, including Best in Show and Best Midmarket Strategy. Most recently, Scale Computing was honored with an Editor’s Choice Award by Virtualization Review as one of the most-liked products of the year. You can read more about our many awards in 2016 in this press release.

Scenes from the 2016 Midsize Enterprise Summit


News Flash!

2016 was the year Scale Computing finally introduced flash storage into our hyperconverged appliances. Flash storage has been around for a while now, but the big news was how we integrated it into the virtualization infrastructure. We didn’t use any clunky VSA models with resource-hogging virtual appliances. We didn’t implement it as a cache to make up for an inefficient storage architecture. We implemented flash storage as a full storage tier embedded directly into the hypervisor. We eliminated all the unnecessary storage protocols that slow down other flash implementations. In short, we did it the right way. Oh, and we delivered it with our own intelligent automated tiering engine called HEAT. You can read more about it here in case you missed it.

Newer, Stronger, Faster

When we introduced the new flash storage in the HC3, we introduced three new HC3 appliance models, the HC1150, HC2150, and HC4150, significantly increasing speed and capacity in the HC3. We also introduced the new HC1100 appliance to replace the older HC1000 model, nearly doubling the resource capacity over the HC1000. Finally, we recently announced the preview of our new HC1150D that doubles the compute over the HC1150 and introduces higher capacity with support for 8TB drives. We know your resource and capacity needs grow over time and we’ll keep improving the HC3 to stay ahead of the game. Look for more exciting announcements along these lines in 2017.

Going Solo

In 2016, hyperconvergence with Scale Computing HC3 was opened up to all sorts of new possibilities, including the new Single Node Appliance Configuration. Where before you needed at least three nodes in an HC3 cluster, now you can go with the SNAC-size HC3. (Yes, I am in marketing and I make up this corny stuff.) The single node allows extremely cost-effective configurations for distributed enterprise (small remote/branch offices), backup/disaster recovery, or just the small “s” businesses in the SMB. Read more about the possibilities here.


Cloud-based DR? Check

2016 was also the year Scale Computing rolled out a cloud-based disaster recovery as a service (DRaaS) offering called ScaleCare Remote Recovery Service. This is an exclusive DR solution for HC3 customers who want to protect their HC3 workloads in a secure, hosted facility for backup and disaster recovery. With monthly billing, this service is perfect for organizations that can’t or don’t want to host DR in their own facilities and that value added services like assisted recovery and DR testing. Read more about this DRaaS solution here.

Better Together

2016 has been an amazing year for technology partnerships at Scale Computing. You may have seen some of the various announcements we’ve made over the past year. These include Workspot, with whom we’ve partnered for an amazingly simple VDI solution; Information Builders, with whom we partnered for a business intelligence and analytics appliance; Brocade, whose Strategic Collaboration Program we recently joined to expand the reach of hyperconvergence and HC3; and more. We even achieved Citrix Ready certification this year. Keep an eye out for more announcements to come as we identify more great solutions to offer you.

The Doctor is In

It wouldn’t be much of a new year celebration without a little tooting of my own horn, so I thought I’d mention that 2016 was the year I personally joined Scale Computing, along with many other new faces. Scale Computing has been growing this year. I haven’t properly introduced myself in a blog yet, so here goes. My name is David Paquette, Product Marketing Manager at Scale Computing, and they call me Doctor P around here (or Dr. P for short). It has been a fantastic year for me, having joined such a great organization in Scale Computing, and I am looking forward to an amazing 2017. Keep checking our blog for my latest posts.

Just me, Dr. P


Sincerely, from all of us at Scale Computing, thank you so much for all of the support over the past year. We look forward to another big year of announcements and releases to come. Of course, these were just some of the 2016 highlights, so feel free to look back through the various blog posts and press releases for all of the 2016 news.

Happy New Year!

IT Infrastructure: Deploy. Integrate. Repeat.

Have you ever wondered if you are stuck in an IT infrastructure loop, continuously deploying the same types of components and integrating them into an overall infrastructure architecture? Servers for CPU and RAM, storage appliances, hypervisor software, and disaster recovery software/appliances are just some of the different components that you’ve put together from different vendors to create your IT infrastructure.

This model of infrastructure design, combining components from different vendors, has been around for at least a couple of decades. Virtualization has reduced the hardware footprint, but it added one more component, the hypervisor, to the overall mix. As component technologies like compute and storage have evolved alongside the rise of virtualization, they have been modified to function together but have not necessarily been optimized for efficiency.

Take storage, for example. SANs were an obvious fit for virtualization early on. However, the layers of storage protocols and virtual storage appliances used to combine the SAN with virtualization were never efficient. If not for SSD storage, the performance of these systems would be unacceptable at best. But IT continues to implement these architectures because it has been done this way for so long, regardless of the inherent inefficiencies. Luckily, the next generation of infrastructure has arrived in the form of hyperconvergence to break this routine.

Hyperconverged infrastructure (HCI) combines compute, storage, virtualization, and even disaster recovery into a single appliance that can be clustered for high availability.  No more purchasing all of the components separately from different vendors, no more making sure all of the components are compatible, and no more dealing with support and maintenance from multiple vendors on different schedules.

Not all HCI systems are equal, though, as some still rely on separate components. Some use third-party hypervisors that require separate licensing. Some still adhere to SAN architectures that require virtual storage appliances (VSAs) or other inefficient storage designs with excessive resource overhead, relying on SSD caching to overcome those inefficiencies.

HCI not only reduces vendor management and complexity; when done correctly, it embeds storage in the hypervisor and offers it to VM workloads as a direct attached, block access storage system. This significantly improves storage I/O performance for virtualization. The architecture provides excellent performance on spinning disk, so when SSD is added as a second storage tier, storage performance improves even further. Also, because the storage is included in the appliance, there is no separate SAN appliance to manage.

HCI goes even further in simplifying IT infrastructure by allowing management of the whole system from a single interface. Because the architecture is managed as a single unit and prevalidated, there is no effort spent making sure the various components work together. When the system is truly hyperconverged, including the hypervisor, there is greater control over automation, so software and firmware updates can be done without disruption to running VMs. And for scaling out, new appliances can be added to a cluster without disruption as well.

The result of these simplifications and improvements is an infrastructure that can be deployed quickly, scaled easily, and managed with little effort. It embodies many of the benefits of the cloud, where the infrastructure is virtually transparent. Instead of spending time managing infrastructure, administrators can focus on managing apps and processes.

Infrastructure should no longer require certified storage experts, virtualization experts, or any kind of hardware experts. Administrators should no longer need entire weekends or month-long projects to deploy and integrate infrastructure, or spend sleepless nights dealing with failures. Hyperconvergence breaks the cycle of infrastructure built from a variety of vendors and components. Instead, it makes infrastructure a simple, available, and trusted commodity.


Scale Computing Keeps Storage Simple and Efficient

Hyperconvergence is the combination of storage, compute, and virtualization. In a traditional virtualization architecture, combining these three components from different vendors can be complex and unwieldy without the right number of experts and administrators. When hyperconverged into a single solution, the complexity can be eliminated, if done correctly.

At Scale Computing we looked at the traditional architecture to identify the complexity we wanted to eliminate. The storage architecture that used SAN or NAS storage for virtualization turned out to be very complex. In order to translate storage from the SAN or NAS to a virtual machine, we counted seven layers of object files, file systems, and protocols that I/O had to traverse to go from the VM to the hardware. Why was this the case?

Because the storage system and the hypervisor were from different vendors, and not designed specifically to work with each other, they needed these layers of protocol translation to integrate. The solution at Scale Computing for our HC3 was to own the hypervisor (HyperCore OS) and the storage system (SCRIBE) so we could eliminate these extra layers and make storage work with VMs just like direct attached storage works with a traditional server. I call it a Block Access, Direct Attached Storage System because I like the acronym.

Why didn’t other “hyperconverged” vendors do the same? Primarily because they are not really hyperconverged and they don’t own the hypervisor. As with traditional virtualization architectures, having the storage and the hypervisor come from different vendors prevents efficiently integrated storage for VMs. These are storage systems designed to support one or more third-party hypervisors, and they generally use virtual storage appliances (VSAs) with more or less the same storage architecture as the traditional virtualization I mentioned earlier.

VSAs not only add to the inefficiency but also consume CPU and RAM resources that could otherwise be used by VM workloads. To overcome these inefficiencies, these solutions use flash storage for caching to avoid performance issues. In some cases, they have added extra processing cards to their hardware nodes to offload processing. Without being able to provide efficient storage on commodity hardware, they just can’t compete with the low price AND storage efficiency of the HC3.

The efficiency of design for HC3 performance and low price is only part of the story. We also designed the storage to combine all of the disks in a cluster into a single pool that is wide striped across the cluster for redundancy and high availability.  This pooling also allows for complete flexibility of storage usage across all nodes.  The storage pool can contain both SSD and HDD tiers and both tiers are wide striped, highly available, and accessible across the entire virtualization cluster, even on nodes that may have no physical SSD drives.

To keep the tiering both simple and efficient, we designed our own automated tiering mechanism to automatically utilize the SSD storage tier for the blocks of data with the highest I/O.  By default, the storage will optimize the SSD tier for the best overall storage efficiency without anything to manage. We wanted to eliminate the idea that someone would need a degree or certification in storage to use virtualization.

We did recognize that users might occasionally need some control over storage performance, so we implemented a simple tuning mechanism that gives each disk in a cluster a relative level of SSD utilization priority within the cluster. This means you can tune a disk up or down, on the fly, if you know that it requires less or more I/O and SSD than other disks. You don’t need to know how much SSD it needs, only that it needs less or more than other disks in the cluster, and the automation takes care of the rest. We included a total of 12 priority levels, from 0 (no SSD) up to 11 (all data on SSD, if available).
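For the curious, here is a rough sketch of how heat-based placement weighted by a 0-11 priority could work in principle. The scoring formula, names, and thresholds below are invented for illustration only; this is not the actual HEAT code, just a way to picture how relative priority and I/O heat can combine to decide which blocks land on the SSD tier.

# Hypothetical illustration of heat-weighted SSD tier placement.
# The names, weights, and thresholds are invented; this is NOT Scale's HEAT code.

def ssd_score(block_heat, disk_priority, max_heat):
    """Return a 0.0-1.0 score for how strongly a block wants the SSD tier.

    block_heat    -- recent I/O count for this block (higher = hotter)
    disk_priority -- per-virtual-disk setting, 0 (never SSD) to 11 (always SSD)
    max_heat      -- hottest block currently tracked, used to normalize
    """
    if disk_priority == 0:            # 0 pins the disk's data to the HDD tier
        return 0.0
    if disk_priority == 11:           # 11 requests all data on SSD, if available
        return 1.0
    heat = block_heat / max_heat if max_heat else 0.0
    weight = disk_priority / 11.0     # relative priority against other disks
    return heat * weight


def place_blocks(blocks, ssd_capacity_blocks):
    """Give the SSD tier to the highest-scoring blocks; everything else stays on HDD."""
    max_heat = max((b["heat"] for b in blocks), default=0)
    scored = sorted(((ssd_score(b["heat"], b["priority"], max_heat), b) for b in blocks),
                    key=lambda pair: pair[0], reverse=True)
    placement = {}
    for rank, (score, block) in enumerate(scored):
        on_ssd = rank < ssd_capacity_blocks and score > 0.0   # priority 0 never lands on SSD
        placement[block["id"]] = "ssd" if on_ssd else "hdd"
    return placement


if __name__ == "__main__":
    demo = [{"id": 1, "heat": 900, "priority": 4},
            {"id": 2, "heat": 100, "priority": 11},
            {"id": 3, "heat": 500, "priority": 4},
            {"id": 4, "heat": 700, "priority": 0}]
    print(place_blocks(demo, ssd_capacity_blocks=2))

Running the little demo above, the disk set to priority 11 wins SSD space even though it is cooler than its neighbors, while the priority 0 disk stays on spinning disk no matter how hot it gets; everything in between competes on heat weighted by priority.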


The result of all of the design considerations for HC3 at Scale Computing is simplicity for efficiency, ease of use, and low cost. We’re different and we want to be. It’s as simple as that.


The King is Dead. Long Live the King!

With a title like Death by 1,000 cuts: Mainstream storage array suppliers are bleeding, I couldn’t help but read Chris Mellor’s article on the decline of traditional storage arrays. It starts off just as strong with:

Great beasts can be killed by a 1,000 cuts, bleeding to death from the myriad slashes in their bodies – none of which, on their own, is a killer. And this, it seems, is the way things are going for big-brand storage arrays, as upstarts slice away at the market…

And his reasons as to why are spot on, matching what we have seen in our target customer segment for HC3.

the classic storage array was under attack because it was becoming too limiting, complex and expensive for more and more use-cases.

Looking at our own use case for HC3, storage array adoption in our target segment (the SMB) rose with the demand for virtualization, since shared storage enabled things like live migration and failover of VMs. It was a necessary evil that ensured critical workloads weren’t going to be down for days or even weeks in the event of a hardware failure.

Video: How to Add Resources to HC3

With an infrastructure refresh on the horizon, a common question asked in IT used to be:

“What should I buy today that will meet my storage demand over the next X years?”

Historically, that was because IT groups needed to purchase today what they would need 3-5 years from now in order to put off the painful forklift upgrade that would inevitably come with reaching max capacity in a monolithic storage array. After the introduction of “scale-out” storage (where you were no longer locked into the capacity limitations of a single physical storage array), the question then became:

“What should I buy today that will grow alongside my storage demand over the next X years?”

This meant that customers could buy the storage they needed today, knowing that they could add to their environment to scale out storage capacity and performance down the road. There were no forklift upgrades or data migrations to deal with. Instead, it offered seamless scaling of storage resources to match the needs of the business.
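To put some illustrative numbers on that shift (assumed figures, not data from any customer environment): 10 TB of data growing 25% per year compounds to roughly 10 x 1.25^5, or about 30.5 TB, after five years. A forklift-style purchase has to cover that on day one, while a scale-out model only has to keep pace year by year. A quick sketch of the arithmetic:

# Illustrative capacity-planning arithmetic; the starting size and growth
# rate are assumptions, not measurements from any customer environment.

def capacity_needed(start_tb, annual_growth, years):
    """Projected capacity after compounding annual growth."""
    return start_tb * (1 + annual_growth) ** years

start_tb, growth = 10.0, 0.25

# Monolithic array: size the day-one purchase for the full refresh cycle.
upfront = capacity_needed(start_tb, growth, years=5)
print(f"Buy up front for 5 years: ~{upfront:.1f} TB on day one")

# Scale-out: buy for the near term and add capacity as demand actually grows.
for year in range(1, 6):
    print(f"Year {year}: demand ~{capacity_needed(start_tb, growth, year):.1f} TB")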

Now with hyperconverged solutions like HC3, where the scale-out architecture allows users to easily add nodes to the infrastructure to scale out both compute and storage, the question has changed yet again. Hyperconverged customers now ask themselves:

“What should I buy today that will grow alongside my infrastructure demand over the next X years?”

Adding nodes to HC3 is simple. After racking and plugging in power/networking, users simply assign an IP address and initialize the node. HyperCore (HC3’s ultra-easy software) takes over from there, seamlessly aggregating the resources of that node with the rest of the HC3 cluster. There is no disruption to the running VMs. In fact, the newly added spindles are immediately available to the running VMs, giving an immediate performance boost with each node added to the cluster. Check out the demo below to see HC3’s scalability in action!
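For those who like to think about this in scripting terms, below is a purely hypothetical sketch of what node addition could look like against a REST-style management endpoint. The URL, payload, and field names are invented for illustration and are not Scale’s actual HyperCore API; on a real HC3 cluster the steps above are done from the web interface, as the demo shows.

# Purely hypothetical sketch: the endpoint, payload, and fields below are
# invented for illustration and do NOT represent Scale's actual HC3 API.
import time
import requests

CLUSTER = "https://hc3-cluster.example.local"   # hypothetical management address
AUTH = ("admin", "password")                     # placeholder credentials

def add_node(node_ip):
    # Step 1: register the new node by the IP address assigned after racking it.
    r = requests.post(f"{CLUSTER}/api/nodes", json={"ip": node_ip},
                      auth=AUTH, verify=False, timeout=30)  # appliances often use self-signed certs
    r.raise_for_status()
    node_id = r.json()["id"]

    # Step 2: poll until the node is initialized and its disks, RAM, and CPU
    # have been aggregated into the cluster's shared resource pool.
    while True:
        state = requests.get(f"{CLUSTER}/api/nodes/{node_id}",
                             auth=AUTH, verify=False, timeout=30).json()["state"]
        if state == "ready":
            return node_id
        time.sleep(10)

if __name__ == "__main__":
    print("New node joined as", add_node("10.0.0.24"))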



SMB IT Challenges

There was a recent article that focused on the benefits that city, state, and local governments have gained from implementing HyperConvergence (side note for anyone interested in joining: a new HyperConvergence group on LinkedIn, where such articles are being posted and discussed, was recently brought to my attention). The benefits cited in the article were:

  • Ease of management,
  • Fault tolerance,
  • Redundancy, and, later in the article…
  • Scalability.

I’m sure it isn’t surprising given our core messaging around Scale’s HC3 (Simplicity, High Availability and Scalability), but I agree wholeheartedly with the assessment.

It occurred to me that the writer literally could have picked any industry and the same story could have been told.  When the IT Director from Cochise County, AZ says:

“I’ve seen an uptick in hardware failures that are directly related to our aging servers”,

It could just as easily have been the Director of IT at the manufacturing company down the street.  Or when the City of Brighton, Colorado’s Assistant Director of IT is quoted as saying,

“The demand (for storage and compute resources) kept growing and IT had to grow along with it”,

That could have come out of the mouth of just about any of the customers I talk to each week.

The Next-Generation Server Room

There was a recent article on Network Computing regarding the Next Generation Data Center that got me thinking about our SMB target customer and the next-generation server room. Both the enterprise and the SMB face the influx of traffic growth described in the article (clearly at different levels, but an influx nonetheless). So how will the SMB cope? How will an IT organization with limited time and money react? By focusing on Simplicity in the infrastructure.

Elimination of Legacy Storage Protocols through Hypervisor Convergence

There is an ongoing trend to virtualize workloads in the SMB, which traditionally meant adding a SAN or a NAS to provide shared storage for high availability. With the introduction of Hypervisor Converged architectures through products like Scale’s HC3, that requirement no longer exists. In this model, end users can take advantage of the benefits of high availability without the complexity that comes with legacy storage protocols like iSCSI or NFS. Not only does this reduce the management overhead of the shared storage, it also simplifies the vendor support model dramatically. In the event of an issue, a single vendor can be called for support, with no ability to place the blame on another component in the stack.

Simplicity in Scaling

Moore’s Law continues to hold as better, faster, and cheaper equipment becomes available year after year. By implementing a scale-out architecture in the infrastructure, IT organizations can take advantage of this by purchasing what they need today, knowing that they can purchase equipment at tomorrow’s prices to scale out resources when the need arises. The ability to mix and match hardware types in a hypervisor converged model also means that users have granularity in their scaling to match the requirements of the workloads at the time (such as adding a storage-only node in HC3 to a compute cluster to scale out only the storage resources).

Virtualizing Microsoft Exchange on HC3

Virtualizing Microsoft Exchange is one of the primary use cases that we see for HC3 customers. A general move to virtualizing Exchange has gained traction as companies take the normal cycle of hardware refreshes and Operating System upgrades as an opportunity to consolidate servers in a virtualized environment.  These companies seek to take advantage of:

  • Better availability;
  • Flexibility in managing unplanned growth (both performance and capacity); and
  • Lower costs from better hardware utilization.

A Move from VMware to HC3

Many of Scale’s HC3 customers are coming to us from a traditional Do-It-Yourself virtualization environment where they combined piecemeal parts including VMware’s hypervisor to create a complex solution that provides the high availability expected in their infrastructure.  Fed up with the complexity (or more often the vTax on a licensing renewal) associated with that setup, they eventually find HC3 as a solution to provide the simplicity, scalability and high availability needed at an affordable price.

I just returned from the Midmarket CIO Forum last week, where 98% of the CIOs I spoke to had implemented some form of the VMware environment described above (the other 2% were Hyper-V, but the story of vTax still rang true!). We met with 7 boardrooms full of CIOs who all reacted the same way to the demo of HC3: “This sounds too good to be true!” To which I like to reply, “Yeah, we get that a lot.” 🙂

After the initial shock of seeing HC3 for the first time, pragmatism inevitably takes over. The questions then became, “How do I migrate from VMware to HC3?” or “How can I use HC3 alongside my existing VMware environment?” I spent the majority of my week talking through the transition strategies we have seen from some of our 600+ HC3 customers when migrating from VMware to HC3 VMs (the V2V process).

Five Business Reasons Why Developers and Software Ecosystems Benefit from KVM

By: Peter Fuller, Vice President of Business Development and Alliances, Scale Computing

As the VP of Business Development and Alliances for Open Virtual Alliance member Scale Computing, I work with a diverse group of top players in the software ecosystem. While many have KVM-compatible products as full virtual appliances, others are building business cases to justify the minor engineering expense required to develop KVM-compatible versions of their VMware, Citrix, or Hyper-V solutions.

This KVM question has repeatedly emerged as a discussion point with my business development peers this year. It is not a hard case to make, since KVM support is: 1) widely adopted, 2) supported and crowd-sourced, 3) independent, 4) a quickly profitable engineering exercise, and 5) freely available.

Let’s take a quick look at the benefits:

(1) KVM is Adopted & Mature

KVM (Kernel-based Virtual Machine) is a free, open source virtualization component built into the Linux kernel for x86 hardware with Intel VT or AMD-V extensions. With KVM, multiple unmodified Linux or Windows images can run as virtual machines on a single processor.
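As a quick way to check those hardware prerequisites on a Linux host, this short sketch looks for the Intel VT (vmx) or AMD-V (svm) CPU flags in /proc/cpuinfo and for the /dev/kvm device that appears once the kvm kernel module is loaded.

# Minimal Linux-only check for KVM prerequisites: hardware virtualization
# extensions in the CPU and the /dev/kvm device exposed by the kvm module.
import os
import re

def cpu_has_virt_extensions(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        flags = f.read()
    return bool(re.search(r"\b(vmx|svm)\b", flags))   # vmx = Intel VT, svm = AMD-V

def kvm_device_present():
    return os.path.exists("/dev/kvm")   # created when the kvm kernel module is loaded

if __name__ == "__main__":
    print("CPU virtualization extensions:", cpu_has_virt_extensions())
    print("/dev/kvm present:", kvm_device_present())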

KVM is growing at 60% year over year in terms of new server shipments virtualized, with over 100,000 shipments and nonpaid deployments worldwide over the past 12 quarters.[1] The worldwide virtual-machine software market was on track to grow to over $3.6 billion in 2012, up from $3.0 billion the year before, a 19.3% year-over-year growth.[2]

KVM is also the standard for OpenStack. In fact, 71% of OpenStack deployments use KVM.

The technology is also very mature. According to CloudPro, KVM held the top 7 SPECVirt benchmarks, outperforming VMware across 2, 4, and 8 socket servers. As CloudPro mentions, it is very rare that an open source solution meets so many commercial specifications.[3]

(2) KVM is Supported & Crowd Sourced

Both IBM and Red Hat have announced significant investments in KVM. Unlike with VMware, the results of those investments won’t be locked behind intellectual property laws; the companies are contributing much of their KVM development back to the open source community.

This investment was important for Scale, not because we use Red Hat branches of KVM, but because it will undoubtedly attract publishers to the technology and legitimize it as an enterprise-class hypervisor.

The growing ecosystem of KVM supporters is proof. The OVA has over 300 software and hardware vendor members, and continues to add to its ranks daily. This collective pool of companies contributes code back to the community, allowing each company indirect access to the others’ open development initiatives. Hundreds of thousands of non-member Linux developers also add to the crowd-sourced technologies that companies like Scale can use. Additionally, the Linux Foundation recently announced that the OVA would become an official collaborative project.

Ecosystem developers benefit from this crowd-sourced adoption of KVM in ways they can’t leverage with commercial solutions like VMware.

(3) KVM is Independent & Adaptive

The independence of KVM contributes to the fecundity of its code. Hundreds of thousands of Linux developers around the world develop technologies for Linux and KVM, without the restrictions associated with corporate IP protection.

While the permanency of any company is in a continual state of ambiguity, corporations are far more labile than unowned open source code. KVM will be around forever; there’s little risk in supporting it.

The biggest challenge to the viability of some hypervisor providers is the open source headwind wreaking havoc on their financial models. Specialized vendors like VMware don’t have the product diversity outside of their hypervisor that cushions companies like Microsoft and Citrix. As the hypervisor becomes a commodity, revenues are made on the management tools, which are licensed annually. This stress has already pushed VMware to compete with its partners. Just this year, the company released its VSAN product in direct competition with Nutanix and SimpliVity.

(4) KVM is Easily Convertible & Supporting it is Profitable

I like to use a basic supply and demand argument to support KVM development: while there’s an infinite supply of a vendor’s code, there will always be a finite supply of a customer’s cash.

To save that finite cash pool, roughly 70 percent of corporations use KVM as a secondary hypervisor to avoid licensing costs for non-production virtual machines. This install base represents a huge market that is quickly migrating KVM to the primary position in order to reduce recurring licensing costs.

Converting is Easy

In most cases, converting from a mainstream hypervisor to KVM is relatively simple. In fact, one of our alliance partners added KVM support to its robust backup software in just a week. The conversion from VMDK to QCOW2 (KVM) is fairly straightforward.
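As a rough illustration of that last point, the sketch below simply shells out to the standard qemu-img utility (assuming it is installed and on the PATH) to convert a VMDK disk image to QCOW2; the file names are placeholders.

# Convert a VMware VMDK disk image to QCOW2 using the qemu-img utility.
# Assumes qemu-img is installed and on PATH; the file names are placeholders.
import subprocess

def vmdk_to_qcow2(src_vmdk, dst_qcow2):
    subprocess.run(
        ["qemu-img", "convert",
         "-f", "vmdk",      # source format
         "-O", "qcow2",     # output format
         src_vmdk, dst_qcow2],
        check=True,
    )

if __name__ == "__main__":
    vmdk_to_qcow2("guest-disk.vmdk", "guest-disk.qcow2")

Converting the disk image is, of course, only part of a V2V move; the guest still needs KVM-friendly drivers and a new VM definition, but the disk format itself is not the hard part.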

(5) The Hypervisor is a Commodity, Why Pay for It?

Hypervisors are a commodity. With Intel VT and AMD-V, KVM calls directly into the virtualization support those manufacturers provide at the chip level. There’s no need to pay license charges for solutions that use software to perform the virtualization tasks Intel and AMD provide in the hardware. A light kernel-based piece of code calling directly into the processor greatly increases the speed and efficiency of the virtualization experience. Additionally, both Intel and AMD are committed to open technologies, and the leverage publishers will get from these two companies is significant.

Conclusion

For ecosystem developers, the value extracted from the community translates into engineering efficiencies, faster feature development and flexibility, potentially millions of dollars in savings on engineering costs, and the ability to maintain price elasticity in a highly competitive ecosystem.

KVM has a large install base, major investors, commercial momentum, and crowd-sourced development momentum. Spending a few weeks to add KVM support to existing applications will open new markets for developers while opening the door to newfound capital efficiencies and faster development times.

______________

[1] IDC Worldwide Quarterly Server Virtualization Tracker, March 2013

[2] Worldwide Virtual Machine Software 2012-2016 Forecast, IDC #235379, June 2012

[3] http://www.cloudpro.co.uk/iaas/virtualization/5278/kvm-should-it-be-ignored-hypervisor-alternative/page/0/1