
5 Reasons to Refresh IT Infrastructure

Nothing lasts forever, especially IT infrastructure technology. In the ever-evolving world of IT hardware and software, change is inevitable. Right now, organizations all over the world are seeing signs that they need to refresh, whether they recognize them or not. End of Life or End of Support notices are obvious triggers, but what other signs are being noticed or possibly ignored?

#1 – Performance becomes a pain.

IT is an engine that drives business. Over time, increased load and decreased efficiency take their toll on the performance of the IT engine. Performance issues can be dealt with in a number of ways, such as seeking out and fixing specific hardware performance bottlenecks or improving processes for greater efficiency. As with an automobile, there is a breaking point where the cost of continued repairs outweighs the cost of replacement. Performance issues can be notoriously hard to diagnose without expertise. For that reason, the pain of performance issues often demands an IT infrastructure refresh.

#2 – Datacenters need consolidation.

For a variety of reasons, datacenters sometimes need consolidation. The drivers may include acquisition, restructuring, or relocation, but it may not make sense to consolidate the equipment that already exists. Often different sites and even different departments use different infrastructure technologies for no particular reason. While a number of different SAN/NAS devices and hypervisor architectures can coexist, they add a load of complexity to the combined datacenter. This is a classic opportunity for an infrastructure refresh to eliminate complexity.

#3 – Capacity is limited.

When you bought your SAN, you had space for extra shelves to accommodate growth, but you filled it faster than expected. Capacity planning isn’t an exact science, not even close. You could add another storage device and cobble the two together, but scale-out architecture might offer a better way. Software-defined scale-out storage built into hyperconverged infrastructure solutions offers easy, cost-effective scaling on demand. Refreshing with hyperconvergence not only helps scale storage effectively but also scales RAM and CPU as needed, in a much simpler solution.

#4 – Alice doesn’t work here anymore.

Remember Alice? She was the outsourced IT consultant who designed and built the current infrastructure with her own unique set of expertise. Well, her fees for fixing and managing the solution whenever there was an issue were just too expensive. No one else can quite figure out how anything is supposed to work because she expertly put together some older gear that isn’t even supported anymore. It works for now, thanks to Alice, but if something goes wrong, then what? It could be time to start over and refresh the IT infrastructure with something newer that staff IT administrators can manage.

#5 – Costs are examined.

IT is a cost center. The challenge is to get the most benefit from that cost. Older technologies like SANs, whether physical or virtual, are costly and were never designed for virtualization in the first place. Hypervisors and management solutions like VMware vSphere come at a high cost in ongoing software licensing fees. There is the cost of integrating all of the storage, servers, virtualization, and management software, not to mention backup/DR. Finally, there is the cost of expertise to manage and maintain these systems. The costs of this traditional virtualization architecture can be overwhelming. Avoiding these high costs is a primary reason IT professionals are looking at technologies like cloud and hyperconvergence to simplify IT.

Whatever the reason for an IT infrastructure refresh, it is an opportunity to lower costs, increase productivity, and plan for future growth. This is why so many are considering hyperconverged infrastructures like HC3 from Scale Computing. It dramatically simplifies IT infrastructure, lowers costs, and allows seamless scaling for future growth. While some workloads may be destined for the cloud, the HC3 virtualization platform provides a simple, secure, and highly available solution for all of your on-prem datacenter needs. If it is time to refresh, take a look at everything HC3 provides and all the complexity it eliminates.

Why HC3 IT Infrastructure Might Not Be For You

Scale Computing makes HC3 hyperconverged infrastructure appliances and clusters for IT organizations around the world with a focus on simplicity, scalability, and availability. But the HC3 IT infrastructure solution might not be for you, for a few reasons.

  • You want to be indispensable for your proprietary knowledge.

You want to be the only person who truly understands your IT Infrastructure. Having designed your infrastructure personally and managing it with your own home-grown scripts, only you have the knowledge and expertise to keep it running. Without you, your IT department is doomed to fail.

HC3 is probably not for you. HC3 was designed to be so simple to use that it can be managed by even a novice IT administrator. HC3 would not allow you to control the infrastructure with proprietary design and secret knowledge that only you could possess. Of course, if you did go with HC3, you’d be a pioneer of new technology and an ideal asset for any forward-thinking IT department.

  • You are defined by your aging certifications.

You worked hard and paid good money to get certifications in storage systems, virtualization hypervisors, server hardware, and even disaster recovery systems that are still around. You continue to use these same old technologies because you are certified in them, and that gives you leverage for a higher salary. Newer technologies hold less interest because they wouldn’t let you take advantage of your existing certifications.

HC3 is probably not for you. HC3 is based on new infrastructure architecture that doesn’t require any expensive certifications. Any IT administrator can use HC3 because it was designed to remove reliance on legacy technologies that were too complex and required excessive expertise. HC3 won’t allow you to leverage your certifications in these legacy technologies. Of course, with all of the management time you’d save using HC3, you’d be able to learn new technologies and expand your skills beyond infrastructure.

  • You like going to VMworld every year.

You’ve been using VMware and going to VMworld since 2006, and it is a highlight of your year. You always enjoy reuniting with VMworld regulars and getting out of the office. It isn’t as useful as it was early on, but you still attend a few sessions along with all of the awesome parties. Life just wouldn’t be the same without attending VMworld.

HC3 is probably not for you. HC3 uses a built-in hypervisor, alleviating the need for VMware software and VMware software licensing. Without VMware, you probably won’t be able to justify your trip to VMworld as a business expense. Of course, with all the money you will likely save going with HC3, your budget might be open to going to even more conferences to help you develop new skills and services to help your business grow even faster.

  • You prefer working late nights and weekends.

The office, and better yet the data center, is a safe place for you. Whether you don’t have the best home life or you prefer to avoid awkward social events, you find working late nights and weekends doing system updates and maintenance a welcome prospect. We get it. Real life can be hard. Solitude, along with the humming of fans and spinning disks, offers an escape from the real world.

HC3 is probably not for you. HC3 is built to eliminate the need to take systems offline for updates and maintenance tasks, so these can be done at any time, including during normal business hours. HC3 doesn’t leave many infrastructure tasks that need to be done late at night or on weekends. Of course, if you did go with HC3, you’d probably have more time and energy to sort out your personal life and make your home and social life more to your liking.

Summary

HC3 may not be for everyone. When change is difficult to embrace, many choose to stick with the way it has always been done. For others, however, emerging technologies like HC3 are a way to constantly evolve with architecture that lowers costs with simplicity, scalability, and availability for modern IT.

Backup is No Joke

Today is World Backup Day, a reminder to everyone of how important it is to back up your data. Why today? What better day than the day before April Fools’ Day to remember to be prepared for anything? You don’t want to be the fool who didn’t have a solid backup plan.

But what is a backup? Backing up business-critical data is more complex than many people realize, which may be why backup and disaster recovery plans fall apart in the hour of need. Let’s start with the basic definition: a backup is a second copy of your data that you keep in case your primary data is lost or corrupted. Pretty simple. Unfortunately, that basic concept is not nearly enough to implement an effective backup strategy. You need some additional considerations.

  1. Location – Where is your backup data stored? Is it on the same physical machine as your primary data? Is it in the same building? The closer your backup is to the primary data, the more chance your backup will suffer the same fate as your primary data. The best option is to have your backup offsite, physically removed from localized events that might cause data loss.
  2. Recovery Point Objective – If you needed to recover from your backup, how much recent data would you lose? Was your last backup taken an hour ago, a day ago, or a week ago? How much potential revenue could be lost along with the data you can’t recover? Taking backups as frequently as possible is the best way to prevent data loss.
  3. Recovery Time Objective – How long will it take to recover your data? If you are taking backups every hour but it takes you several hours or longer to recover from a backup, was the hourly backup effective? Recovery time is as important as recovery point. Have a plan for rapid recovery.
  4. System Backup – For a long time, backups only captured user and application data. Recovery was painful because the OS and applications needed to be rebuilt before restoring the data. These days, entire servers are usually what is backed up, increasing recovery speed.
  5. Multiple Points in Time – Early on, many learned the hard way that keeping one backup is not enough. Multiple backups from different points in time were required for a number of reasons. Sometimes backups failed, sometimes data needed to be recovered from further back in time, and for some businesses, backups need to be kept for years for compliance. The more backups, the more points in time that data can be recovered from.
  6. Backup Storage – One of the greatest challenges to backup over the decades has been storage. Keeping multiple copies of your data quickly starts consuming multiples of storage space. It just isn’t economical to require 10x or more of the storage of your primary data for backup. Incremental backups, compression, and deduplication have helped, but backups still take lots of space. Calculating the storage requirements for your backup needs is essential; a rough sizing sketch follows this list.
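To make the storage math concrete, here is a quick back-of-the-envelope sketch in Python. Every number in it (change rate, retention, reduction ratio) is an illustrative assumption, not a recommendation; plug in your own values to see how quickly backup storage multiplies.

```python
# Rough backup storage estimate. All values are assumed examples;
# substitute your own environment's numbers.

primary_tb = 10.0          # size of primary data set, in TB
daily_change_rate = 0.03   # fraction of data changed per day (assumed)
retention_weeks = 4        # weeks of backups to keep
reduction_ratio = 0.5      # effective size after compression/dedupe (assumed)

full_per_week = primary_tb                                  # one weekly full
incrementals_per_week = primary_tb * daily_change_rate * 6  # six daily incrementals

raw_per_week = full_per_week + incrementals_per_week
total_stored = raw_per_week * retention_weeks * reduction_ratio

print(f"Backup data written per week: {raw_per_week:.1f} TB")
print(f"Stored for {retention_weeks} weeks of retention: {total_stored:.1f} TB "
      f"({total_stored / primary_tb:.1f}x primary storage)")
```

Even with incrementals and a 2:1 reduction ratio, this example still consumes well over twice the primary capacity just for backups, which is exactly why sizing this up front matters.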

Are snapshots backups? Sort of, but not really. Snapshots do provide recovery capabilities within a local system, but generally go down with the ship in any kind of real disaster. That being said, many backup solutions are designed around snapshots and use snapshots to create a real backup by copying the snapshot to an offsite location. These replicated snapshots are indeed backups that can be used for recovery just like any other form of backup.

Over the decades, there have been a variety of hardware, software, and service-based solutions to tackle backup and recovery. Within the last decade, there has been an increasing movement to include backup and recovery capabilities within operating systems, virtualization solutions, and storage solutions. This movement of turning backup into a feature rather than a secondary solution has only been gaining momentum.

With the hyperconvergence movement, where virtualization, servers, storage, and management are brought together into a single appliance-based solution, backup and disaster recovery are being included as well. Vendors like Scale Computing are providing all of the backup and disaster recovery capabilities you need. Scale Computing even offers their own cloud-based DRaaS as an option.

So today, on the eve of April Fools Day, let’s remember that backup is no joke. Businesses rely on data and it is our job as IT professionals to protect against the loss of that data with backup. Take some time to review your backup plans and find out if you need to be doing more to prevent the next data loss event lurking around the corner.

Is Hyperconvergence Here to Stay?

As virtualization continues to evolve into hyperconvergence, cloud, and other technologies, both IT professionals and analysts are looking into the future to predict the velocity, direction, and longevity of these technologies. Business leaders want their spending decisions to be based on the latest intel and technologies that will carry their business into the future.

Analysts George J. Weiss and Andrew Butler from Gartner have put together their predictions on the future of hyperconvergence in this recent report: Prepare for the Next Phase of Hyperconvergence

Scale Computing has been a leader in hyperconvergence, helping define the current evolution of hyperconvergence with innovative storage architecture and unsurpassed simplicity.  Whatever the future holds for the evolution of hyperconvergence, Scale Computing plans to be at the forefront of hyperconvergence innovation.  We expect to continue delivering the simplicity, scalability, and availability our customers have come to expect from hyperconvergence.

Is Hyperconvergence the Private Cloud You Need?

If you are an IT professional, you are most likely familiar with at least the term “hyperconvergence” or “hyperconverged infrastructure”. You are also undoubtedly aware of cloud technology and some of the options for public, private, and hybrid cloud.  Still, this discussion merits a brief review of private cloud before delving into how hyperconvergence fits into the picture.

What is a Private Cloud?

The basic premise behind cloud technology is an abstraction of the management of VMs from the underlying hardware infrastructure. In a public cloud, the infrastructure is owned and hosted by someone else, making it completely transparent. In a private cloud, you own the infrastructure and still need to manage it, but the cloud management layer simplifies day-to-day operation of VMs compared to traditional virtualization.

Traditional virtualization is complicated by managing hypervisors running on individual virtual hosts and managing storage across those hosts. When managing a single virtual host, VM creation and management is fairly simple. In a private cloud, you still have that underlying infrastructure of multiple hosts, hypervisors, and storage, but the cloud layer provides the simple management experience of a single host spread across the whole data center infrastructure.

Many organizations who are thinking of implementing private cloud are also thinking of implementing public cloud, creating a hybrid cloud consisting of both public and privately hosted resources.  Public cloud offers added benefits for pay-per-use elasticity for seasonal business demands and cloud-based applications for productivity.

Why Not Put Everything in Public Cloud?

Many organizations have sensitive data that they prefer to keep onsite or are required to do so by regulation. Maintaining data onsite can provide greater control and security than keeping it in the hands of a third party. For these organizations, private cloud is preferable to public cloud.

Some organizations require continuous data access for business operations and prefer not to risk interruption due to internet connectivity issues. Maintaining systems and data onsite allows these organizations to have more control over their business operations and maintain productivity. For these organizations, private cloud is preferable to public cloud.

Some organizations prefer the Capex model of private cloud vs. the Opex model of public cloud.  When done well, owning and managing infrastructure can be less expensive than paying someone else for hosting. The costs can be more predictable for onsite implementation, making it easier to budget. Private cloud is preferable for these organizations.

How does Hyperconvergence Fit as a Private Cloud?

For all intents and purposes, hyperconverged infrastructure (HCI) offers an experience as good as or better than a traditional private cloud. You could even go so far as to say it is the next generation of private cloud because it improves on some of the shortcomings of traditional private clouds. Managing VMs in HCI is as simple as in a traditional private cloud, with an even simpler approach to managing the underlying hardware.

HCI is a way of combining the elements of traditional virtualization (servers, storage, and hypervisor) into a single appliance-based solution. With traditional virtualization, you were tasked with integrating these elements from multiple vendors into a working infrastructure, dealing with any incompatibilities, managing through multiple consoles, and so on. HCI is a virtualization solution that has all of these elements pre-integrated into a more or less turnkey appliance. There should be no need to configure storage, install hypervisors on host servers, or manage through more than a single interface.

Not all HCI vendors are equal and some rely on third party hypervisors so there are still elements of multi-vendor management, but true HCI solutions own the whole hardware and virtualization stack, providing the same experience as a private cloud. Users are able to focus on creating and managing VMs rather than worrying about the underlying infrastructure.

With the appliance-based approach, hyperconvergence is even easier to scale out than traditional private clouds or even the cloud-in-a-box solutions that also provide some level of pre-integration. HCI scaling should be as easy as plugging a new appliance node into the network and telling it to join an existing HCI cluster.

HCI is generally more accessible and affordable than traditional private clouds or cloud-in-a-box solutions because it can start small and then scale out without any added complexity. Small to midmarket organizations who experienced sticker shock at the acquisition and implementation costs of private clouds will likely find the cost profile of HCI much more appealing.

Summary

Private cloud is a great idea for any organization whose goals include the control and security of onsite infrastructure and the simplicity of day-to-day VM management. These organizations should look to hyperconverged infrastructure as a private cloud option for achieving those goals, rather than traditional private cloud or cloud-in-a-box options.

Behind the Scenes: Architecting HC3

Like any other solution vendor, at Scale Computing we are often asked what makes our solution unique. In answer to that query, let’s talk about some of the technical foundation and internal architecture of HC3 and our approach to hyperconvergence.

The Whole Enchilada

With HC3, we own the entire software stack which includes storage, virtualization, backup/DR, and management. Owning the stack is important because it means we have no technology barriers based on access to other vendor technologies to develop the solution. This allows us to build the storage system, hypervisor, backup/DR tools, and management tools that work together in the best way possible.

Storage

At the heart of HC3 is our SCRIBE storage management system. This is a complete storage system developed and built in house specifically for use in HC3. Using a storage striping model similar to RAID 10, SCRIBE stripes storage across every disk of every node in a cluster. All storage in the cluster is always part of a single cluster-wide storage pool, requiring no manual configuration. New storage added to the cluster is automatically added to the storage pool. The only aspect of storage that the administrator manages is creation of virtual disks for VMs.
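To picture what RAID 10-style wide striping means in practice, here is a simplified Python sketch. It only illustrates the general idea (each chunk mirrored on two different nodes, chunks spread across the cluster) and is not SCRIBE’s actual placement algorithm.

```python
# Toy illustration of wide striping with mirroring across a cluster.
# Not SCRIBE's real placement logic; it just shows the general pattern:
# every chunk has two copies on two different nodes, and chunks are
# spread round-robin so all disks share the load.

from itertools import cycle

def place_chunks(num_chunks, nodes):
    """Return a (primary_node, mirror_node) pair for each chunk."""
    placements = []
    ring = cycle(range(len(nodes)))
    for _ in range(num_chunks):
        primary = next(ring)
        mirror = (primary + 1) % len(nodes)   # mirror lands on a different node
        placements.append((nodes[primary], nodes[mirror]))
    return placements

for i, (p, m) in enumerate(place_chunks(6, ["node1", "node2", "node3"])):
    print(f"chunk {i}: primary={p}, mirror={m}")
```

In a layout like this, losing any single node still leaves a surviving copy of every chunk, and adding a node simply widens the stripe.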

The ease of use of HC3 storage is not even the best part. What is really worth talking about is how virtual disks for VMs on HC3 access storage blocks from SCRIBE as if they were direct-attached storage consumed on a physical server, with no layered storage protocols. There is no iSCSI, no NFS, no SMB or CIFS, no VMFS, or any other protocol or file system. There is also no need in SCRIBE for the virtual storage appliance (VSA) VMs that are notorious resource hogs. The file system laid down by the guest OS in the VM is the only file system in the stack, because SCRIBE is not a file system; SCRIBE is a block engine. The absence of the storage protocols that would sit between VMs and virtual disks in other virtualization systems means the I/O paths in HC3 are greatly simplified and thus more efficient.

This is only possible because we own both the storage and the hypervisor; no third-party storage layer would have allowed us to achieve this level of efficient integration with the hypervisor, which is why we created our own SCRIBE storage management system.

Hypervisor

Luckily we did not need to completely reinvent virtualization, but were able to base our own HyperCore hypervisor on industry-trusted, open-source KVM. Having complete control over our KVM-based hypervisor not only allowed us to tightly embed the storage with the hypervisor, but also allowed us to implement our own set of hypervisor features to complete the solution.

One of the ways we were able to improve upon standard virtualization features was our thin cloning capability. We took the advantages of linked cloning, a common feature of other hypervisors, and eliminated the disadvantages of the parent/child dependency. Our thin clones are just as efficient as linked clones but are not vulnerable to dependency issues with parent VMs.
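A toy model helps show the difference. In the sketch below (illustrative only, not Scale Computing’s actual implementation), a thin clone copies the parent’s block map, which is cheap metadata, so data blocks are shared until rewritten; a linked clone, by contrast, keeps a live reference to its parent and must traverse the parent chain on reads.

```python
# Toy model of a thin clone: cloning copies the block map (metadata only),
# so data blocks are shared but there is no runtime parent dependency.
# Illustrative only; not Scale Computing's actual implementation.

class ThinDisk:
    def __init__(self, block_map=None):
        self.block_map = dict(block_map or {})  # block number -> data

    def clone(self):
        # Copying the map is cheap; the underlying blocks are shared.
        return ThinDisk(self.block_map)

    def write(self, block, data):
        # Copy-on-write: the clone gets its own block; the parent's is untouched.
        self.block_map[block] = data

    def read(self, block):
        # No parent chain to walk at read time.
        return self.block_map[block]

parent = ThinDisk({0: b"base image"})
clone = parent.clone()
clone.write(0, b"clone's change")
print(parent.read(0))  # b'base image'  -- parent unaffected
print(clone.read(0))   # b"clone's change"
```

Because the clone holds its own complete block map, deleting or corrupting the parent has no effect on it, which is the dependency problem linked clones suffer from.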

Ownership of the hypervisor lets us continue to develop new, more advanced virtualization features and gives us complete control over the management and security of the solution. One of the ways hypervisor ownership has most benefited our HC3 customers is our ability to build in backup and disaster recovery features.

Backup/DR

Even more important than our storage efficiency and development ease, our ownership of the hypervisor and storage allows us to implement a variety of backup and replication capabilities to provide a comprehensive disaster recovery solution built into HC3. Efficient, snapshot-based backup and replication is native to all HC3 VMs and allows us to provide our own hosted DRaaS solution for HC3 customers without requiring any additional software.

Our snapshot-based backup/replication comes with a simple, yet very flexible, scheduling mechanism for intervals as small as every 5 minutes. This provides a very low RPO for DR. We were also able to leverage our thin cloning technology to provide quick and easy failover with an equally efficient change-only restore and failback. We are finding more and more of our customers looking to HC3 to replace their legacy third-party backup and DR solutions.
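To see how interval and retention choices play out, here is a small Python sketch. The tiers below are assumed examples, not an HC3 configuration; the point is how a schedule translates into snapshot counts and a worst-case RPO.

```python
# Illustrative snapshot schedule math; the tiers are assumed examples.

schedule = [
    {"interval_min": 5,    "keep_for_min": 60},         # every 5 min, kept 1 hour
    {"interval_min": 60,   "keep_for_min": 24 * 60},    # hourly, kept 1 day
    {"interval_min": 1440, "keep_for_min": 7 * 1440},   # daily, kept 1 week
]

total = 0
for tier in schedule:
    count = tier["keep_for_min"] // tier["interval_min"]
    total += count
    print(f"every {tier['interval_min']:>4} min -> {count:>2} snapshots retained")

print(f"total snapshots retained: {total}")
# Worst-case data loss is roughly one interval of the most frequent tier,
# plus however long replication to the remote site takes.
print(f"worst-case RPO: ~{schedule[0]['interval_min']} min + replication lag")
```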

Management

By owning the storage, hypervisor, and backup/DR software, HC3 is able to have a single, unified, web-based management interface for the entire stack. All day-to-day management tasks can be performed from this single interface. The only other interface ever needed is a command line accessed directly on each node for initial cluster configuration during deployment.

The ownership and integration of the entire stack allows for a simple view of both physical and virtual objects within an HC3 system and at-a-glance monitoring. Real-time statistics for disk utilization, CPU utilization, RAM utilization, and IOPS allow administrators to quickly identify resource related issues as they are occurring. Setting up backups and replication and performing failover and failback is also built right into the interface.

Summary

Ownership of the entire software stack from the storage to the hypervisor to the features and management allows Scale Computing to fully focus on efficiency and ease of use. We would not be able to have the same levels of streamlined efficiency, automation, and simplicity by trying to integrate third party solutions.

The simplicity, scalability, and availability of HC3 happen because our talented development team has the freedom to reimagine how infrastructure should be done, avoiding inefficiencies found in other vendor solutions that have been dragged along from pre-virtualization technology.

How Important is DR Planning?

Disaster Recovery (DR) is a crucial part of IT architecture but it is often misunderstood, clumsily deployed, and then neglected. It is often unclear whether the implemented DR tools and plan will actually meet SLAs when needed. Unfortunately it often isn’t until a disaster has occurred that an organization realizes that their DR strategy has failed them. Even when organizations are able to successfully muddle through a disaster event, they often discover they never planned for failback to their primary datacenter environment.

Proper planning can ensure success and eliminate uncertainty, beginning before implementation and continuing through testing and validation of the DR strategy, all the way through disaster events. Planning DR involves much more than just identifying workloads to protect and defining backup schedules. A good DR strategy includes tasks such as capacity planning, identifying workload dependencies, defining workload protection methodology and prioritization, defining recovery runbooks, planning user connectivity, defining testing methodologies and testing schedules, and defining a failback plan.

At Scale Computing, we take DR seriously and build in DR capabilities such as backup, replication, failover, and failback to our HC3 hyperconverged infrastructure.  In addition to providing the tools you need in our solution, we also offer our DR Planning Service to help you be completely successful in planning, implementing, and maintaining your DR strategy.

Our DR Planning Service, performed by our expert ScaleCare support engineers, provides a complete disaster recovery run-book as an end-to-end DR plan for your business needs. Whether you have already decided to implement DR at your own DR site or to utilize our ScaleCare Remote Recovery Service in our hosted datacenter, our engineers can help you with all aspects of the DR strategy.

The service also includes the following components:

  • Setup and configuration of clusters for replication
  • Completion of Disaster Recovery Run-Book (disaster recovery plan)
  • Best-practice review
  • Failover and failback demonstration
  • Assistance in facilitating a DR test

You can view a recording of our recent webinar on DR planning here.

Please let us know how we can help you with DR planning on your HC3 system by contacting ScaleCare support at 877-SCALE-59 or support@scalecomputing.com.

HC3 VM File Level Recovery with Video

Many of you have asked us recently about individual file recovery with HC3, and we’ve put together some great resources on how it works. File recovery is often referred to as operational recovery rather than disaster recovery, because the loss of a single file is not necessarily a disaster. Either way, it is an important part of IT operations and a function we are happy to highlight with HC3.

First off, we have a great video demo by our Pontiff of Product Management, Craig Theriac.

Additionally, we have a comprehensive guide for performing file level recovery on HC3 from our expert ScaleCare support team. This document, titled “Windows Recovery ISO”, explains every detail of the process from beginning to end. To summarize briefly, the process involves using a recovery ISO to recover files from a VM clone taken from a known good snapshot. As you can see in the video above, the process can be done very quickly, in just a matter of minutes.

(Click here for full document.)

Full disclosure: We know you’d prefer to have a more integrated process that is built into HC3, and we will certainly be working to improve this functionality with that in mind. Still, I think our team has done a great job providing these new resources and I think you’ll find them very helpful in using HC3 to its fullest capacity. Happy Scaling!

New! – Premium Installation Service

2017 is here. We want to help you start your new year and your new HC3 system with our new ScaleCare Premium Installation service. You’ve probably already heard about how easy HC3 is to install and manage, and you might be asking why you would even need this service. The truth is that you want your install to go seamlessly and to have full working knowledge of your HC3 system right out of the gate, and that is what this service is all about.

First, this premium installation service assists you with every aspect of installation starting with planning, prerequisites, virtual and physical networking configuration, and priority scheduling. You get help even before you unbox your HC3 system to prepare for a worry-free install. The priority scheduling helps you plan your install around your own schedule, which we know can be both busy and complex.

Secondly, ScaleCare Premium Installation includes remote installation with a ScaleCare Technical Support Engineer. This remote install includes a UI overview, setup assistance, and, if applicable, a walkthrough of HC3 Move software for migrating workloads from any physical or virtual servers to HC3. Remote installation means a ScaleCare engineer is with you every step of the way as you install and configure your HC3 system.

Finally, ScaleCare Premium Installation includes deep-dive training on everything HC3 with a dedicated ScaleCare Technical Support Engineer. This training, which normally takes around 4 hours to complete, will make you an HC3 expert on everything from virtualization, networking, and backup/DR to our patented SCRIBE storage system. You’ll basically have a PhD in HC3 by the time the install is done.

Here is the list of everything included:

  • Requirements and Planning Pre-Installation Call
  • Virtual and Physical Networking Planning and Deployment Assistance
  • Priority Scheduling for Installations
  • Remote Installation with a ScaleCare Technical Support Engineer
  • UI Overview and Setup Assistance
  • Walkthrough of HC3 Move software for migrations to HC3 of a Windows physical or virtual server
  • Training with a dedicated ScaleCare Technical Support Engineer
    • HC3 and Scribe Overview
    • HC3 Configuration Deep Dive
    • Virtualization Best Practices
    • Networking Best Practices
    • Backup / DR Best Practices

Yes, it is still just as easy to use and simple to deploy as ever, but giving yourself a head start in mastering this technology seems like a no-brainer. To find out more about how to get ScaleCare Premium Installation added to your HC3 order, contact your Scale Computing representative. We look forward to providing you with this service!

Scale Computing – A Year in Review 2016

It’s that time of the year again. December is winding to a close and the new year is almost here. Let me first say that we here at Scale Computing hope 2016 was a positive year for you and we want to wish you a wonderful 2017. Now, though, I’d like to reflect back on 2016 and why it has been such an outstanding year for Scale Computing.

“And the award goes to…”

Scale Computing was recognized a number of times this year for technology innovation and great products and solutions, particularly in the midmarket. We won awards at both the Midsize Enterprise Summit and the Midmarket CIO Forum, including Best in Show and Best Midmarket Strategy. Most recently, Scale Computing was honored with an Editor’s Choice Award by Virtualization Review as one of the most-liked products of the year. You can read more about our many awards in 2016 in this press release.

Scenes from the 2016 Midsize Enterprise Summit


News Flash!

2016 was the year Scale Computing finally introduced flash storage into our hyperconverged appliances. Flash storage has been around for a while now, but the big news was in how we integrated it into the virtualization infrastructure. We didn’t use any clunky VSA models with resource-hogging virtual appliances. We didn’t implement it as a cache to make up for inefficient storage architecture. We implemented flash storage as a full storage tier embedded directly into the hypervisor. We eliminated all the unnecessary storage protocols that slow down other flash implementations. In short, we did it the right way. Oh, and we delivered it with our own intelligent automated tiering engine called HEAT. You can read more about it here in case you missed it.
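Conceptually (and this is only a rough sketch, not the actual HEAT algorithm), heat-based tiering boils down to tracking how often each block is accessed, aging those counts over time, and keeping the hottest blocks on the flash tier:

```python
# Conceptual sketch of heat-based tiering; not the real HEAT engine.

from collections import Counter

class TieringSketch:
    def __init__(self, ssd_capacity_blocks):
        self.ssd_capacity = ssd_capacity_blocks
        self.heat = Counter()               # block number -> access count

    def record_access(self, block):
        self.heat[block] += 1

    def decay(self, factor=0.5):
        # Periodically age the counters so the map reflects *recent* heat.
        for block in self.heat:
            self.heat[block] *= factor

    def flash_resident(self):
        # The hottest blocks, up to SSD capacity, live on the flash tier.
        return {block for block, _ in self.heat.most_common(self.ssd_capacity)}

engine = TieringSketch(ssd_capacity_blocks=2)
for block in [1, 1, 1, 2, 2, 3]:
    engine.record_access(block)
print(engine.flash_resident())  # {1, 2}: the hottest blocks stay on flash
```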

Newer, Stronger, Faster

When we introduced the new flash storage in HC3, we introduced three new HC3 appliance models, the HC1150, HC2150, and HC4150, significantly increasing speed and capacity in the HC3 family. We also introduced the new HC1100 appliance to replace the older HC1000 model, nearly doubling resource capacity over the HC1000. Finally, we recently announced the preview of our new HC1150D, which doubles the compute of the HC1150 and introduces higher capacity with support for 8TB drives. We know your resource and capacity needs grow over time, and we’ll keep improving HC3 to stay ahead of the game. Look for more exciting announcements along these lines in 2017.

Going Solo

In 2016, hyperconvergence with Scale Computing HC3 was opened up to all sorts of new possibilities including the new Single Node Appliance Configuration. Where before you needed at least three nodes in an HC3 cluster, now you can go with the SNAC-size HC3. (Yes, I am in marketing and I make up this corny stuff). The Single Node allows extremely cost effective configurations that include distributed enterprise (small remote/branch offices), backup/disaster recovery, or just the small “s” businesses in the SMB. Read more about the possibilities here.


Cloud-based DR? Check

2016 was also the year Scale Computing rolled out a cloud-based disaster recovery as a service (DRaaS) offering called ScaleCare Remote Recovery Service. This is an exclusive DR solution for HC3 customers who want to protect their HC3 workloads in a secure, hosted facility for backup and disaster recovery. With per-month billing, this service is perfect for organizations that can’t or don’t want to host DR in their own facilities and who value added services like assisted recovery and DR testing. Read more about this DRaaS solution here.

Better Together

2016 has been an amazing year for technology partnerships at Scale Computing. You may have seen some of the various announcements we’ve made over the past year. These include Workspot, with whom we’ve partnered for an amazingly simple VDI solution; Information Builders, with whom we partnered for a business intelligence and analytics appliance; Brocade, whose Strategic Collaboration Program we recently joined to expand the reach of hyperconvergence and HC3; and more. We even achieved Citrix Ready certification this year. Keep an eye out for more announcements to come as we identify more great solutions to offer you.

The Doctor is In

It wouldn’t be much of a new year celebration without a little tooting of my own horn, so I thought I’d mention that 2016 was the year I personally joined Scale Computing, along with many other new faces. Scale Computing has been growing this year. I haven’t properly introduced myself in a blog yet, so here it goes. My name is David Paquette, Product Marketing Manager at Scale Computing, and they call me Doctor P around here (or Dr. P for short). It has been a fantastic year for me, having joined such a great organization, and I am looking forward to an amazing 2017. Keep checking our blog for my latest posts.

Just me, Dr. P


Sincerely, from all of us at Scale Computing, thank you so much for all of the support over the past year. We look forward to another big year of announcements and releases to come. Of course, these were just some of the 2016 highlights, so feel free to look back through the various blog posts and press releases for all of the 2016 news.

Happy New Year!