
Why HC3 IT Infrastructure Might Not Be For You

Scale Computing makes HC3 hyperconverged infrastructure appliances and clusters for IT organizations around the world, with a focus on simplicity, scalability, and availability. But the HC3 IT infrastructure solution might not be for you, for a few reasons.

  • You want to be indispensable for your proprietary knowledge.

You want to be the only person who truly understands your IT Infrastructure. Having designed your infrastructure personally and managing it with your own home-grown scripts, only you have the knowledge and expertise to keep it running. Without you, your IT department is doomed to fail.

HC3 is probably not for you. HC3 was designed to be so simple to use that it can be managed by even a novice IT administrator. HC3 would not allow you to control the infrastructure with proprietary design and secret knowledge that only you could possess. Of course, if you did go with HC3, you’d be a pioneer of new technology who would be an ideal asset for any forward thinking IT department.

  • You are defined by your aging certifications.

You worked hard and paid good money to get certifications in storage systems, virtualization hypervisors, server hardware, and even disaster recovery systems that are still around. You continue to use these same old technologies because you are certified in them, and that gives you leverage for higher salary. Newer technologies hold less interest because they wouldn’t allow you to take advantage of your existing certifications.

HC3 is probably not for you. HC3 is based on new infrastructure architecture that doesn’t require any expensive certifications. Any IT administrator can use HC3 because it was designed to remove reliance on legacy technologies that were too complex and required excessive expertise. HC3 won’t allow you to leverage your certifications in these legacy technologies. Of course, with all of the management time you’d save using HC3, you’d be able to learn new technologies and expand your skills beyond infrastructure.

  • You like going to VMworld every year.

You’ve been using VMware and going to VMworld since 2006 and it is a highlight of your year. You always enjoy reuniting with VMworld regulars and getting out of the office. It isn’t as useful as it was earlier on but you still attend a few sessions along with all of the awesome parties. Life just wouldn’t be the same without attending VMworld.

HC3 is probably not for you. HC3 uses a built-in hypervisor, alleviating the need for VMware software and VMware software licensing. Without VMware, you probably won’t be able to justify your trip to VMworld as a business expense. Of course, with all the money you will likely save going with HC3, your budget might be open to going to even more conferences to help you develop new skills and services to help your business grow even faster.

  • You prefer working late nights and weekends.

The office, or better yet the data center, is a safe place for you. Whether you don’t have the best home life or you prefer to avoid awkward social events, you find working late nights and weekends on system updates and maintenance a welcome prospect. We get it. Real life can be hard. Solitude, along with the humming of fans and spinning disks, offers an escape from the real world.

HC3 is probably not for you. HC3 is built to eliminate the need to take systems offline for updates and maintenance tasks, so these can be done at any time, including during normal business hours. HC3 doesn’t leave many infrastructure tasks that need to be done late at night or on weekends. Of course, if you did go with HC3, you’d probably have more time and energy to sort out your personal life and make your home and social life more to your liking.

Summary

HC3 may not be for everyone. When change is difficult to embrace, many choose to stick with the way it has always been done. For others, however, emerging technologies like HC3 are a way to constantly evolve with architecture that lowers costs with simplicity, scalability, and availability for modern IT.

Backup is No Joke

Today is World Backup Day and a reminder to everyone of how important it is to back up your data. Why today? What better day than the one before April Fools’ Day to remember to be prepared for anything? You don’t want to be the fool who didn’t have a solid backup plan.

But what is a backup? Backing up business-critical data is more complex than many people realize, which may be why backup and disaster recovery plans fall apart in the hour of need. Let’s start with the basic definition: a backup is a second copy of your data you keep in case your primary data is lost or corrupted. Pretty simple. Unfortunately, that basic concept is not nearly enough to implement an effective backup strategy. You need some additional considerations.

  1. Location – Where is your backup data stored? Is it on the same physical machine as your primary data? Is it in the same building? The closer your backup is to the primary data, the more chance your backup will suffer the same fate as your primary data. The best option is to have your backup offsite, physically removed from localized events that might cause data loss.
  2. Recovery Point Objective – If you needed to recover from your backup, how much recent data would you lose? Was your last backup taken an hour ago, a day ago, or a week ago? How much potential revenue could be lost along with the data you can’t recover? Taking backups as frequently as possible is the best way to prevent data loss.
  3. Recovery Time Objective – How long will it take to recover your data? If you are taking backups every hour but it takes you several hours or longer to recover from a backup, was the hourly backup effective? Recovery time is as important as recovery point. Have a plan for rapid recovery.
  4. System Backup – For a long time, backups only captured user and application data. Recovery was painful because the OS and applications needed to be rebuilt before restoring the data. These days, entire servers are usually what is backed up, increasing recovery speed.
  5. Multiple Points in Time – Early on, many learned the hard way that keeping one backup is not enough. Multiple backups from different points in time were required for a number of reasons. Sometimes backups failed, sometimes data needed to be recovered from further back in time, and for some businesses, backups need to be kept for years for compliance. The more backups, the more points in time that data can be recovered from.
  6. Backup Storage – One of the greatest challenges to backup over the decades has been storage. Keeping multiple copies of your data quickly starts consuming multiples of storage space. It just isn’t economical to require 10x or more of the storage of your primary data for backup. Incremental backups, compression, and deduplication have helped, but backups still take lots of space. Calculating the storage requirements for your backup needs is essential; a rough back-of-the-envelope sketch follows this list.
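
To make that storage math (and the recovery point question above) concrete, here is a minimal back-of-the-envelope sketch in Python. Every input value is a hypothetical placeholder; plug in your own data size, change rate, backup interval, and retention.

    # Rough backup capacity and RPO estimate. All inputs are hypothetical.
    primary_data_tb = 4.0        # size of the primary data set (TB)
    daily_change_rate = 0.05     # fraction of data that changes per day
    backup_interval_hours = 1    # how often a backup is taken (drives RPO)
    retention_days = 30          # how long backups are kept
    dedup_compress_ratio = 0.5   # space remaining after dedup/compression

    # Worst-case data loss window (RPO) is simply the backup interval.
    rpo_hours = backup_interval_hours

    # One full copy plus incremental changes kept for the retention window.
    raw_backup_tb = primary_data_tb + primary_data_tb * daily_change_rate * retention_days
    stored_backup_tb = raw_backup_tb * dedup_compress_ratio

    print(f"RPO (worst-case data loss): {rpo_hours} hour(s)")
    print(f"Estimated backup storage: {stored_backup_tb:.1f} TB "
          f"({stored_backup_tb / primary_data_tb:.1f}x primary)")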

Are snapshots backups? Sort of, but not really. Snapshots do provide recovery capabilities within a local system, but generally go down with the ship in any kind of real disaster. That being said, many backup solutions are designed around snapshots and use snapshots to create a real backup by copying the snapshot to an offsite location. These replicated snapshots are indeed backups that can be used for recovery just like any other form of backup.
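
As a loose illustration of that distinction, the sketch below only treats a snapshot as a backup once a copy of it exists at a second site. The class and field names are hypothetical, not any vendor’s API.

    from dataclasses import dataclass, field

    @dataclass
    class Snapshot:
        vm_name: str
        taken_at: str                                   # timestamp of the snapshot
        sites: set = field(default_factory=lambda: {"local"})

        def replicate(self, target_site: str) -> None:
            # Copying the snapshot to another site is what turns it into a
            # real backup rather than just a local restore point.
            self.sites.add(target_site)

        def is_backup(self) -> bool:
            # A snapshot that exists only alongside the primary data goes
            # down with the ship in a site-level disaster.
            return any(site != "local" for site in self.sites)

    snap = Snapshot("erp-server", "2017-03-31T02:00")
    print(snap.is_backup())    # False: local-only snapshot
    snap.replicate("dr-site")
    print(snap.is_backup())    # True: an offsite copy now exists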

Over the decades, there have been a variety of hardware, software, and service-based solutions to tackle backup and recovery. Within the last decade, there has been an increasing movement to include backup and recovery capabilities within operating systems, virtualization solutions, and storage solutions. This movement of turning backup into a feature rather than a secondary solution has only been gaining momentum.

With the hyperconvergence movement, where virtualization, servers, storage, and management are brought together into a single appliance-based solution, backup and disaster recovery are being included as well. Vendors like Scale Computing are providing all of the backup and disaster recovery capabilities you need. Scale Computing even offers their own cloud-based DRaaS as an option.

So today, on the eve of April Fools Day, let’s remember that backup is no joke. Businesses rely on data and it is our job as IT professionals to protect against the loss of that data with backup. Take some time to review your backup plans and find out if you need to be doing more to prevent the next data loss event lurking around the corner.

Is Hyperconvergence Here to Stay?

As virtualization continues to evolve into hyperconvergence, cloud, and other technologies, both IT professionals and analysts are looking into the future to predict the velocity, direction, and longevity of these technologies. Business leaders want their spending decisions to be based on the latest intel and technologies that will carry their business into the future.

Analysts George J. Weiss and Andrew Butler from Gartner have put together their predictions on the future of hyperconvergence in this recent report: Prepare for the Next Phase of Hyperconvergence

Scale Computing has been a leader in hyperconvergence, helping define the current evolution of hyperconvergence with innovative storage architecture and unsurpassed simplicity.  Whatever the future holds for the evolution of hyperconvergence, Scale Computing plans to be at the forefront of hyperconvergence innovation.  We expect to continue delivering the simplicity, scalability, and availability our customers have come to expect from hyperconvergence.

One Customer’s Experience With Scale Computing

At Scale Computing, we do our best not only to build the best solutions for our customers, but also to explain why our solutions really are the best to those still deciding on a solution. In reality, no one can explain it as well as one of our actual customers.

This week we have the opportunity to share the Scale Computing experience of Nathan Beam of Bridgetree in his own words, on his own blog. Here is the link:

Simply Hyper-converged – An Overview of Scale Computing’s Easy-To-Use HC3 Virtualization Platform

Just to pull a quick quote: “My own experience and pretty much that of every other customer testifies to the fact that we all love our product. I searched long and hard trying to find unhappy owners of HC3 equipment… to this day I still don’t know if any exist.”

We look forward to sharing more of our user experiences with you in the future. If you are another HC3 user who wants to share your story here, contact me: dpaquette@scalecomputing.com. To see some of our other customer success stories, check out our case studies. For additional customer reviews, check out our page on the Spiceworks Community.

4 Lessons from the AWS Outage Last Week

The Amazon Web Services (AWS) Simple Storage Service (S3) experienced an outage on Tuesday last week and was down for several hours. S3 is object storage for around 150,000 websites and other services according to SimilarTech. For IT professionals, here are four takeaways from this outage.

#1 – It Happens

No infrastructure is immune to outages. No matter how big the provider, outages happen and downtime occurs. Whether you are hosting infrastructure yourself or relying on a third party, outages will happen eventually. Putting your eggs in someone else’s basket does not necessarily buy you any more peace of mind. In this case, S3 was brought down by a simple typo from a single individual. That is all it takes to cause so much disruption. The premiums you pay to be hosted on a massive infrastructure like AWS will never prevent the inevitable failures, no matter how massive any platform becomes.

#2 – The Bigger They Are, the Harder They Fall

When a service is as massive as AWS, problems affect millions of users, including customers trying to do business with companies that rely on S3. Yes, outages do happen, but do they have to take down so much of the internet with them when they do? Like the DDoS attack I blogged about last fall, companies leave themselves open to these massive outages when they rely heavily on public cloud services. How much more confidence would your customers have in your business if they heard about a massive outage on the news but knew that your systems were unaffected?

#3 – It’s No Use Being an Armchair Quarterback

When an outage occurs with your third party provider, you call, you monitor, and you wait. You hear about what is happening and all you can do is shake your fist in the air knowing that you probably could have done better to either prevent the issue or resolve it more quickly if you were in control. But you aren’t in any position to do anything because you are reliant on the hoster. You have no option but to simply accept the outage and try to make up for the loss to your business. You gave up your ability to fix the problem when you gave that responsibility to someone else.

Just two weeks ago, I blogged about private cloud and why some organizations feel they can’t rely on hosted solutions because of any number of failures they would have no control over. If you need control of your solution to mitigate risk, you can’t also give that control to a third party.

#4 – Have a Plan

Cloud services are a part of IT these days and most companies are already doing some form of hybrid cloud with some services hosted locally and some hosted in the cloud. Cloud-based applications like Salesforce, Office365, and Google Docs have millions of users. It is inevitable that some of your services will be cloud-based, but they don’t all have to be. There are plenty of solutions like hyperconverged infrastructure to host many services locally with the simplicity of cloud infrastructure. When outages at cloud providers occur, make sure you have sufficient infrastructure in place locally so that you can do more than just be an armchair quarterback.

Summary

Public cloud services may be part of your playbook but they don’t have to be your endgame. Take control of your data center and have the ability to navigate your business through outages without being at the mercy of third party providers. Have a plan, have an infrastructure, and be ready for the next time the internet breaks.

When Cloud is Not what You Signed Up For

The AWS S3 outage on Tuesday confirmed the worst fears of many that bigger is not better. Roughly 150,000 websites and other services were down for three hours because of an internal issue at S3. What we saw yet again was that a massive service like S3 proved to be no more reliable than private data centers happily achieving five nines.

The real issue here is not that there was an outage. The outage was unfortunately just an inevitability that proves no infrastructure is invulnerable. No, the real issue is the perception that a cloud service like AWS can be made too big to fail. Instead, what we saw was that the bigger they are, the harder they fall.

Now, I like public cloud services and I use them often.  In fact, I used Google Docs to type a draft of this very blog post. However, would I trust my business critical data to public cloud? Probably not. Maybe I am old fashioned but I have had enough issues with outages of either internet services or cloud services to make me a believer in investing in private infrastructure.

The thing about public cloud is that it offers simplicity. Just log in and manage VMs or applications without ever having to worry about a hard drive failure or a power supply going wonky. That simplicity comes at a premium, sold with the idea that you will save money by paying only for what you use, without the over-provisioning you would expect when buying your own gear. That seems like wishful thinking to me, because in my experience, managing costs with cloud computing is tricky, and it can be a full-time job to make sure you aren’t spending more than you intend.

Doesn’t private infrastructure cost even more to manage? You have to buy servers, storage, hypervisors, management solutions, and backup/DR, right? Not anymore. Hyperconverged infrastructure (HCI) delivers infrastructure that is pre-integrated and so easy to manage that the experience of using it is the same as using cloud. In fact, just last week I talked about how it really is a private cloud solution.

What is the benefit of owning your own infrastructure? First, control. You get to control your fate, with the ability to better plan for and respond to disaster and failure, mitigating risk to your level of satisfaction. No one wants to be sitting on their hands, waiting, while their cloud provider is supposedly working hard to fix the outage. Second, cost. Costs are more predictable with HCI, and there is less over-provisioning than with traditional virtualization solutions. There are also no ongoing monthly premiums paid to a third party who is supposed to be eliminating the risk of downtime.

Cloud just isn’t the indestructible castle in the sky that we were meant to believe it was. Nothing is, but with HCI, you get your own castle and you get to rule it the way you see fit. You won’t be stuck waiting to see if all the king’s horses and all the king’s men can put Humpty back together again.

Is Hyperconvergence the Private Cloud You Need?

If you are an IT professional, you are most likely familiar with at least the term “hyperconvergence” or “hyperconverged infrastructure”. You are also undoubtedly aware of cloud technology and some of the options for public, private, and hybrid cloud.  Still, this discussion merits a brief review of private cloud before delving into how hyperconvergence fits into the picture.

What is a Private Cloud?

The basic premise behind cloud technology is an abstraction of the management of VMs from the underlying hardware infrastructure. In a public cloud, the infrastructure is owned and hosted by someone else, making it completely transparent. In a private cloud, you own the infrastructure and still need to manage it, but the cloud management layer simplifies day-to-day operation of VMs compared to traditional virtualization.

Traditional virtualization is complicated by managing hypervisors running on individual virtual hosts and managing storage across hosts. When managing a single virtual host, VM creation and management is fairly simple. In a private cloud, you still have that underlying infrastructure of multiple hosts, hypervisors, and storage, but the cloud layer provides the same simple management experience as a single host, spread across the whole data center infrastructure.
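
One way to picture that abstraction: the management layer exposes the same “create a VM” request whether there is one host or twenty, and placement across hosts happens behind the scenes. The sketch below is a hypothetical illustration of the idea, not any particular cloud platform’s API.

    # Hypothetical sketch: a cloud-style management layer hides host
    # placement behind one simple VM-creation call.
    class PrivateCloud:
        def __init__(self, hosts):
            self.hosts = hosts          # host name -> free RAM in GB
            self.vms = {}

        def create_vm(self, name, ram_gb):
            # The admin only states what the VM needs; choosing a host is
            # the part that traditional multi-host virtualization leaves
            # to the administrator.
            host = max(self.hosts, key=self.hosts.get)
            if self.hosts[host] < ram_gb:
                raise RuntimeError("No host has enough free RAM")
            self.hosts[host] -= ram_gb
            self.vms[name] = host
            return host

    cloud = PrivateCloud({"host-1": 64, "host-2": 96, "host-3": 48})
    print(cloud.create_vm("web-01", 16))   # placed automatically on host-2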

Many organizations who are thinking of implementing private cloud are also thinking of implementing public cloud, creating a hybrid cloud consisting of both public and privately hosted resources. Public cloud offers added benefits such as pay-per-use elasticity for seasonal business demands and cloud-based productivity applications.

Why Not Put Everything in Public Cloud?

Many organizations have sensitive data that they prefer to keep onsite or are required to do so by regulation. Maintaining data onsite can provide greater control and security than keeping it in the hands of a third party. For these organizations, private cloud is preferable to public cloud.

Some organizations require continuous data access for business operations and prefer not to risk interruption due to internet connectivity issues. Maintaining systems and data onsite allows these organizations to have more control over their business operations and maintain productivity. For these organizations, private cloud is preferable to public cloud.

Some organizations prefer the Capex model of private cloud vs. the Opex model of public cloud.  When done well, owning and managing infrastructure can be less expensive than paying someone else for hosting. The costs can be more predictable for onsite implementation, making it easier to budget. Private cloud is preferable for these organizations.
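
For the budgeting point, even a rough comparison like the one below can show which model fits; every dollar figure here is an invented placeholder, not real pricing for any product or cloud.

    # Hypothetical 3-year comparison of owned (CapEx) vs. hosted (OpEx) costs.
    years = 3

    capex_purchase = 60_000        # buy the infrastructure up front
    capex_annual_support = 6_000   # annual support/maintenance
    capex_total = capex_purchase + capex_annual_support * years

    opex_monthly = 2_500           # monthly hosting/consumption bill
    opex_total = opex_monthly * 12 * years

    print(f"CapEx total over {years} years: ${capex_total:,}")
    print(f"OpEx total over {years} years:  ${opex_total:,}")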

How does Hyperconvergence Fit as a Private Cloud?

For all intents and purposes, hyperconverged infrastructure (HCI) offers the same experience as a traditional private cloud, or a better one. You could even go so far as to say it is the next generation of private cloud, because it improves on some of the shortcomings of traditional private clouds. Managing VMs in HCI is as simple as in a traditional private cloud, and HCI brings an even simpler approach to managing the underlying hardware.

HCI combines the elements of traditional virtualization (servers, storage, and hypervisor) into a single appliance-based solution. With traditional virtualization, you were tasked with integrating these elements from multiple vendors into a working infrastructure, dealing with any incompatibilities, and managing it through multiple consoles. HCI is a virtualization solution that has all of these elements pre-integrated into a more or less turnkey appliance. There should be no need to configure any storage, install and configure a hypervisor on host servers, or manage through more than a single interface.

Not all HCI vendors are equal and some rely on third party hypervisors so there are still elements of multi-vendor management, but true HCI solutions own the whole hardware and virtualization stack, providing the same experience as a private cloud. Users are able to focus on creating and managing VMs rather than worrying about the underlying infrastructure.

With the appliance-based approach, hyperconvergence is even easier to scale out than traditional private clouds or even the cloud-in-a-box solutions that also provide some level of pre-integration. HCI scalability should be as easy as plugging a new appliance node into the network and telling it to join an existing HCI cluster of appliance nodes.
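
That scale-out flow reduces to something like the pseudocode below: plug in the node, tell it to join, and its capacity becomes part of the shared pool. The class and function names are illustrative, not an actual product API.

    # Illustrative sketch of appliance-style scale-out: joining a node
    # simply grows the cluster-wide resource pool.
    class HciCluster:
        def __init__(self):
            self.nodes = []

        def join(self, node):
            # No LUN carving, no hypervisor install, no separate storage
            # configuration; the node's capacity is absorbed into the pool.
            self.nodes.append(node)

        def total_storage_tb(self):
            return sum(n["storage_tb"] for n in self.nodes)

    cluster = HciCluster()
    for i in range(3):
        cluster.join({"name": f"node-{i + 1}", "storage_tb": 12})
    print(cluster.total_storage_tb())   # 36 TB pooled across 3 nodes

    cluster.join({"name": "node-4", "storage_tb": 24})   # scaling out later
    print(cluster.total_storage_tb())   # 60 TB, no reconfiguration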

HCI is generally more accessible and affordable than traditional private clouds or cloud-in-a-box solutions because it can start and then scale out from very small implementations without any added complexity. Small to midmarket organizations who experienced sticker shock at the acquisition and implementation costs of private clouds will likely find the costs and cost benefits of HCI much more appealing.  

Summary

Private cloud is a great idea for any organization whose goals include the control and security of onsite infrastructure and simplicity of day-to-day VM management. These organizations should be looking to hyperconverged infrastructure as a private cloud option to achieve those goals vs traditional private cloud or cloud-in-a-box options.

5 Things to Think about with Hyperconverged Infrastructure

1. Simplicity

A hyperconverged infrastructure (HCI) should take no more than 30 minutes to go from out of the box to creating VMs. Likewise, an HCI should not require the systems admin to be a VCP, a CCNE, and a SNIA-certified storage administrator to manage it effectively. Any properly designed HCI should be able to be administered by an average Windows admin with almost no additional training. It should be so easy that even a four-year-old could use it…

2. VSA vs. HES

In many cases, rather than handing block-level storage with SAN-like flexibility directly to production VMs, HCI vendors simply virtualize a SAN controller into each node of their architecture and pull the legacy SAN and storage protocols up into the servers as a separate VM. This creates I/O path loops, with each I/O having to pass multiple times through VMs on the local node and on adjacent nodes. These storage controller VMs (sometimes called VSAs, or virtual storage appliances) consume so much CPU and RAM that they redefine inefficient, especially in the mid-market. In one case I can think of, the VSA running on each server (or node) in a vendor’s architecture begins its RAM consumption at 16GB and 8 vCores per node, then grows from there based on feature use, I/O load, and maintenance activity. With a different vendor, the VSA reserves around 50GB of RAM per node on their entry-level offering and over 100GB of RAM per node on their most common platform, meaning a 3-node cluster reserves over 300GB of RAM just for I/O path overhead. An average SMB to mid-market customer could run their entire operation in just the CPU and RAM these VSAs consume.
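
The overhead arithmetic is easy to reproduce. The per-node figures below are the anecdotal ones quoted above, used purely as illustrative inputs:

    # Rough cluster-wide cost of running a storage controller VM (VSA) on
    # every node. Per-node figures are the anecdotal ones quoted above.
    nodes = 3
    vsa_ram_gb_per_node = 100   # RAM reserved by the VSA on each node
    vsa_vcores_per_node = 8     # vCPU cores reserved by the VSA on each node

    total_ram_gb = nodes * vsa_ram_gb_per_node
    total_vcores = nodes * vsa_vcores_per_node

    print(f"{nodes}-node cluster: {total_ram_gb} GB RAM and {total_vcores} "
          f"vCores consumed before a single workload VM is running")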

There is a better alternative: the HES approach. It eliminates the dedicated servers, storage protocol overhead, resource consumption, multi-layer object files, filesystem nesting, and associated gear by moving the hypervisor directly into the OS of a clustered platform as a set of kernel modules, with the block-level storage function residing alongside the kernel in userspace. This completely eliminates the SAN and its storage protocols, rather than just virtualizing them and replicating copies of them on every node in the platform. The result is a dramatically simpler architecture that regains the efficiency originally promised by virtualization.

3. Stack Owners vs. Stack Dependents

A proper HCI should not be dependent on another company’s stack for its code. To be efficient, self-aware, self-healing, and self-load-balancing, the architecture needs to be implemented holistically rather than pieced together from different bits from different vendors. By owning the stack, an HCI vendor is able to do things that weren’t feasible or realistic with legacy virtualization approaches: hot and rolling firmware updates at every level, 100% tested firmware against customer configurations, 100% backwards and forwards compatibility between different hardware platforms – the list goes on for quite a while.

4. Using Flash Properly Instead of as a Buffer

Several HCI vendors use SSD and flash only (or almost only) as a cache buffer to hide the very slow I/O paths they have chosen to build on VSAs and erasure coding (formerly known as software RAID 5/6/X) sitting between virtual machines and their underlying disks. The result is a Rube Goldberg machine of an I/O path, one that consumes 4 to 10 disk I/Os or more for every I/O the VM needs done. A better approach uses flash and SSD as proper tiers, with AI-based heat mapping and a QoS-like mechanism that automatically puts the right workloads in the right place at the right time, moves workloads fluidly between tiers, and dynamically allocates flash on the fly to workloads that demand it, up to placing an entire workload in flash. Any architecture that requires flash just to function at an acceptable speed has clearly not been architected efficiently. If turning off the flash layer results in I/O speeds best described as glacial, then the vendor is hardly being efficient in its use of flash or solid state. Flash is not meant to be the curtain that hides the efficiency issues of the solution.
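
Turning that amplification claim into numbers makes the point clearer. A minimal sketch, using a factor from the 4-10x range mentioned above as a hypothetical input:

    # Effect of I/O amplification on backend disk load (hypothetical inputs).
    vm_iops_needed = 5_000   # I/O the VMs actually need done
    amplification = 6        # backend I/Os per VM I/O (within the 4-10x range)

    backend_iops = vm_iops_needed * amplification
    print(f"{vm_iops_needed:,} VM IOPS become {backend_iops:,} backend IOPS")
    # Without a large flash cache in front, a spinning-disk tier cannot absorb
    # that load, which is why such architectures lean on flash as a buffer.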

5. Future Proofing Against the “Refresh All, Every 5 Years” Spiral

Proper HCI implements self-aware, bi-directional live migration across dissimilar hardware. This means the administrator is not boat-anchored to the technology available at the point of acquisition; instead, they can avoid over-buying on the front end and take full advantage of Moore’s law and technical advances as they arrive and the need arises. As lower-latency, higher-performance technology comes to the masses, attaching it to an efficient software stack is crucial to eliminating the “throw away and start over” refresh cycle every few years.

*Bonus* 6. Price 

Hyperconvergence shouldn’t come at a 1600+% price premium over the cost of the hardware it runs on. Hyperconvergence should be affordable – far more so than the legacy approach was or the VSA-based approach is.

These are just a few points to keep in mind as you investigate which hyperconverged platform is right for your needs.

Behind the Scenes: Architecting HC3

Like any other solution vendor, at Scale Computing we are often asked what makes our solution unique. In answer to that query, let’s talk about some of the technical foundation and internal architecture of HC3 and our approach to hyperconvergence.

The Whole Enchilada

With HC3, we own the entire software stack which includes storage, virtualization, backup/DR, and management. Owning the stack is important because it means we have no technology barriers based on access to other vendor technologies to develop the solution. This allows us to build the storage system, hypervisor, backup/DR tools, and management tools that work together in the best way possible.

Storage

At the heart of HC3 is our SCRIBE storage management system. This is a complete storage system developed and built in house specifically for use in HC3. Using a storage striping model similar to RAID 10, SCRIBE stripes storage across every disk of every node in a cluster. All storage in the cluster is always part of a single cluster-wide storage pool, requiring no manual configuration. New storage added to the cluster is automatically added to the storage pool. The only aspect of storage that the administrator manages is creation of virtual disks for VMs.
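
A rough way to picture that RAID 10-like wide striping: every block lands on two different disks on two different nodes, and any new disk simply becomes another placement target. This is a conceptual sketch, not the actual SCRIBE placement algorithm.

    import random

    # Conceptual wide-striping sketch: each block is mirrored across two
    # disks on two *different* nodes so a node failure loses no data.
    disks = [(node, disk) for node in range(1, 4) for disk in range(1, 5)]  # 3 nodes x 4 disks

    def place_block():
        primary = random.choice(disks)
        mirror = random.choice([d for d in disks if d[0] != primary[0]])
        return primary, mirror

    for block_id in range(3):
        p, m = place_block()
        print(f"block {block_id}: node{p[0]}/disk{p[1]} mirrored to node{m[0]}/disk{m[1]}")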

The ease of use of HC3 storage is not even the best part. What is really worth talking about is how the virtual disks for VMs on HC3 access storage blocks from SCRIBE as if they were direct-attached storage on a physical server, with no layered storage protocols. There is no iSCSI, no NFS, no SMB or CIFS, no VMFS, or any other protocol or file system. There is also no need in SCRIBE for any virtual storage appliance (VSA) VMs, which are notorious resource hogs. The file system laid down by the guest OS in the VM is the only file system in the stack, because SCRIBE is not a file system; SCRIBE is a block engine. The absence of these storage protocols, which would sit between VMs and virtual disks in other virtualization systems, means the I/O paths in HC3 are greatly simplified and thus more efficient.

This is why owning both the storage and the hypervisor matters: had we not built our own SCRIBE storage management system, no existing storage layer would have allowed us to achieve this level of efficient integration with the hypervisor.

Hypervisor

Luckily we did not need to completely reinvent virtualization, but were able to base our own HyperCore hypervisor on industry-trusted, open-source KVM. Having complete control over our KVM-based hypervisor not only allowed us to tightly embed the storage with the hypervisor, but also allowed us to implement our own set of hypervisor features to complete the solution.

One of the ways we were able to improve on existing virtualization features is our thin cloning capability. We took the advantages of linked cloning, a common feature in other hypervisors, and eliminated the disadvantages of the parent/child dependency. Our thin clones are just as efficient as linked clones but are not vulnerable to issues of dependency on parent VMs.
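
A simplified way to see the difference: a linked clone keeps depending on its parent’s disk files, while a thin clone shares blocks by reference and stays intact even if the original VM is deleted. The sketch below is conceptual, not the HyperCore implementation.

    # Conceptual sketch of thin cloning via shared, reference-counted blocks.
    refcount = {}   # block id -> number of VMs referencing that block

    def create_vm(blocks):
        for b in blocks:
            refcount[b] = refcount.get(b, 0) + 1
        return list(blocks)

    def thin_clone(vm_blocks):
        # Share the same blocks; bump reference counts instead of copying.
        return create_vm(vm_blocks)

    def delete_vm(vm_blocks):
        # Blocks are freed only when nothing references them, so deleting
        # the original VM never breaks a clone.
        for b in vm_blocks:
            refcount[b] -= 1

    parent = create_vm(["b1", "b2", "b3"])
    clone = thin_clone(parent)
    delete_vm(parent)
    print(all(refcount[b] > 0 for b in clone))   # True: clone is still intact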

Ownership of the hypervisor allows us to continue to develop new, more advanced virtualization features as well as giving us complete control over management and security of the solution. One of the ways hypervisor ownership has most benefited our HC3 customers is in our ability to build in backup and disaster recovery features.

Backup/DR

Even more important than our storage efficiency and development ease, our ownership of the hypervisor and storage allows us to implement a variety of backup and replication capabilities to provide a comprehensive disaster recovery solution built into HC3. Efficient, snapshot-based backup and replication is native to all HC3 VMs and allows us to provide our own hosted DRaaS solution for HC3 customers without requiring any additional software.

Our snapshot-based backup/replication comes with a simple, yet very flexible, scheduling mechanism for intervals as small as every 5 minutes. This provides a very low RPO for DR. We were also able to leverage our thin cloning technology to provide quick and easy failover with an equally efficient change-only restore and failback. We are finding more and more of our customers looking to HC3 to replace their legacy third-party backup and DR solutions.
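
In RPO terms, the schedule interval is the worst-case window of data you can lose. A minimal illustration with hypothetical timestamps and the 5-minute interval mentioned above:

    from datetime import datetime, timedelta

    interval = timedelta(minutes=5)                 # replication schedule interval
    last_replicated = datetime(2017, 3, 7, 14, 35)  # last snapshot shipped offsite
    failure_time = datetime(2017, 3, 7, 14, 39)     # primary site lost here

    data_loss_window = failure_time - last_replicated
    print(f"Data written in the last {data_loss_window} is lost")
    print(f"Worst case with this schedule: {interval}")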

Management

By owning the storage, hypervisor, and backup/DR software, HC3 is able to have a single, unified, web-based management interface for the entire stack. All day-to-day management tasks can be performed from this single interface. The only other interface ever needed is a command line accessed directly on each node for initial cluster configuration during deployment.

The ownership and integration of the entire stack allows for a simple view of both physical and virtual objects within an HC3 system and at-a-glance monitoring. Real-time statistics for disk utilization, CPU utilization, RAM utilization, and IOPS allow administrators to quickly identify resource related issues as they are occurring. Setting up backups and replication and performing failover and failback is also built right into the interface.

Summary

Ownership of the entire software stack from the storage to the hypervisor to the features and management allows Scale Computing to fully focus on efficiency and ease of use. We would not be able to have the same levels of streamlined efficiency, automation, and simplicity by trying to integrate third party solutions.

The simplicity, scalability, and availability of HC3 happen because our talented development team has the freedom to reimagine how infrastructure should be done, avoiding inefficiencies found in other vendor solutions that have been dragged along from pre-virtualization technology.

Groundhog Day

Today is Groundhog Day, a holiday celebrated in the United States and Canada where the length of the remaining winter season is predicted by a rodent. According to folklore, if it is cloudy when a groundhog emerges from its burrow on this day, then the spring season will arrive early, some time before the vernal equinox; if it is sunny, the groundhog will supposedly see its shadow and retreat back into its den, and winter weather will persist for six more weeks. (Wikipedia)

Today the groundhog, Punxsutawney Phil, saw his shadow. Thanks, Phil.

[Image: Bill Murray in Groundhog Day]

Groundhog Day is also the name of a well-loved film starring Bill Murray (seen above) where his character, Phil, is trapped in some kind of temporal loop repeating the same day over and over. I won’t give the rest away for anyone who has not seen the movie, but it got me thinking. What kind of day would you rather have to live over and over as an IT professional? I’m guessing it does not include the following:

  • Manually performing firmware and software updates to your storage system, server hardware, hypervisor, HA/DR solution, or management tools.
  • Finding out one of your solution vendor updates broke a different vendor’s solution.
  • Having to deal with multiple vendor support departments to troubleshoot an issue none of them will claim responsibility for.
  • Dealing with downtime caused by a hardware failure.
  • Having to recover a server workload from tape or cloud backup.
  • Having to deal with VMware licensing renewals.
  • Thanklessly working all night to fix an issue and only receiving complaints about more downtime.

These are all days none of us want to live through even once, right? But of course, many IT professionals do find themselves reliving these days over and over again because they are still using the same old traditional IT infrastructure architecture that combines a number of different solutions into a fragile and complex mess.

At Scale Computing we are trying to break some of these old cycles with simplicity, scalability, and affordability. We believe, and our customers believe, that infrastructure should be less of a management and maintenance burden in IT. I encourage you to see for yourself how our HC3 virtualization platform has transformed IT with both video and written case studies here.

We may be in for six more weeks of winter but we don’t need to keep repeating some of the same awful days we’ve lived before as IT professionals. Happy Groundhog Day!