All posts by David Paquette

5 Reasons to Refresh IT Infrastructure

Nothing lasts forever, especially IT infrastructure technology. In the ever-evolving world of IT hardware and software, change is inevitable. Right now, organizations all over the world are seeing the signs that they need to refresh, whether they know it or not. End of Life or End of Support are obvious reasons, but what other reasons are being realized or possibly ignored?

#1 – Performance becomes a pain.

IT is an engine that drives business. Over time, increased load and decreased efficiency take their toll on the performance of the IT engine. Performance issues can be dealt with in a number of ways, from seeking out and fixing specific hardware bottlenecks to improving processes for greater efficiency. As with an automobile, though, there is a breaking point where the cost of continued repairs outweighs the cost of replacement. Performance issues can be notoriously hard to diagnose without expertise. For that reason, the pain of performance issues often demands an IT infrastructure refresh.

#2 – Datacenters need consolidation.

For a variety of reasons, such as acquisition, restructuring, or relocation, datacenters sometimes need consolidation, and it may not make sense to simply combine what already exists. Often, different sites and even different departments use different infrastructure technologies for no particular reason. While it is possible to run a number of different SAN/NAS devices and hypervisor architectures side by side, doing so adds a load of complexity to the combined datacenter. This is a classic opportunity for an infrastructure refresh to eliminate complexity.

#3 – Capacity is limited.

When you bought your SAN, you left room for extra shelves to accommodate growth, but you filled them faster than expected. Capacity planning isn't an exact science, not even close. You could add another storage device and cobble the two together, but there might be a better way in scale-out architecture. Software-defined scale-out storage built into hyperconverged infrastructure solutions offers easy, cost-effective scaling on demand. Refreshing with hyperconvergence not only helps scale storage effectively but also scales RAM and CPU as needed, in a much simpler solution.
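As a back-of-the-envelope illustration (every number here is hypothetical, not from any particular deployment), a few lines of Python show why compounding data growth fills those extra shelves sooner than a straight-line estimate suggests:

    # Capacity runway estimate -- all figures are made up for illustration.
    usable_tb = 48.0        # usable capacity of the array, in TB
    used_tb = 30.0          # currently consumed, in TB
    monthly_growth = 0.04   # 4% data growth per month, compounding

    months = 0
    while used_tb < usable_tb:
        used_tb *= 1 + monthly_growth
        months += 1

    print(f"Array full in roughly {months} months")  # about 12 months

A linear plan assuming the same 1.2 TB of growth every month would have predicted 15 months of runway; compounding growth erases a quarter of that.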

#4 – Alice doesn’t work here anymore.

Remember Alice? She was the outsourced IT consultant who designed and built the current infrastructure with her own unique set of expertise. Well, her fees for continuing to fix and manage the solution whenever there was an issue were just too expensive. No one else can quite figure out how anything is supposed to work because she expertly put together some older gear that isn't even supported anymore. It works for now, thanks to Alice, but if something goes wrong, then what? It could be time to start over and refresh the IT infrastructure with something newer that staff IT administrators can deal with.

#5 – Costs are examined.

IT is a cost center. The challenge is to get the most benefit from that cost. Older technologies like SANs, whether physical or virtual, are costly and were never designed for virtualization in the first place. Hypervisors and management solutions like VMware vSphere come at a high cost in ongoing software licensing fees. There is the cost of integrating all of the storage, servers, virtualization, and management software, not to mention backup/DR. Finally, there is the cost of the expertise needed to manage and maintain these systems. The costs of this traditional virtualization architecture can be overwhelming. Avoiding these high costs is a primary reason IT professionals are looking at technologies like cloud and hyperconvergence to simplify IT.

Whatever the reason for an IT infrastructure refresh, it is an opportunity to lower costs, increase productivity, and plan for future growth. This is why so many are considering hyperconverged infrastructures like HC3 from Scale Computing. It dramatically simplifies IT infrastructure, lowers costs, and allows seamless scaling for future growth. While some workloads may be destined for the cloud, the HC3 virtualization platform provides a simple, secure, and highly available solution for all of your on-prem datacenter needs. If it is time to refresh, take a look at everything HC3 provides and all the complexity it eliminates.

Education Runs on HC3

Educational institutions are constantly challenged to keep up with technological innovation. Students need access to modern technology to prepare for a competitive job market, and institutions need technology to remain efficient and effective. As virtualization solutions offered by VMware and Microsoft have proven to be complex and costly, educational institutions have begun switching to hyperconverged infrastructure solutions like HC3 from Scale Computing to modernize.

Educational institutions have all of the IT needs of businesses, if not more. They are implementing common office applications, messaging services, and virtual desktops, hosting web services for students and parents, and supporting more specialized educational applications. On top of these challenges, they also carry the burden of restricted budgets. Expensive and complex multi-vendor IT infrastructure solutions employing SANs, servers, hypervisors, and backup/DR solutions leave little budget for growth or innovation.

HC3 combines the storage, servers, hypervisor, and backup/DR into a single appliance-based solution. HC3 not only brings all of these elements together but also makes the solution highly available with fully automated clustering. Where HC3 really makes the grade with education is its simplicity, which requires far less management from IT staff, and its low price, which makes HC3 less costly than the VMware and Hyper-V alternatives.

The simplicity of HC3 is perfect for educational institutions, which can often employ only minimal IT staff, sometimes only part-time. Without the ability to employ full-time, highly trained (and highly paid) staff, the more complex IT infrastructures built around VMware and Hyper-V become a burden, and institutions are often forced to seek help from outside paid consultants. HC3 provides a turnkey infrastructure solution that enables institutions to comfortably utilize their existing staff at a significant savings.

From primary to higher education, Scale Computing is meeting IT needs with high marks. Here are a few of the educational institutions that have chosen HC3. Click on any one of them to view the HC3 case study.

Iron County Schools

American College of Education

Reading Muhlenberg Career & Technology Center

St. Richard’s Catholic College

Auburn University

Triton School Corporation

Anamosa School District

Standard School District

GEO Foundation

Stoke Park School and Community Technical College

Toccoa Falls College

Summer vacation may be coming soon for students but educational institutions are already planning ahead for the next school year. HC3 is providing both simplicity and peace of mind for those organizations that have already made the switch away from VMware and Hyper-V.

3 Reasons Businesses Fail to Innovate in IT

In today’s markets, businesses need IT solutions to stay competitive and reach customers to drive sales and revenue. New technologies are constantly being developed to help businesses achieve greater efficiency and lower costs.

Despite the availability of these technology innovations, many businesses lag behind the technology curve, continuing to use older, less efficient solutions. Some smaller businesses simply may not be aware of the latest technology solutions, but for most, there are other, more sinister flaws in decision making that prevent IT innovation.

Here are 3 reasons that businesses hold themselves back from innovating in IT.

1. The Sunk Cost Fallacy

Rather than try to explain the sunk cost fallacy completely, I'll instead just ask you to Google "sunk cost fallacy" and read one of the many in-depth descriptions you will find, if you are not already familiar with it. To summarize briefly, it is the idea that once you have made a significant investment in a solution, you must continue using that solution because of the investment. It is basically using the rear window when you are trying to drive business forward. That investment is history.

Everyone wants to get the most out of an investment, but in the technology game, newer, better solutions come along all the time. Deciding to hold onto a solution when a better one is available may well end up costing more than switching. For example, you might invest in a storage solution that you plan and budget to use for the next 4 years. After 2 years, you may have both a need and an opportunity to move to a better solution, but you might delay because of the sunk costs of the "4 year" investment.
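Here is a minimal sketch of that decision, with made-up numbers, comparing only the costs from today forward; the original purchase price is sunk either way and deliberately appears nowhere in the math:

    # Sunk cost comparison -- all figures hypothetical.
    years_remaining = 2               # years left in the original 4-year plan

    stay_support_per_year = 15_000    # support contract on the aging storage
    stay_admin_per_year = 20_000      # staff time spent keeping it running

    switch_acquisition = 40_000       # new solution, paid up front
    switch_support_per_year = 8_000
    switch_admin_per_year = 5_000

    cost_to_stay = (stay_support_per_year + stay_admin_per_year) * years_remaining
    cost_to_switch = switch_acquisition + (
        switch_support_per_year + switch_admin_per_year) * years_remaining

    print(f"Stay:   ${cost_to_stay:,}")    # $70,000
    print(f"Switch: ${cost_to_switch:,}")  # $66,000

In this made-up case, switching wins even before counting the performance and scalability benefits of the newer solution.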

2. The Rotating Budget Cycle

This is related to the sunk cost fallacy but has more to do with adhering to a certain budgeting schedule. IT assets can be expensive, so replacing them sometimes gets budgeted on a cycle, and the cycles for different types of assets do not align.

For example, you may have purchased server assets one year, a storage solution the next year, and software solutions the year after that. You create a budget cycle where you are only purchasing part of the overall solution each year. This way of planning may help you adhere to a steady annual budget that is easier to account for, and then you begin repeating it on a 3-year cycle.

But what if, one year when you were supposed to buy servers, you want to buy a combined solution that includes both servers and storage? Are you able to adjust the budget to accommodate the extra spending in that year, or have you locked yourself out of that prospect, forced to wait until the following year because that is when the storage budget is available?

3. The Brand Name Game

When making any purchase, it is good to know something about the vendor, especially when there is support and a warranty involved. It's good to make sure the vendor is not a sham or a business that will be shuttering in a year or two. But is a brand name a safety net in this case?

The technology industry is known for innovation arising from little-known startups as much as or even more than from established brands. While small startups may fail or risk being swallowed up through acquisition, it is also not uncommon for large brand-name products to be discontinued or for brand-name vendors to be acquired. Where a brand name signals a certain level of consistency and longevity in other markets, that is much less the case with technology solutions. By shopping only brand names, IT organizations are putting about as much thought into buying IT technology as they would into buying a hair dryer.

Smaller vendors with lesser known brands can be more of an asset than a liability in IT solutions because they may offer more personalized services, more responsive support, and more innovative solutions. With older, larger, more recognized vendors, you may be treated as just one of tens of thousands of customers who are shuffled through the support and service queues. The small startup that was passed over because of the lack of brand recognition often becomes the next big brand that replaces the solution you went with.

Summary

When it comes to technology innovation, it is important to stay open to new and emerging technologies to solve your immediate challenges. I know some IT professionals have their headphones on listening to “The Way We’ve Always Done It” by Zero Innovation, but many others are starting to get it. Continuing to use less efficient solutions may not only be slowing you down, it may be costing you more in the long run.

Why HC3 IT Infrastructure Might Not Be For You

Scale Computing makes HC3 hyperconverged infrastructure appliances and clusters for IT organizations around the world with a focus on simplicity, scalability, and availability. But the HC3 IT infrastructure solution might not be for you, for a few reasons.

  • You want to be indispensable for your proprietary knowledge.

You want to be the only person who truly understands your IT Infrastructure. Having designed your infrastructure personally and managing it with your own home-grown scripts, only you have the knowledge and expertise to keep it running. Without you, your IT department is doomed to fail.

HC3 is probably not for you. HC3 was designed to be so simple to use that it can be managed by even a novice IT administrator. HC3 would not allow you to control the infrastructure with proprietary designs and secret knowledge that only you possess. Of course, if you did go with HC3, you'd be a pioneer of new technology and an ideal asset for any forward-thinking IT department.

  • You are defined by your aging certifications.

You worked hard and paid good money to get certifications in storage systems, virtualization hypervisors, server hardware, and even disaster recovery systems that are still around. You continue to use these same old technologies because you are certified in them, and that gives you leverage for higher salary. Newer technologies hold less interest because they wouldn’t allow you to take advantage of your existing certifications.

HC3 is probably not for you. HC3 is based on new infrastructure architecture that doesn’t require any expensive certifications. Any IT administrator can use HC3 because it was designed to remove reliance on legacy technologies that were too complex and required excessive expertise. HC3 won’t allow you to leverage your certifications in these legacy technologies. Of course, with all of the management time you’d save using HC3, you’d be able to learn new technologies and expand your skills beyond infrastructure.

  • You like going to VMworld every year.

You’ve been using VMware and going to VMworld since 2006 and it is a highlight of your year. You always enjoy reuniting with VMworld regulars and getting out of the office. It isn’t as useful as it was earlier on but you still attend a few sessions along with all of the awesome parties. Life just wouldn’t be the same without attending VMworld.

HC3 is probably not for you. HC3 uses a built-in hypervisor, alleviating the need for VMware software and VMware software licensing. Without VMware, you probably won’t be able to justify your trip to VMworld as a business expense. Of course, with all the money you will likely save going with HC3, your budget might be open to going to even more conferences to help you develop new skills and services to help your business grow even faster.

  • You prefer working late nights and weekends.

The office, and better yet the data center, is a safe place for you. Whether you don't have the best home life or you prefer to avoid awkward social events, you find working late nights and weekends doing system updates and maintenance a welcome prospect. We get it. Real life can be hard. Solitude, along with the humming of fans and spinning disks, offers an escape from the real world.

HC3 is probably not for you. HC3 is built to eliminate the need to take systems offline for updates and maintenance tasks, so these can be done at any time, including during normal business hours. HC3 doesn't leave many infrastructure tasks that need to be done late at night or on weekends. Of course, if you did go with HC3, you'd probably have more time and energy to sort out your personal life and make your home and social life more to your liking.

Summary

HC3 may not be for everyone. When change is difficult to embrace, many choose to stick with the way it has always been done. For others, however, emerging technologies like HC3 are a way to constantly evolve with architecture that lowers costs with simplicity, scalability, and availability for modern IT.

Backup is No Joke

Today is World Backup Day, a reminder to everyone of how important it is to back up your data. Why today? What better day than the one before April Fools' Day to remember to be prepared for anything. You don't want to be the fool who didn't have a solid backup plan.

But what is a backup? Backing up business-critical data is more complex than many people realize, which may be why backup and disaster recovery plans fall apart in the hour of need. Let's start with the basic definition: a backup is a second copy of your data that you keep in case your primary data is lost or corrupted. Pretty simple. Unfortunately, that basic concept is not nearly enough to implement an effective backup strategy. You need some additional considerations.

  1. Location – Where is your backup data stored? Is it on the same physical machine as your primary data? Is it in the same building? The closer your backup is to the primary data, the more chance your backup will suffer the same fate as your primary data. The best option is to have your backup offsite, physically removed from localized events that might cause data loss.
  2. Recovery Point Objective – If you needed to recover from your backup, how much recent data would you lose? Was your last backup taken an hour ago, a day ago, or a week ago? How much potential revenue could be lost along with the data you can’t recover? Taking backups as frequently as possible is the best way to prevent data loss.
  3. Recovery Time Objective – How long will it take to recover your data? If you are taking backups every hour but it takes you several hours or longer to recover from a backup, was the hourly backup effective? Recovery time is as important as recovery point. Have a plan for rapid recovery.
  4. System Backup – For a long time, backups only captured user and application data. Recovery was painful because the OS and applications needed to be rebuilt before restoring the data. These days, entire servers are usually what is backed up, increasing recovery speed.
  5. Multiple Points in Time – Early on, many learned the hard way that keeping one backup is not enough. Multiple backups from different points in time were required for a number of reasons. Sometimes backups failed, sometimes data needed to be recovered from further back in time, and for some businesses, backups need to be kept for years for compliance. The more backups, the more points in time that data can be recovered from.
  6. Backup Storage – One of the greatest challenges to backup over the decades has been storage. Keeping multiple copies of your data quickly consumes multiples of your primary storage space. It just isn't economical to require 10x or more of your primary data's footprint for backup. Incremental backups, compression, and deduplication have helped, but backups still take lots of space. Calculating the storage requirements for your backup needs is essential, as the sizing sketch below illustrates.
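To make that calculation concrete, here is a rough sizing sketch in Python; the retention schedule and reduction ratio are assumptions for illustration, not recommendations:

    # Backup storage estimate -- all parameters hypothetical.
    primary_tb = 10.0            # size of the primary data set, in TB
    daily_change_rate = 0.03     # ~3% of data changes per day
    fulls_retained = 4           # weekly fulls, kept for 4 weeks
    incrementals_retained = 24   # daily incrementals kept alongside them
    reduction_factor = 0.5       # combined compression + deduplication

    full_tb = primary_tb * fulls_retained
    incremental_tb = primary_tb * daily_change_rate * incrementals_retained
    required_tb = (full_tb + incremental_tb) * reduction_factor

    print(f"Raw backup footprint: {full_tb + incremental_tb:.1f} TB")  # 47.2 TB
    print(f"After reduction:      {required_tb:.1f} TB")               # 23.6 TB

Even with compression and deduplication, this modest retention schedule needs more than twice the primary capacity, which is exactly why sizing backup storage up front matters.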

Are snapshots backups? Sort of, but not really. Snapshots do provide recovery capabilities within a local system, but generally go down with the ship in any kind of real disaster. That being said, many backup solutions are designed around snapshots and use snapshots to create a real backup by copying the snapshot to an offsite location. These replicated snapshots are indeed backups that can be used for recovery just like any other form of backup.

Over the decades, there have been a variety of hardware, software, and service-based solutions to tackle backup and recovery. Within the last decade, there has been an increasing movement to include backup and recovery capabilities within operating systems, virtualization solutions, and storage solutions. This movement of turning backup into a feature rather than a secondary solution has only been gaining momentum.

With the hyperconvergence movement, where virtualization, servers, storage, and management are brought together into a single appliance-based solution, backup and disaster recovery are being included as well. Vendors like Scale Computing are providing all of the backup and disaster recovery capabilities you need. Scale Computing even offers their own cloud-based DRaaS as an option.

So today, on the eve of April Fools' Day, let's remember that backup is no joke. Businesses rely on data, and it is our job as IT professionals to protect against the loss of that data with backup. Take some time to review your backup plans and find out if you need to be doing more to prevent the next data loss event lurking around the corner.

Is Hyperconvergence Here to Stay?

As virtualization continues to evolve into hyperconvergence, cloud, and other technologies, both IT professionals and analysts are looking into the future to predict the velocity, direction, and longevity of these technologies. Business leaders want their spending decisions to be based on the latest intel and technologies that will carry their business into the future.

Analysts George J. Weiss and Andrew Butler from Gartner have put together their predictions on the future of hyperconvergence in this recent report: Prepare for the Next Phase of Hyperconvergence

Scale Computing has been a leader in hyperconvergence, helping define its current evolution with innovative storage architecture and unsurpassed simplicity. Whatever the future holds, Scale Computing plans to be at the forefront of hyperconvergence innovation, continuing to deliver the simplicity, scalability, and availability our customers have come to expect.

One Customer’s Experience With Scale Computing

At Scale Computing, we do our best not only to build the best solutions for our customers, but also to explain why our solutions really are the best to those still deciding on a solution. In reality, no one can explain it as well as one of our actual customers.

This week we have the opportunity to share the Scale Computing experience of Nathan Beam of Bridgetree in his own words, on his own blog. Here is the link:

Simply Hyper-converged – An Overview of Scale Computing’s Easy-To-Use HC3 Virtualization Platform

Just to pull a quick quote: “My own experience and pretty much that of every other customer testifies to the fact that we all love our product. I searched long and hard trying to find unhappy owners of HC3 equipment… to this day I still don’t know if any exist.”

We look forward to sharing more of our user experiences with you in the future. If you are another HC3 user who wants to share your story here, contact me: dpaquette@scalecomputing.com. To see some of our other customer success stories, check out our case studies. For additional customer reviews, check out our page on the Spiceworks Community.

4 Lessons from the AWS Outage Last Week

The Amazon Web Services (AWS) Simple Storage Service (S3) experienced an outage on Tuesday last week and was down for several hours. S3 provides object storage for around 150,000 websites and other services, according to SimilarTech. For IT professionals, here are four takeaways from this outage.

#1 – It Happens

No infrastructure is immune to outages. No matter how big the provider, outages happen and downtime occurs. Whether you are hosting infrastructure yourself or relying on a third party, outages will happen eventually. Putting your eggs in someone else's basket does not necessarily buy you any more peace of mind. In this case, S3 was brought down by a simple typo from a single individual. That is all it takes to cause so much disruption. The premiums you pay to be hosted on a massive infrastructure like AWS will never prevent the inevitable failures, no matter how massive the platform becomes.

#2 – The Bigger They Are, the Harder They Fall

When a service is as massive as AWS, problems affect millions of users, including customers trying to do business with companies that rely on S3. Yes, outages happen, but do they have to take down so much of the internet with them when they do? Like the DDoS attack I blogged about last fall, companies leave themselves open to these massive outages when they rely heavily on public cloud services. How much more confidence would your customers have in your business if they heard about a massive outage on the news but knew that your systems were unaffected?

#3 – It’s No Use Being an Armchair Quarterback

When an outage occurs at your third-party provider, you call, you monitor, and you wait. You hear about what is happening, and all you can do is shake your fist in the air, knowing that you probably could have done better to either prevent the issue or resolve it more quickly if you were in control. But you aren't in any position to do anything, because you are reliant on the hosting provider. You have no option but to accept the outage and try to make up for the loss to your business. You gave up your ability to fix the problem when you gave that responsibility to someone else.

Just two weeks ago, I blogged about private cloud and why some organizations feel they can’t rely on hosted solutions because of any number of failures they would have no control over. If you need control of your solution to mitigate risk, you can’t also give that control to a third party.

#4 – Have a Plan

Cloud services are a part of IT these days, and most companies are already doing some form of hybrid cloud, with some services hosted locally and some hosted in the cloud. Cloud-based applications like Salesforce, Office 365, and Google Docs have millions of users. It is inevitable that some of your services will be cloud-based, but they don't all have to be. There are plenty of solutions, like hyperconverged infrastructure, for hosting many services locally with the simplicity of cloud infrastructure. When outages at cloud providers occur, make sure you have sufficient infrastructure in place locally so that you can do more than just be an armchair quarterback.

Summary

Public cloud services may be part of your playbook but they don’t have to be your endgame. Take control of your data center and have the ability to navigate your business through outages without being at the mercy of third party providers. Have a plan, have an infrastructure, and be ready for the next time the internet breaks.

When Cloud is Not What You Signed Up For

The AWS S3 outage on Tuesday confirmed the worst fears of many: bigger is not better. Roughly 150,000 websites and other services were down for three hours because of an internal issue at S3. What we saw yet again was that a massive data center service like S3 proved to be no more reliable than private data centers happily achieving five 9's.
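The arithmetic behind that claim is worth spelling out; this quick sketch (standard availability math, nothing vendor-specific) compares a five 9's downtime budget against a single three-hour outage:

    # Five 9's availability versus one three-hour outage.
    minutes_per_year = 365 * 24 * 60          # 525,600

    five_nines_budget = minutes_per_year * (1 - 0.99999)
    outage_minutes = 3 * 60
    achieved_availability = 1 - outage_minutes / minutes_per_year

    print(f"Five 9's allows {five_nines_budget:.1f} minutes of downtime per year")  # ~5.3
    print(f"One 3-hour outage yields {achieved_availability:.5f} availability")     # ~0.99966

A single three-hour incident overspends a five 9's budget more than thirtyfold, leaving the year somewhere between three and four 9's.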

The real issue here is not that there was an outage. The outage was unfortunately just an inevitability that proves no infrastructure is invulnerable. No, the real issue is the perception that a cloud service like AWS can be made too big to fail. Instead, what we saw was that the bigger they are, the harder they fall.

Now, I like public cloud services and I use them often. In fact, I used Google Docs to type a draft of this very blog post. However, would I trust my business-critical data to the public cloud? Probably not. Maybe I am old fashioned, but I have had enough issues with outages of internet and cloud services to make me a believer in investing in private infrastructure.

The thing about public cloud is that it offers simplicity. Just log in and manage VMs or applications without ever having to worry about a hard drive failure or a power supply going wonky. That simplicity comes at a premium, sold with the idea that you will save money by paying only for what you use, without having to over-provision as you would when buying your own gear. That seems like wishful thinking to me, because in my experience, managing costs with cloud computing can be a tricky business, and it can be a full-time job to make sure you aren't spending more than you intend.

Is the cost of managing private infrastructure even higher? You must buy servers, storage, hypervisors, management solutions, and backup/DR, right? Not anymore. Hyperconverged infrastructure (HCI) delivers infrastructure that is pre-integrated and so easy to manage that the experience of using it is the same as using the cloud. In fact, just last week I wrote about how it really is a private cloud solution.

What is the benefit of owning your own infrastructure? First: control. You get to control your fate, with the ability to better plan for and respond to disaster and failure, mitigating risk to your level of satisfaction. No one wants to be sitting on their hands, waiting, while their cloud provider is supposedly working hard to fix an outage. Second: cost. Costs are more predictable with HCI, and there is less over-provisioning than with traditional virtualization solutions. There are also no ongoing monthly premiums paid to a third party who is supposed to be eliminating the risk of downtime.

Cloud just isn’t the indestructible castle in the sky that we were meant to believe it was. Nothing is, but with HCI, you get your own castle and you get to rule it the way you see fit. You won’t be stuck waiting to see if all the king’s horses and all the king’s men can put Humpty back together again.

Is Hyperconvergence the Private Cloud You Need?

If you are an IT professional, you are most likely familiar with at least the term "hyperconvergence" or "hyperconverged infrastructure". You are also undoubtedly aware of cloud technology and some of the options for public, private, and hybrid cloud. Still, this discussion merits a brief review of private cloud before delving into how hyperconvergence fits into the picture.

What is a Private Cloud?

The basic premise behind cloud technology is an abstraction of the management of VMs from the underlying hardware infrastructure. In a public cloud, the infrastructure is owned and hosted by someone else, making it completely transparent. In a private cloud, you own the infrastructure and still need to manage it, but the cloud management layer simplifies day-to-day operation of VMs compared to traditional virtualization.

Traditional virtualization is complicated by the need to manage hypervisors running on individual virtual hosts and storage across hosts. When managing a single virtual host, VM creation and management is fairly simple. In a private cloud, you still have that underlying infrastructure of multiple hosts, hypervisors, and storage, but the cloud layer provides the same simple management experience as a single host, spread across the whole data center infrastructure.

Many organizations that are considering private cloud are also considering public cloud, creating a hybrid cloud consisting of both public and privately hosted resources. Public cloud offers added benefits such as pay-per-use elasticity for seasonal business demands and cloud-based applications for productivity.

Why Not Put Everything in Public Cloud?

Many organizations have sensitive data that they prefer to keep onsite or are required to do so by regulation. Maintaining data onsite can provide greater control and security than keeping it in the hands of a third party. For these organizations, private cloud is preferable to public cloud.

Some organizations require continuous data access for business operations and prefer not to risk interruption due to internet connectivity issues. Maintaining systems and data onsite allows these organizations to have more control over their business operations and maintain productivity. For these organizations, private cloud is preferable to public cloud.

Some organizations prefer the Capex model of private cloud to the Opex model of public cloud. When done well, owning and managing infrastructure can be less expensive than paying someone else for hosting. The costs can be more predictable for an onsite implementation, making it easier to budget. Private cloud is preferable for these organizations.
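As a simplified illustration of that predictability (the figures are invented, and real quotes vary widely), compare an owned infrastructure to a monthly hosting bill over one refresh cycle:

    # Capex vs. Opex over a 5-year refresh cycle -- all figures hypothetical.
    years = 5

    capex_purchase = 60_000           # infrastructure bought up front
    capex_support_per_year = 6_000    # annual support contract

    opex_per_month = 2_000            # hosted alternative, monthly bill

    capex_total = capex_purchase + capex_support_per_year * years
    opex_total = opex_per_month * 12 * years

    print(f"Capex total: ${capex_total:,}")  # $90,000
    print(f"Opex total:  ${opex_total:,}")   # $120,000

The Capex number is also fixed on day one, while the Opex number moves with usage and pricing changes, which is the predictability argument in a nutshell.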

How does Hyperconvergence Fit as a Private Cloud?

For all intents and purposes, hyperconverged infrastructure (HCI) offers the same experience as a traditional private cloud, or better. You could even go so far as to say it is the next generation of private cloud, because it improves on some of the shortcomings of traditional private clouds. Managing VMs in HCI is as simple as in a traditional private cloud, and HCI brings an even simpler approach to managing the underlying hardware.

HCI combines the elements of traditional virtualization (servers, storage, and hypervisor) into a single appliance-based solution. With traditional virtualization, you were tasked with integrating these elements from multiple vendors into a working infrastructure, dealing with any incompatibilities, managing through multiple consoles, and so on. HCI is a virtualization solution with all of these elements pre-integrated into a more or less turnkey appliance. There should be no need to configure storage, install hypervisors on host servers, or manage through more than a single interface.

Not all HCI vendors are equal and some rely on third party hypervisors so there are still elements of multi-vendor management, but true HCI solutions own the whole hardware and virtualization stack, providing the same experience as a private cloud. Users are able to focus on creating and managing VMs rather than worrying about the underlying infrastructure.

With the appliance-based approach, hyperconvergence is even easier to scale out than traditional private clouds or the cloud-in-a-box solutions that also provide some level of pre-integration. HCI scaling should be as easy as plugging a new appliance node into the network and telling it to join an existing HCI cluster.

HCI is generally more accessible and affordable than traditional private clouds or cloud-in-a-box solutions because it can start small and then scale out without any added complexity. Small to midmarket organizations that experienced sticker shock at the acquisition and implementation costs of private clouds will likely find the costs and cost benefits of HCI much more appealing.

Summary

Private cloud is a great idea for any organization whose goals include the control and security of onsite infrastructure and the simplicity of day-to-day VM management. These organizations should be looking at hyperconverged infrastructure as a private cloud option to achieve those goals, rather than traditional private cloud or cloud-in-a-box options.
