Tag Archives: scale computing

The Price Is Right

The Price Is Right is one of the longest-running game shows on television and one of the most beloved. I grew up watching it hosted by Bob Barker, and it is still going today, hosted by Drew Carey. The show features a variety of challenges for players, but most of them involve guessing at the retail price of products ranging from groceries all the way up to vehicles and vacation packages. The concept of guessing at prices reminded me of shopping for IT solutions.


City Government Runs on HC3

City governments face unique IT challenges, supporting a number of departments ranging from emergency services to parks and recreation. With limited budgets, these organizations look to technology to reduce the costs of the services they provide. Hyperconverged infrastructure is a great fit for city governments because not only can it be implemented at a low cost, but the savings continue through reduced operational and management costs.

But don’t just take it from us. These three videos let our customers speak for themselves about HC3 hyperconverged infrastructure.

City of St. Cloud

City of West Allis

City of Noblesville

HC3 is a great choice for any IT organization looking to modernize for simplicity, scalability, availability, and disaster recovery. Our customers’ success is our success at Scale Computing. We want to help you be successful too. Let us know how we can help.

TSANet Member Spotlight

This week we were pleased to have our Scale Computing Support Team featured in the TSANet Member Spotlight! We don’t talk enough about how awesome our support team is here at Scale Computing, maybe because if we did, we’d be talking about them all the time. So, it is nice when someone like TSANet takes the time to highlight how great they really are.

So, rather than try to tell you in my own words, let me point you to the Spotlight feature itself and share a couple of snippets below.

TSANet interviewed Blake Rodier, Technical Support Manager, Dave Demlow, Vice President of Product Management and Support, and Lynanne Gibel, Director of Support and Professional Services at Scale Computing.

“Our support renewal rate is around 93%. That says something about the support organization. We consider ourselves as a part of the product. A lot of our customers want to come back because of the support they receive and I consider that a huge acknowledgement for our team,” said Lynanne.

 

Backup is No Joke

Today is World Backup Day and a reminder to everyone of how important it is to back up your data. Why today? What better day to remember to be prepared for anything than the day before April Fools’ Day? You don’t want to be the fool who didn’t have a solid backup plan.

But what is a backup? Backing up business-critical data is more complex than many people realize, which may be why backup and disaster recovery plans fall apart in the hour of need. Let’s start with the basic definition: a backup is a second copy of your data that you keep in case your primary data is lost or corrupted. Pretty simple. Unfortunately, that basic concept is not nearly enough to implement an effective backup strategy. You need some additional considerations.

  1. Location – Where is your backup data stored? Is it on the same physical machine as your primary data? Is it in the same building? The closer your backup is to the primary data, the greater the chance your backup will suffer the same fate as your primary data. The best option is to keep your backup offsite, physically removed from localized events that might cause data loss.
  2. Recovery Point Objective – If you needed to recover from your backup, how much recent data would you lose? Was your last backup taken an hour ago, a day ago, or a week ago? How much potential revenue could be lost along with the data you can’t recover? Taking backups as frequently as possible is the best way to prevent data loss.
  3. Recovery Time Objective – How long will it take to recover your data? If you are taking backups every hour but it takes you several hours or longer to recover from a backup, was the hourly backup effective? Recovery time is as important as recovery point. Have a plan for rapid recovery.
  4. System Backup – For a long time, backups captured only user and application data. Recovery was painful because the OS and applications needed to be rebuilt before the data could be restored. These days, entire servers are usually backed up as a whole, which makes recovery much faster.
  5. Multiple Points in Time – Early on, many learned the hard way that keeping one backup is not enough. Multiple backups from different points in time were required for a number of reasons. Sometimes backups failed, sometimes data needed to be recovered from further back in time, and for some businesses, backups need to be kept for years for compliance. The more backups, the more points in time that data can be recovered from.
  6. Backup Storage – One of the greatest challenges of backup over the decades has been storage. Keeping multiple copies of your data quickly starts consuming multiples of storage space, and it just isn’t economical to dedicate 10x or more of your primary storage capacity to backups. Incremental backups, compression, and deduplication have helped, but backups still take a lot of space. Calculating the storage requirements for your backup needs is essential; a rough way to run those numbers is sketched just after this list.
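
To put rough numbers on points 2 and 6, here is a minimal back-of-the-envelope sketch in Python. Every input below (data size, daily change rate, retention, backup interval, data reduction ratio) is a hypothetical placeholder to be replaced with your own environment’s figures.

```python
# Back-of-the-envelope backup sizing. All inputs are hypothetical
# placeholders; substitute your own environment's numbers.

def backup_estimate(primary_tb, daily_change_rate, retention_days,
                    backup_interval_hours, reduction_ratio):
    """Estimate stored backup size (TB) and worst-case data-loss window.

    primary_tb            -- size of the primary data set in TB
    daily_change_rate     -- fraction of data changing per day (e.g. 0.05)
    retention_days        -- days of backups to keep
    backup_interval_hours -- hours between backups (drives your RPO)
    reduction_ratio       -- combined compression/dedup ratio (e.g. 3.0 = 3:1)
    """
    full_backup = primary_tb                      # one full copy
    incrementals = primary_tb * daily_change_rate * retention_days
    raw_total = full_backup + incrementals        # before data reduction
    stored_total = raw_total / reduction_ratio    # after compression/dedup
    worst_case_rpo = backup_interval_hours        # most data you could lose
    return stored_total, worst_case_rpo

stored, rpo = backup_estimate(primary_tb=10, daily_change_rate=0.05,
                              retention_days=30, backup_interval_hours=1,
                              reduction_ratio=3.0)
print(f"Estimated backup storage: {stored:.1f} TB")        # -> 8.3 TB
print(f"Worst-case recovery point: {rpo} hour(s) of data")
```

Even with a healthy 3:1 data reduction ratio, 30 days of retention for a 10TB primary data set lands around 8TB of backup storage in this example, which is point 6 in a nutshell.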

Are snapshots backups? Sort of, but not really. Snapshots do provide recovery capabilities within a local system, but generally go down with the ship in any kind of real disaster. That being said, many backup solutions are designed around snapshots and use snapshots to create a real backup by copying the snapshot to an offsite location. These replicated snapshots are indeed backups that can be used for recovery just like any other form of backup.

Over the decades, there have been a variety of hardware, software, and service-based solutions to tackle backup and recovery. Within the last decade, there has been an increasing movement to include backup and recovery capabilities within operating systems, virtualization solutions, and storage solutions. This movement of turning backup into a feature rather than a secondary solution has only been gaining momentum.

With the hyperconvergence movement, where virtualization, servers, storage, and management are brought together into a single appliance-based solution, backup and disaster recovery are being included as well. Vendors like Scale Computing are providing all of the backup and disaster recovery capabilities you need. Scale Computing even offers their own cloud-based DRaaS as an option.

So today, on the eve of April Fools’ Day, let’s remember that backup is no joke. Businesses rely on data, and it is our job as IT professionals to protect against the loss of that data with backup. Take some time to review your backup plans and find out if you need to be doing more to prevent the next data loss event lurking around the corner.

Scale Computing – A Year in Review 2016

It’s that time of the year again. December is winding to a close and the new year is almost here. Let me first say that we here at Scale Computing hope 2016 was a positive year for you and we want to wish you a wonderful 2017. Now, though, I’d like to reflect back on 2016 and why it has been such an outstanding year for Scale Computing.

“And the award goes to…”

Scale Computing was recognized a number of times this year for technology innovation and great products and solutions, particularly in the midmarket. We won awards at both the Midsize Enterprise Summit and the Midmarket CIO Forum, including Best in Show and Best Midmarket Strategy. Most recently, Scale Computing was honored with an Editor’s Choice Award by Virtualization Review as one of the most-liked products of the year. You can read more about our many awards in 2016 in this press release.

Scenes from the 2016 Midsize Enterprise Summit

 

News Flash!

2016 was the year Scale Computing finally introduced flash storage into our hyperconverged appliances. Flash storage has been around for a while now, but the big news was in how we integrated it into the virtualization infrastructure. We didn’t use any clunky VSA models with resource-hogging virtual appliances. We didn’t implement it as a cache to make up for an inefficient storage architecture. We implemented flash storage as a full storage tier embedded directly in the hypervisor. We eliminated all the unnecessary storage protocols that slow down other flash implementations. In short, we did it the right way. Oh, and we delivered it with our own intelligent automated tiering engine called HEAT. You can read more about it here in case you missed it.

Newer, Stronger, Faster

When we introduced the new flash storage in the HC3, we introduced three new HC3 appliance models, the HC1150, HC2150, and HC4150, significantly increasing speed and capacity in the HC3. We also introduced the new HC1100 appliance to replace the older HC1000 model, nearly doubling the resource capacity of the HC1000. Finally, we recently announced the preview of our new HC1150D, which doubles the compute of the HC1150 and introduces higher capacity with support for 8TB drives. We know your resource and capacity needs grow over time, and we’ll keep improving the HC3 to stay ahead of the game. Look for more exciting announcements along these lines in 2017.

Going Solo

In 2016, hyperconvergence with Scale Computing HC3 was opened up to all sorts of new possibilities, including the new Single Node Appliance Configuration. Where before you needed at least three nodes in an HC3 cluster, now you can go with the SNAC-size HC3. (Yes, I am in marketing and I make up this corny stuff.) The single node allows extremely cost-effective configurations for the distributed enterprise (small remote/branch offices), backup/disaster recovery, or just the small “s” businesses in the SMB. Read more about the possibilities here.

 

Cloud-based DR? Check

2016 was also the year Scale Computing rolled out a cloud-based disaster recovery as a service (DRaaS) offering called ScaleCare Remote Recovery Service. This is an exclusive DR solution for HC3 customers who want to protect their HC3 workloads in a secure, hosted facility for backup and disaster recovery. With monthly billing, this service is perfect for organizations that can’t or don’t want to host DR in their own facilities and who value added services like assisted recovery and DR testing. Read more about this DRaaS solution here.

Better Together

2016 has been an amazing year for technology partnerships at Scale Computing. You may have seen some of the announcements we’ve made over the past year. These include Workspot, with whom we’ve partnered for an amazingly simple VDI solution; Information Builders, with whom we partnered for a business intelligence and analytics appliance; and most recently Brocade, whose Strategic Collaboration Program we joined to expand the reach of hyperconvergence and HC3, among others. We even achieved Citrix Ready certification this year. Keep an eye out for more announcements to come as we identify more great solutions to offer you.

The Doctor is In

It wouldn’t be much of a new year celebration without a little tooting of my own horn, so I thought I’d mention that 2016 was the year I personally joined Scale Computing, along with many other new faces. Scale Computing has been growing this year. I haven’t properly introduced myself on the blog yet, so here goes. My name is David Paquette, Product Marketing Manager at Scale Computing, and they call me Doctor P around here (or Dr. P for short). It has been a fantastic year for me, having joined such a great organization, and I am looking forward to an amazing 2017. Keep checking our blog for my latest posts.

Just me, Dr. P

 

Sincerely, from all of us at Scale Computing, thank you so much for all of the support over the past year. We look forward to another big year of announcements and releases to come. Of course, these were just some of the 2016 highlights, so feel free to look back through the various blog posts and press releases for all of the 2016 news.

Happy New Year!

IT Infrastructure: Deploy. Integrate. Repeat.

Have you ever wondered if you are stuck in an IT infrastructure loop, continuously deploying the same types of components and integrating them into an overall infrastructure architecture? Servers for CPU and RAM, storage appliances, hypervisor software, and disaster recovery software/appliances are just some of the different components that you’ve put together from different vendors to create your IT infrastructure.

This model of infrastructure design, combining components from different vendors, has been around for at least a couple of decades. Virtualization has reduced the hardware footprint, but it added one more component, the hypervisor, to the overall mix. As component technologies like compute and storage have evolved alongside the rise of virtualization, they have been adapted to function together but have not necessarily been optimized for efficiency.

Take storage, for example. SANs were an obvious fit for virtualization early on. However, the layers of inefficient storage protocols and the virtual storage appliances that bolted the SAN onto virtualization were never efficient. If not for SSD storage, the performance of these systems would be unacceptable. But IT continues to implement these architectures because it has been done this way for so long, regardless of the inherent inefficiencies. Luckily, the next generation of infrastructure has arrived in the form of hyperconvergence to break this routine.

Hyperconverged infrastructure (HCI) combines compute, storage, virtualization, and even disaster recovery into a single appliance that can be clustered for high availability.  No more purchasing all of the components separately from different vendors, no more making sure all of the components are compatible, and no more dealing with support and maintenance from multiple vendors on different schedules.

Not all HCI systems are equal, though, as some still carry over separate components. Some use third-party hypervisors that require separate licensing. Others retain SAN-style architectures built on virtual storage appliances (VSAs) or other inefficient storage designs that impose excessive resource overhead and rely on SSD caching to overcome their inefficiencies.

Not only does HCI reduce vendor management and complexity, but, when done correctly, it embeds storage in the hypervisor and offers it to VM workloads as a direct-attached, block-access storage system. This significantly improves the I/O performance of storage for virtualization. The architecture provides excellent performance on spinning disk, so when SSD is added as a second storage tier, storage performance improves even further. And because the storage is included in the appliance, there is no separate SAN appliance to manage.

HCI goes even further in simplifying IT infrastructure by allowing the whole system to be managed from a single interface. Because the architecture is managed as a single unit and prevalidated, there is no effort spent making sure the various components work together. When the system is truly hyperconverged, including the hypervisor, there is greater control over automation, so software and firmware updates can be done without disruption to running VMs. And for scaling out, new appliances can be added to the cluster without disruption as well.

The result of these simplifications and improvements is an infrastructure that can be deployed quickly, scaled easily, and managed with little effort. It embodies many of the benefits of the cloud, where the infrastructure is virtually transparent. Administrators can focus their time on apps and processes rather than on hardware and infrastructure.

Infrastructure should no longer require certified storage experts, virtualization experts, or any other kind of hardware experts. Administrators should no longer need entire weekends or month-long projects to deploy and integrate infrastructure, or spend sleepless nights dealing with failures. Hyperconvergence breaks the cycle of assembling infrastructure from a variety of vendors and components. Instead, it makes infrastructure a simple, available, and trusted commodity.


Scale Computing Keeps Storage Simple and Efficient

Hyperconvergence is the combination of storage, compute, and virtualization. In a traditional virtualization architecture, combining these three components from different vendors can be complex and unwieldy without the right experts and administrators. When they are hyperconverged into a single solution, that complexity can be eliminated, if done correctly.

At Scale Computing, we looked at the traditional architecture to identify the complexity we wanted to eliminate. The storage architecture that used SAN or NAS storage for virtualization turned out to be very complex. To translate storage from the SAN or NAS to a virtual machine, I/O had to traverse seven layers of object files, file systems, and protocols to get from the VM to the hardware. Why was this the case?

Because the storage system and the hypervisor were from different vendors, and not designed specifically to work with each other, they needed these layers of protocol translation to integrate. The solution at Scale Computing for our HC3 was to own the hypervisor (HyperCore OS) and the storage system (SCRIBE) so we could eliminate these extra layers and make storage work with VMs just like direct attached storage works with a traditional server. I call it a Block Access, Direct Attached Storage System because I like the acronym.

Why didn’t other “hyperconverged” vendors do the same? Primarily because they are not really hyperconverged and they don’t own the hypervisor. As with traditional virtualization architectures, having the storage and the hypervisor come from different vendors prevents efficiently integrated storage for VMs. These are storage systems designed to support one or more third-party hypervisors, and they generally use virtual storage appliances (VSAs) with more or less the same storage architecture as the traditional virtualization I mentioned earlier.

VSAs not only add to the inefficiency but also consume CPU and RAM resources that could otherwise be used by VM workloads. To overcome these inefficiencies, these solutions lean on flash storage for caching to avoid performance issues. In some cases, they have even added extra processing cards to their hardware nodes to offload processing. Without being able to provide efficient storage on commodity hardware, they just can’t compete with the low price AND storage efficiency of the HC3.

The efficient design behind HC3 performance and low price is only part of the story. We also designed the storage to combine all of the disks in a cluster into a single pool that is wide striped across the cluster for redundancy and high availability (a toy sketch of the idea follows below). This pooling also allows complete flexibility of storage usage across all nodes. The storage pool can contain both SSD and HDD tiers, and both tiers are wide striped, highly available, and accessible across the entire virtualization cluster, even on nodes that have no physical SSD drives.
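
To make “wide striping” concrete, here is a toy sketch in Python. The node names, replica count, and round-robin placement are simplified assumptions for illustration, not the actual SCRIBE implementation; the point is only that every block lives on more than one node and that blocks rotate across the whole pool.

```python
# Toy illustration of wide striping with redundancy: every block gets
# written to two distinct nodes, rotating around the pool, so losing
# any single node leaves a surviving copy of every block. This is a
# conceptual sketch only, NOT the actual SCRIBE implementation.

NODES = ["node1", "node2", "node3", "node4"]  # hypothetical 4-node cluster
REPLICAS = 2                                  # copies kept of every block

def place_block(block_id, nodes=NODES, replicas=REPLICAS):
    """Choose `replicas` distinct nodes for a block, round-robin, so
    blocks (and their I/O) are spread evenly across the whole pool."""
    start = block_id % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

# Stripe 8 blocks across the pool: consecutive blocks rotate through
# the cluster and each block always lands on two different nodes.
for block_id in range(8):
    print(block_id, place_block(block_id))
```

Because every disk’s blocks fan out across all nodes, the aggregate performance of the pool is available to every VM, which is also why the SSD tier can serve VMs running on nodes that have no local SSD drives.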

To keep the tiering both simple and efficient, we designed our own automated tiering mechanism that utilizes the SSD storage tier for the blocks of data with the highest I/O. By default, the storage optimizes the SSD tier for the best overall storage efficiency without anything to manage. We wanted to eliminate the idea that someone would need a degree or certification in storage to use virtualization.

We did recognize that users might occasionally need some control over storage performance, so we implemented a simple tuning mechanism that gives each disk in a cluster a relative level of SSD utilization priority. This means you can tune a disk up or down, on the fly, if you know it requires less or more I/O and SSD than other disks. You don’t need to know how much SSD a disk needs, only whether it needs less or more than the other disks in the cluster, and the automation takes care of the rest. We included a total of 12 levels of prioritization, from 0 (no SSD at all) up to 11 (all data on SSD, if available). A toy sketch of the weighting idea follows below.
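
For a sense of how a relative 0-11 priority can drive placement, here is a toy sketch in Python. The disk names, sizes, and heat numbers are hypothetical, and the real HEAT engine works automatically at the block level rather than greedily at the disk level; this only illustrates the weighting idea.

```python
# Conceptual sketch of relative SSD priority (0-11). This illustrates
# the idea of weighted tiering, NOT the actual HEAT implementation:
# each virtual disk's priority scales its measured I/O heat, and the
# highest weighted scores win the limited SSD capacity.

def rank_for_ssd(disks, ssd_capacity_gb):
    """disks: list of (name, size_gb, io_heat, priority) tuples, with
    priority 0-11. Greedily grants SSD to the hottest weighted disks;
    priority 0 means the disk never touches SSD."""
    eligible = [(heat * prio, name, size)
                for name, size, heat, prio in disks if prio > 0]
    placed, free_gb = [], ssd_capacity_gb
    for score, name, size in sorted(eligible, reverse=True):
        if size <= free_gb:              # hottest weighted data fits first
            placed.append(name)
            free_gb -= size
    return placed

# Hypothetical disks: a busy database tuned up to 8, a file share at a
# mid-range 4, and a cold archive pinned off SSD entirely with 0.
disks = [("sql-data",   200, 900, 8),
         ("file-share", 400, 700, 4),
         ("archive",    800, 100, 0)]
print(rank_for_ssd(disks, ssd_capacity_gb=500))  # -> ['sql-data']
```

The takeaway is that the numbers are relative: tuning one disk up simply makes its data compete harder for the same SSD capacity, so you never have to size SSD allocations by hand.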


The result of all of these design considerations is an HC3 built for simplicity: efficiency, ease of use, and low cost. We’re different and we want to be. It’s as simple as that.


The King is Dead. Long Live the King!

With a title like Death by 1,000 cuts: Mainstream storage array suppliers are bleeding, I couldn’t help but read Chris Mellor’s article on the decline of traditional storage arrays. It starts off just as strong with:

Great beasts can be killed by a 1,000 cuts, bleeding to death from the myriad slashes in their bodies – none of which, on their own, is a killer. And this, it seems, is the way things are going for big-brand storage arrays, as upstarts slice away at the market…

And his reasons why are spot on, matching what we have seen in our target customer segment for HC3.

the classic storage array was under attack because it was becoming too limiting, complex and expensive for more and more use-cases.

Looking at our own use case for HC3, storage array adoption in our target segment (the SMB) rose with the demand for virtualization, since shared storage enabled things like live migration and failover of VMs. The storage array was a necessary evil: the assurance that critical workloads weren’t going to go down for days or even weeks in the event of a hardware failure.

Video: How to Add Resources to HC3

With an infrastructure refresh on the horizon, a common question asked in IT used to be:

“What should I buy today that will meet my storage demand over the next X years?”

Historically, that was because IT groups needed to purchase today what they would need 3-5 years from now in order to push out the painful forklift upgrade that would inevitably come with reaching maximum capacity in a monolithic storage array. After the introduction of “scale-out” storage (where you were no longer locked into the capacity limitations of a single physical storage array), the question became:

“What should I buy today that will grow alongside my storage demand over the next X years?”

This meant that customers could buy what they needed for storage today knowing that they could add to their environment to scale-out the storage capacity and performance down the road.  There were no forklift upgrades or data migrations to deal with.  Instead, it offered the seamless scaling of storage resources to match the needs of the business.

Now, with hyperconverged solutions like HC3, where the scale-out architecture allows users to easily add nodes to the infrastructure to scale out both compute and storage, the question has changed yet again. Hyperconverged customers now ask themselves:

“What should I buy today that will grow alongside my infrastructure demand over the next X years?”

Adding nodes to HC3 is simple. After racking and plugging in power and networking, users simply assign an IP address and initialize the node. HyperCore (HC3’s ultra-easy software) takes over from there, seamlessly aggregating the resources of that node with the rest of the HC3 cluster. There is no disruption to the running VMs. In fact, the newly added spindles are immediately available to the running VMs, giving a performance boost with each node added to the cluster. Check out the demo below to see HC3’s scalability in action!

 

 

SMB IT Challenges

There was a recent article that focused on the benefits that city, state, and local governments have gained from implementing HyperConvergence. (Side note: the article was brought to my attention in a new HyperConvergence group on LinkedIn where such articles are posted and discussed, for anyone interested in joining.) The benefits cited in the article were:

  • Ease of management,
  • Fault tolerance,
  • Redundancy, and late in the article…
  • Scalability.

I’m sure it isn’t surprising given our core messaging around Scale’s HC3 (Simplicity, High Availability and Scalability), but I agree wholeheartedly with the assessment.

It occurred to me that the writer could have picked literally any industry and told the same story. When the IT Director from Cochise County, AZ says:

“I’ve seen an uptick in hardware failures that are directly related to our aging servers”,

it could just as easily have been the Director of IT at the manufacturing company down the street. Or when the City of Brighton, Colorado’s Assistant Director of IT is quoted as saying,

“The demand (for storage and compute resources) kept growing and IT had to grow along with it”,

that could have come out of the mouth of just about any of the customers I talk to each week.