Tag Archives: HC3

Manufacturing Runs on HC3

Manufacturing is at the heart of any economy. The industrial revolution continues to roll forward with ongoing innovation and automation. The role of IT is increasing in every aspect of manufacturing, from design to shipping. Manufacturers require reliable, powerful IT infrastructure to implement and maintain efficient manufacturing operations.

Scale Computing’s HC3 hyperconverged infrastructure solution is ideal for manufacturing companies that need powerful IT infrastructure without complexity. Complexity equals cost in IT and the added cost of complexity from traditional IT solutions drives up manufacturing costs. HC3 makes IT infrastructure simple, scalable, highly available, and affordable. In order for manufacturers to remain competitive in modern global markets, they need modern IT infrastructure solutions like HC3.

KIB Electronics is just one manufacturer that has chosen HC3 for IT infrastructure.

Many other manufacturers have chosen HC3 and have also chosen to share their success stories. You can see those stories by clicking on the manufacturers below.

Penlon

Midwest Acoust-A-Fiber

Cascade Lumber Company

Poster Display

Hydradyne

Whether it is industry specific applications like ERP and CAD, or general IT applications like messaging, finance, CRM, and office tools, HC3 is a flexible, scalable, and easy-to-use platform that supports them all. HC3 is modern infrastructure for modern manufacturing.

 

The Price Is Right

The Price Is Right is one of the longest-running game shows on television and one of the most beloved. I grew up watching it hosted by Bob Barker, and it is still going today, hosted by Drew Carey. The show features a variety of challenges for players, but most of them involve guessing at the retail price of various products ranging from groceries all the way up to vehicles and vacation packages. The concept of guessing at prices reminded me of shopping for IT solutions.

I’m sure most of you know what I am talking about. You start researching various hardware and software solutions, but you quickly find that the price is not readily available. You have to contact the vendor for pricing. Why? Often they can’t even give you a ballpark estimate. Why? The answer is simple, but awful. They want to charge you the highest price possible, and the only way to do that is to withhold pricing until they have sufficiently worked you over with a double whammy of sales and marketing.

IT is a cost center. We all accept this. Organizations don’t want to spend any more on IT than is necessary, but it is necessary, at least to a point. These vendors want to artificially build up that need for more and more before they hit you with a price because they want you to spend more.

Personally, I hate this practice of withholding pricing. I want to have an idea of what a solution costs up front when I am researching. I don’t need a sales guy smooth talking me to soften the blow of the price. I’m an adult. I know how money works. This practice is all too common in IT solution sales. That’s why I love Scale Computing. We are different.

Did you see what I did there? Pricing for our HC3 systems. Not all the pricing; we have a lot of configuration options, and it would be a feat of engineering to try to show everything. Base pricing, to give you a starting point. Pricing that includes 1 year of maintenance and support. Why are we different? Well, we just think our pricing is fair to begin with. We don’t want you to have to guess. Don’t guess. Those are per-node prices, and we gave you a couple of examples to get you started. We just want you to get a great solution at a great price.

Can you afford it? We will work with you to get you exact pricing on the configuration you need and nothing more. We can do an assessment of what you need and show you some of the costs of integration, management, maintenance, and support that come with or without our HC3 solution. If the numbers don’t add up, that’s fine. We won’t sell you a solution that you can’t afford, don’t want, or won’t work for you. We think you will want it and probably can afford it. In fact, you might find out that you can’t afford NOT to have it.

By the way, that pricing is available in our HC3 Sales Brochure right on our website. For more information on some of the tertiary costs of IT ownership, check out this white paper, “How HC3 Lowers the Total Cost of Infrastructure”.

5 Reasons to Refresh IT Infrastructure

Nothing lasts forever, especially IT infrastructure technology. In the ever-evolving world of IT hardware and software, change is inevitable. Right now, organizations all over the world are seeing the signs they need to refresh, whether they know it or not. End of Life or End of Support are obvious reasons, but what other reasons are being realized or possibly ignored?

#1 – Performance becomes a pain.

IT is an engine that drives business. Over time, increased load and decreased efficiency take their toll on the performance of the IT engine. Performance issues can be dealt with in a number of ways, from seeking out and fixing specific hardware bottlenecks to improving processes for greater efficiency. As with an automobile, there is a breaking point where the cost of repair outweighs the cost of replacement. Performance issues can be notoriously hard to diagnose without expertise. For that reason, the pain of performance issues often demands an IT infrastructure refresh.

#2 – Datacenters need consolidation.

For a variety of reasons, sometimes datacenters need consolidation. The need may arise from an acquisition, restructuring, or relocation, but it may not make sense to consolidate what already exists. Often different sites and even different departments use different infrastructure technologies for no particular reason. While it is possible for a number of different SAN/NAS devices and hypervisor architectures to coexist, doing so adds a load of complexity to the combined datacenter. This is a classic opportunity for an infrastructure refresh to eliminate complexity.

#3 – Capacity is limited.

When you bought your SAN, you had space for extra shelves for growth, but you filled that faster than expected. Capacity planning isn’t an exact science, not even close. You could add another storage device and cobble the two together, but there might be a better way in scale-out architecture. Software-defined scale-out storage built into hyperconverged infrastructure solutions offers very easy and cost-effective scaling, on demand. Refreshing with hyperconvergence is not only going to help scale storage effectively but also scale RAM and CPU as needed, in a much simpler solution.

#4 – Alice doesn’t work here anymore.

Remember Alice? She was the outsourced IT consultant who designed and built the current infrastructure with her own unique set of expertise. Well, her fees for continuing to fix and manage the solution whenever there was an issue were just too expensive. No one else can quite figure out how anything is supposed to work because she expertly put together some older gear that isn’t even supported anymore. It works for now, thanks to Alice, but if something goes wrong, then what? It could be time to start over and refresh the IT infrastructure with something newer that staff IT administrators can deal with.

#5 – Costs are examined.

IT is a cost center. The challenge is to get the most benefit from that cost. Older technologies like SANs, whether physical or virtual, are costly and were never designed for virtualization in the first place. Hypervisors and management solutions like VMware vSphere come at a high cost in ongoing software licensing fees. There is the cost of integrating all of the storage, servers, virtualization and management software, not to mention backup/DR. Then finally the cost of expertise to manage and maintain these systems. The costs of this traditional virtualization architecture can be overwhelming. Avoiding these high costs is a primary reason IT professionals are looking at technologies like cloud and hyperconvergence to simplify IT.

Whatever the reason for an IT infrastructure refresh, it is an opportunity to lower costs, increase productivity, and plan for future growth. This is why so many are considering hyperconverged infrastructures like HC3 from Scale Computing. It dramatically simplifies IT infrastructure, lowers costs, and allows seamless scaling for future growth. While some workloads may be destined for the cloud, the HC3 virtualization platform provides a simple, secure, and highly available solution for all of your on-prem datacenter needs. If it is time to refresh, take a look at everything HC3 provides and all the complexity it eliminates.

Why HC3 IT Infrastructure Might Not Be For You

Scale Computing makes HC3 hyperconverged infrastructure appliances and clusters for IT organizations around the world with a focus on simplicity, scalability, and availability. But the HC3 IT infrastructure solution might not be for you, for a few reasons.

  • You want to be indispensable for your proprietary knowledge.

You want to be the only person who truly understands your IT Infrastructure. Having designed your infrastructure personally and managing it with your own home-grown scripts, only you have the knowledge and expertise to keep it running. Without you, your IT department is doomed to fail.

HC3 is probably not for you. HC3 was designed to be so simple to use that it can be managed by even a novice IT administrator. HC3 would not allow you to control the infrastructure with proprietary design and secret knowledge that only you could possess. Of course, if you did go with HC3, you’d be a pioneer of new technology who would be an ideal asset for any forward thinking IT department.

  • You are defined by your aging certifications.

You worked hard and paid good money to get certifications in storage systems, virtualization hypervisors, server hardware, and even disaster recovery systems that are still around. You continue to use these same old technologies because you are certified in them, and that gives you leverage for higher salary. Newer technologies hold less interest because they wouldn’t allow you to take advantage of your existing certifications.

HC3 is probably not for you. HC3 is based on new infrastructure architecture that doesn’t require any expensive certifications. Any IT administrator can use HC3 because it was designed to remove reliance on legacy technologies that were too complex and required excessive expertise. HC3 won’t allow you to leverage your certifications in these legacy technologies. Of course, with all of the management time you’d save using HC3, you’d be able to learn new technologies and expand your skills beyond infrastructure.

  • You like going to VMworld every year.

You’ve been using VMware and going to VMworld since 2006 and it is a highlight of your year. You always enjoy reuniting with VMworld regulars and getting out of the office. It isn’t as useful as it was earlier on but you still attend a few sessions along with all of the awesome parties. Life just wouldn’t be the same without attending VMworld.

HC3 is probably not for you. HC3 uses a built-in hypervisor, alleviating the need for VMware software and VMware software licensing. Without VMware, you probably won’t be able to justify your trip to VMworld as a business expense. Of course, with all the money you will likely save going with HC3, your budget might be open to going to even more conferences to help you develop new skills and services to help your business grow even faster.

  • You prefer working late nights and weekends.

The office, or better yet the data center, is a safe place for you. Whether you don’t have the best home life or you prefer to avoid awkward social events, you find working late nights and weekends doing system updates and maintenance a welcome prospect. We get it. Real life can be hard. Solitude, along with the humming of fans and spinning disks, offers an escape from the real world.

HC3 is probably not for you. HC3 is built to eliminate the need to take systems offline for updates and maintenance tasks, so these can be done at any time, including during normal business hours. HC3 doesn’t leave many infrastructure tasks that need to be done late at night or on weekends. Of course, if you did go with HC3, you’d probably have more time and energy to sort out your personal life and make your home and your social life more to your liking.

Summary

HC3 may not be for everyone. When change is difficult to embrace, many choose to stick with the way it has always been done. For others, however, emerging technologies like HC3 are a way to constantly evolve with architecture that lowers costs with simplicity, scalability, and availability for modern IT.

Behind the Scenes: Architecting HC3

Like any other solution vendor, at Scale Computing we are often asked what makes our solution unique. In answer to that query, let’s talk about some of the technical foundation and internal architecture of HC3 and our approach to hyperconvergence.

The Whole Enchilada

With HC3, we own the entire software stack which includes storage, virtualization, backup/DR, and management. Owning the stack is important because it means we have no technology barriers based on access to other vendor technologies to develop the solution. This allows us to build the storage system, hypervisor, backup/DR tools, and management tools that work together in the best way possible.

Storage

At the heart of HC3 is our SCRIBE storage management system. This is a complete storage system developed and built in house specifically for use in HC3. Using a storage striping model similar to RAID 10, SCRIBE stripes storage across every disk of every node in a cluster. All storage in the cluster is always part of a single cluster-wide storage pool, requiring no manual configuration. New storage added to the cluster is automatically added to the storage pool. The only aspect of storage that the administrator manages is creation of virtual disks for VMs.
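To make the wide-striping idea concrete, here is a minimal conceptual sketch in Python. It is not SCRIBE's code or on-disk format, and the class and method names are hypothetical; it only illustrates the RAID 10-style pattern described above, where each logical block lands on two disks on two different nodes and every disk in the cluster belongs to one pool.

```python
# Conceptual sketch only -- hypothetical names, not SCRIBE's implementation.
from itertools import combinations

class WideStripePool:
    def __init__(self, disks):
        # disks: (node_id, disk_id) tuples for every disk in the cluster
        self.disks = list(disks)

    def add_disks(self, new_disks):
        # New capacity simply joins the single cluster-wide pool.
        self.disks.extend(new_disks)

    def placement(self, logical_block):
        # Mirror each block across two disks on *different* nodes, rotating
        # through all valid pairs so writes spread over the whole cluster.
        pairs = [p for p in combinations(self.disks, 2) if p[0][0] != p[1][0]]
        return pairs[logical_block % len(pairs)]

pool = WideStripePool([("node1", "d0"), ("node1", "d1"),
                       ("node2", "d0"), ("node2", "d1"),
                       ("node3", "d0"), ("node3", "d1")])
print(pool.placement(42))   # a mirrored pair on two different nodes
```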

The ease of use of HC3 storage is not even the best part. What is really worth talking about is how the virtual disks for VMs on HC3 access storage blocks from SCRIBE as if they were direct-attached storage consumed on a physical server, with no layered storage protocols. There is no iSCSI, no NFS, no SMB or CIFS, no VMFS, nor any other protocol or file system. There is also no need in SCRIBE for any virtual storage appliance (VSA) VMs, which are notorious resource hogs. The file system laid down by the guest OS in the VM is the only file system in the stack because SCRIBE is not a file system; SCRIBE is a block engine. The absence of these storage protocols that would sit between VMs and virtual disks in other virtualization systems means the I/O paths in HC3 are greatly simplified and thus more efficient.

Had we not owned both the storage and the hypervisor by creating our own SCRIBE storage management system, no existing storage layer would have allowed us to achieve this level of efficient integration with the hypervisor.

Hypervisor

Luckily we did not need to completely reinvent virtualization, but were able to base our own HyperCore hypervisor on industry-trusted, open-source KVM. Having complete control over our KVM-based hypervisor not only allowed us to tightly embed the storage with the hypervisor, but also allowed us to implement our own set of hypervisor features to complete the solution.

One of the ways we were able to improve upon existing standard virtualization features was through our thin cloning capability. We took the advantages of linked cloning, a common feature in other hypervisors, and eliminated the disadvantage of the parent/child dependency. Our thin clones are just as efficient as linked clones but are not vulnerable to issues of dependency on parent VMs.
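To illustrate the difference in a few lines of Python, here is a hypothetical sketch (not HyperCore's implementation): a thin clone copies only the parent's block map and shares data blocks by reference count, so there is no parent/child chain and deleting the original VM cannot break its clones.

```python
# Conceptual sketch only -- hypothetical names, not HyperCore's implementation.
class BlockStore:
    def __init__(self):
        self.refcount = {}                      # block_id -> number of owners

    def retain(self, block_id):
        self.refcount[block_id] = self.refcount.get(block_id, 0) + 1

    def release(self, block_id):
        self.refcount[block_id] -= 1
        if self.refcount[block_id] == 0:
            del self.refcount[block_id]         # block can be reclaimed

class VirtualDisk:
    def __init__(self, store, block_map=None):
        self.store = store
        self.block_map = dict(block_map or {})  # logical offset -> block_id
        for b in self.block_map.values():
            store.retain(b)

    def thin_clone(self):
        # Copies metadata only; no parent/child chain, no data copied.
        return VirtualDisk(self.store, self.block_map)

    def write(self, offset, new_block_id):
        # Copy-on-write: the writer gets a private block; blocks still
        # referenced by other disks are left untouched.
        old = self.block_map.get(offset)
        if old is not None:
            self.store.release(old)
        self.store.retain(new_block_id)
        self.block_map[offset] = new_block_id

    def delete(self):
        # Deleting any disk, including the "parent", only drops refcounts.
        for b in self.block_map.values():
            self.store.release(b)
        self.block_map.clear()
```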

Ownership of the hypervisor allows us to continue to develop new, more advanced virtualization features, as well as giving us complete control over management and security of the solution. One of the most important ways hypervisor ownership has benefited our HC3 customers is in our ability to build in backup and disaster recovery features.

Backup/DR

Even more important than our storage efficiency and development ease, our ownership of the hypervisor and storage allows us to implement a variety of backup and replication capabilities to provide a comprehensive disaster recovery solution built into HC3. Efficient, snapshot-based backup and replication is native to all HC3 VMs and allows us to provide our own hosted DRaaS solution for HC3 customers without requiring any additional software.

Our snapshot-based backup/replication comes with a simple, yet very flexible, scheduling mechanism for intervals as small as every 5 minutes. This provides a very low RPO for DR. We were also able to leverage our thin cloning technology to provide quick and easy failover with an equally efficient change-only restore and failback. We are finding more and more of our customers looking to HC3 to replace their legacy third-party backup and DR solutions.
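As a rough illustration of how an interval-based schedule bounds RPO, here is a hypothetical Python sketch; the function names are placeholders and this is not the HC3 scheduling engine.

```python
# Conceptual sketch only -- hypothetical API, not HyperCore's scheduler.
import time
from datetime import timedelta

def take_snapshot(vm):
    # Stand-in for a near-instant, space-efficient VM snapshot.
    return {"vm": vm, "taken_at": time.time()}

def replicate(snapshot, target_cluster):
    # Stand-in: ship only the blocks unique to this snapshot to the target.
    print(f"replicating {snapshot['vm']} delta to {target_cluster}")

def run_schedule(vm, target_cluster, interval=timedelta(minutes=5)):
    # The interval is the worst-case RPO: at most one interval of changes
    # can be lost if the primary site fails between replications.
    while True:
        replicate(take_snapshot(vm), target_cluster)
        time.sleep(interval.total_seconds())

# run_schedule("erp-vm", "dr-cluster")   # uncomment to run; loops forever
```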

Management

By owning the storage, hypervisor, and backup/DR software, HC3 is able to have a single, unified, web-based management interface for the entire stack. All day-to-day management tasks can be performed from this single interface. The only other interface ever needed is a command line accessed directly on each node for initial cluster configuration during deployment.

The ownership and integration of the entire stack allows for a simple view of both physical and virtual objects within an HC3 system and at-a-glance monitoring. Real-time statistics for disk utilization, CPU utilization, RAM utilization, and IOPS allow administrators to quickly identify resource related issues as they are occurring. Setting up backups and replication and performing failover and failback is also built right into the interface.

Summary

Ownership of the entire software stack from the storage to the hypervisor to the features and management allows Scale Computing to fully focus on efficiency and ease of use. We would not be able to have the same levels of streamlined efficiency, automation, and simplicity by trying to integrate third party solutions.

The simplicity, scalability, and availability of HC3 happen because our talented development team has the freedom to reimagine how infrastructure should be done, avoiding inefficiencies found in other vendor solutions that have been dragged along from pre-virtualization technology.

Scale with Increased Capacity

2016 has been a remarkable year for Scale Computing and one of our biggest achievements was the release of the HC1150 appliance. The HC1150 significantly boosted the power and capacity of our HC1000 series and featured hybrid flash storage at a very affordable price. As a result, the HC1150 is our most popular HC3 model but, of course, we couldn’t stop there.

First, we have begun offering 8TB drives in our HC1000 series appliances, nearly doubling the maximum storage capacity (and actually doubling it on the HC1100). Data sets are ever increasing in size, and this increase in storage capacity means you can grow capacity even faster and more affordably, one node at a time. The unique ability of HC3 to mix and match nodes of varying capacity (and across hardware generations!) means your storage can grow as needed each time you expand your cluster.

Secondly, we have introduced a new HC1150D appliance for pre-sale which doubles the CPU capacity with a second physical processor. CPU can often be a performance bottleneck in scaling out the number of VMs supported. With this increase in CPU capacity, the HC1150D scales out an HC3 cluster to support more compute power across a greater number of VMs. The HC1150D also doubles available RAM configuration up to 512GB per appliance.

Below is a preview of the new configuration ranges and starting pricing for the HC1000 series, including the HC1150D.

[Pricing table: configuration ranges and starting pricing for the HC1000 series, including the HC1150D]

Scale Computing is committed to giving our customers the best virtualization infrastructure on the market and we will keep integrating greater capacity and computing power into our HC3 appliances. Our focus of simplicity, scalability, and availability will continue to drive our innovation to make IT infrastructure more affordable for you. Look for more announcements to come.


HyperCore v6 – A Closer Look at HC3’s New User Interface

They said it couldn’t be done! Scale has taken the easiest HyperConverged user interface and somehow made it simpler in HyperCore v6. HC3 offers a “set it and forget it” approach to IT infrastructure. If we intend for our customers to forget about our product, the user interface has to be extremely intuitive when an event does require an administrator to log in to the system (a new VM/workload request, verifying an already “self-healed” hardware failure, etc.).

HyperCore v6 User Interface – Key Features

  • Streamlined workflows for administrators – 60% reduction in clicks during the VM creation workflow. Quicker access to VM consoles directly from Heads up Display (HUD).
  • New Intuitive Design – With the intelligence of HyperCore handling the heavy lifting of VM failover and data redundancy, administrators often employ a “set it and forget it” mentality where it is only required that they log in periodically to make changes to the system. This requires an intuitive interface with almost no learning curve.
  • Improved Responsiveness – The new HyperCore User Interface is extremely responsive with state changes and VM updates immediately accessible in the UI.
  • Tagging / Grouping – Users can now combine VMs into logical groups via tagging. Set multiple tags for easy filtering.
  • Filtering – Spotlight search functionality that filters VMs based on matching names, descriptions, tags for quick and easy access to VMs in larger environments.
  • Cluster Log – A single source for all of the historical activity on the cluster. Filter alerts by type or search for specific key words using the spotlight search to track historical data on the cluster.
  • UI Notification System – Pop-up notifications for in-progress user actions, alerts, and processes present users with relevant information about active events on the system.
  • Unified Snapshot/Cloning/Replication Functionality – Snapshot, cloning and replication functionality are now integrated into the card view of each VM for easy administration.

 

User Interface Demonstrations

Anyone can say that they have a simple user interface, but it doesn’t count unless you can see that simplicity in action. Check out the demonstrations below:

Creating a VM on HC3 – HyperCore v6

Cloning a VM on HC3 – HyperCore v6

Snapshot a VM on HC3 – HyperCore v6

HyperCore v6 – A Closer Look at Built-in Remote Disaster Recovery

As you saw in last week’s press release, Scale Computing’s HC3 now includes VM-level replication as a key new feature in HyperCore v6. Administrators can now set up replication on a per-VM basis for integrated remote disaster recovery, which builds on the unique snapshot and cloning functionality already built into HyperCore v5. Since the introduction of HyperCore v5, users have been able to manually take near-instant, VM-level snapshots that are easily cloned in an extremely space-efficient manner (“thin clones”).

Now in version 6, HyperCore allows users to set up continuous replication to a secondary HC3 cluster, which will automatically take snapshots of the selected VMs, moving only the unique blocks to the remote site.

Then, to restore on the secondary cluster, simply clone the VM from the latest (or a previous) automated or manual snapshot. Being able to spin up these VMs quickly, and on their own private network, makes disaster recovery testing a breeze. Of course, if this isn’t a test and your VM at the secondary site is now production, HC3 continues to track the unique blocks that are created and ONLY sends those blocks back to the primary site when it’s time to fail back.
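The block tracking behind this is easiest to see in a small sketch. The structures and values below are hypothetical and simplified, not HyperCore's wire protocol; they only show why an initial replication sends everything while a failback after a short DR run sends almost nothing.

```python
# Conceptual sketch only -- hypothetical data, not HyperCore's wire protocol.
def changed_blocks(previous_snapshot, current_snapshot):
    """Return the offset -> block entries new or changed since previous_snapshot.

    Each snapshot is modeled as a dict of logical offset -> block fingerprint.
    """
    prev = previous_snapshot or {}
    return {off: blk for off, blk in current_snapshot.items()
            if prev.get(off) != blk}

# Initial replication: everything is "new" relative to an empty baseline.
primary_snap = {0: "a1", 1: "b7", 2: "c3"}
print(changed_blocks(None, primary_snap))        # all three blocks cross the wire

# Failback: only what the DR copy wrote while it was production goes back.
dr_snap = {0: "a1", 1: "b7", 2: "c3", 3: "d9"}
print(changed_blocks(primary_snap, dr_snap))     # {3: 'd9'}
```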

Replication Highlights:

  • Continuous VM-level Replication – HyperCore makes use of its space-efficient snapshot technology to replicate to a secondary site, tracking only the blocks unique to each snapshot and sending only the changed blocks.
  • Low RPO/RTO – Simply “clone” a snapshot on the target cluster for the manual failover of a VM that is immediately bootable.
  • Simple Disaster Recovery Testing – Testing a DR infrastructure plan is now as simple as cloning a snapshot on the target cluster and starting a VM. No disruption to ongoing replication.
  • Easy Failback after Disaster Recovery – After running a VM at the DR site, simply replicate the changed data back to the primary site for simple failback.

Bring on the Demo!

There is nothing quite like a demonstration of this new technology. In this video you’ll see a number of things….

  1. Remote Connection Setup (0:08) – You’ll see me create a connection from my primary cluster (left) to a secondary cluster (right). Once the clusters are securely connected, I can then enable replication on any VMs between those two clusters.
  2. Replication Setup (0:40) and Initial Replication (1:05) – After cloning a VM, you’ll see me set up replication on that VM to the secondary cluster. The initial replication is time-lapsed, but you’ll see the progress in the snapshot view on the primary cluster (left) and, after it completes, the clone-able snapshot on the secondary cluster.
  3. Failover Test 1 (1:38) Automated Snapshot – I clone the VM from the snapshot, which is immediately bootable. That’s about as easy as it gets for DR testing!
  4. Failover Test 2 (1:58) Manual Snapshot – After making some changes to the VM (“replication” file on the desktop), I create a manual snapshot. Notice that the blocks unique to that snapshot are tracked separately from the initial replication snapshot (3:32). When I clone from the manual snapshot, you’ll see the “replication” text file appear on the desktop. DR plan tested again!
  5. Failback (4:30) – After making changes to the cloned VM on the secondary site (“Replication – Rollback”), I simply set up replication on the cloned VM back to the primary cluster. Since the majority of the data already exists at the primary site, it takes almost no time for my minor changes to replicate back. Once there, I simply clone the snapshot and I’m back in action on the primary cluster. (Note: Here (5:23) I also disconnect the NIC to spin this VM up without conflicting with my actual production VM…a nice trick for that DR testing!).

HyperCore v5 – A Closer Look at One Click Rolling Upgrades

As noted in a previous post, HyperCore v5 is now Generally Available and shipping on all HC3 node types. In this “A Closer Look…” series of blog posts we’ll be going through the majority of these new features in more detail. Today’s topic…One Click Rolling Upgrades:

  • Non-Disruptive / Rolling Updates ‒ HC3 clusters can be upgraded with no downtime or maintenance window. Workloads are automatically live-migrated to other nodes in the HC3 cluster to allow for node upgrades, even if such an upgrade requires a node reboot. Workloads are then returned to the node after the upgrade is complete.

 

Included in HyperCore v5 is our one click rolling upgrade feature…and our customers love it! Customers covered under ScaleCare – our 24×7 support offering that covers both hardware and software – are alerted of HyperCore updates as they become generally available via the user interface. There is nothing more to license when new updates become available, which means that as new features are developed for HyperCore, our current users can take full advantage of them.

When a user kicks off an upgrade, this sets into motion a series of events that updates the nodes in the cluster while keeping the VMs up and running throughout the process. The upgrade starts with a single node by live migrating the VMs from that node to the other nodes in the cluster. Keep in mind that the best practice is to keep enough resources available to tolerate a node failure in your environment. This same concept holds true for rolling upgrades, and users are alerted if they do not meet this condition (and prevented from upgrading).

[Screenshot: “Insufficient Free Memory” warning]

After the VMs are live migrated off of the first node, the full OS stack is updated, rebooting the node if required. Once that node is brought back online and has rejoined the cluster, the VMs are returned to their original positions, and the upgrade process moves on to node 2, repeating the process. This continues through each node in the cluster until the system in its entirety is running the latest code. No VM downtime, no maintenance window required.
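In pseudocode terms, the rolling upgrade loop looks roughly like the Python sketch below. The cluster and node methods are hypothetical stand-ins, not the HyperCore updater; the sketch just mirrors the sequence described above: check capacity, evacuate a node, update it, wait for it to rejoin, restore its VMs, and move on.

```python
# Conceptual sketch only -- hypothetical cluster/node methods, not the
# HyperCore updater. It mirrors the rolling-upgrade sequence described above.

def rolling_upgrade(cluster):
    # Refuse to start unless the cluster can absorb the loss of one node's
    # resources, the same condition checked for failover tolerance.
    if not cluster.can_tolerate_node_loss():
        raise RuntimeError("Insufficient free resources; upgrade blocked")

    for node in cluster.nodes:
        evacuated = list(node.vms)
        for vm in evacuated:
            # VMs stay running; they are live-migrated to the other nodes.
            cluster.live_migrate(vm, cluster.pick_target(exclude=node))

        node.apply_update()                 # full OS/HyperCore stack update
        if node.needs_reboot:
            node.reboot()
        cluster.wait_until_rejoined(node)

        for vm in evacuated:                # restore original placement
            cluster.live_migrate(vm, node)
```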

 

HyperConvergence for the SMB

Scott D. Lowe authored a fantastic article on HyperConverged.org last week that focused on where HyperConvergence is NOT a fit.  It is not an angle you hear often from a proponent of HyperConvergence and I have to admit…I like it.

At Scale, we have a laser-like focus on serving the IT infrastructure needs of small-to-medium sized businesses. Similar to Scott Lowe’s approach in his article, it is as important to define our target customer as it is to define who is NOT our target customer. When it comes down to it, a large company that has IT employees specializing in every component of the infrastructure (think SAN or network admin, etc.) may never fully appreciate the simplicity of HC3 or may even be somewhat threatened by it.