With an infrastructure refresh on the horizon, a common question asked in IT used to be:
“What should I buy today that will meet my storage demand over the next X years?”
Historically, that is because IT groups needed to purchase today what they would need 3-5 years from now in order to push out a painful forklift upgrade that would inevitably come with reaching max capacity in a monolithic storage array. After the introduction of “scale-out” storage (where you were no longer locked into the capacity limitations of a single physical storage array), the question then became:
“What should I buy today that will grow alongside my storage demand over the next X years?”
This meant that customers could buy what they needed for storage today knowing that they could add to their environment to scale-out the storage capacity and performance down the road. There were no forklift upgrades or data migrations to deal with. Instead, it offered the seamless scaling of storage resources to match the needs of the business.
Now, with hyperconverged solutions like HC3, where the scale-out architecture allows users to easily add nodes to the infrastructure to scale out both compute and storage, the question has changed yet again. Hyperconverged customers now ask themselves:
“What should I buy today that will grow alongside my infrastructure demand over the next X years?”
Adding nodes to HC3 is simple. After racking and plugging in power and networking, users simply assign an IP address and initialize the node. HyperCore (HC3’s ultra-easy software) then takes over from there, seamlessly aggregating that node’s resources with the rest of the HC3 cluster. There is no disruption to the running VMs. In fact, the newly added spindles are immediately available to the running VMs, giving an immediate performance boost with each node added to the cluster. Check out the demo below to see HC3’s scalability in action!
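Conceptually, the aggregation HyperCore performs when a node joins can be pictured with a toy model. The class and numbers below are purely illustrative, not HC3’s actual internals: the point is that joining a node simply grows the shared pool.

```python
# Toy model of scale-out aggregation: each node contributes its compute
# and storage to one shared pool. Names and numbers are illustrative only,
# not HC3's real data structures.
class Cluster:
    def __init__(self):
        self.nodes = []

    def add_node(self, cores, ram_gb, disk_tb):
        # Joining a node grows the pooled totals; running VMs are
        # untouched and immediately see the extra capacity.
        self.nodes.append({"cores": cores, "ram_gb": ram_gb, "disk_tb": disk_tb})

    def totals(self):
        return {
            "cores": sum(n["cores"] for n in self.nodes),
            "ram_gb": sum(n["ram_gb"] for n in self.nodes),
            "disk_tb": sum(n["disk_tb"] for n in self.nodes),
        }

cluster = Cluster()
for _ in range(3):
    cluster.add_node(cores=8, ram_gb=64, disk_tb=4)   # existing three-node cluster
cluster.add_node(cores=8, ram_gb=64, disk_tb=4)       # the newly racked node
print(cluster.totals())  # {'cores': 32, 'ram_gb': 256, 'disk_tb': 16}
```

Note that “scaling out” here is nothing more than appending to the pool; no data migration or per-node reconfiguration appears anywhere in the model.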
There was a recent article on Network Computing regarding the Next Generation Data Center that got me thinking about our SMB target customer and the next generation server room. Both the enterprise and the SMB face the influx of traffic growth described in the article (clearly at different levels, but an influx nonetheless). So how will the SMB cope? How will an IT organization with limited time and money react? By focusing on simplicity in the infrastructure.
Elimination of Legacy Storage Protocols through Hypervisor Convergence
There is an ongoing trend to virtualize workloads in the SMB that traditionally meant adding a SAN or a NAS to provide shared storage for high availability. With the introduction of Hypervisor Converged architectures through products like Scale’s HC3, that requirement no longer exists. In this model, end users can take advantage of the benefits of high availability without the complexity that comes with legacy storage protocols like iSCSI or NFS. Not only does this reduce the management overhead of the shared storage, it also simplifies the vendor support model dramatically. In the event of an issue, a single vendor can be called for support with no ability to place the blame on another component in the stack.
Simplicity in Scaling
Moore’s Law continues to hold as better, faster and cheaper equipment becomes available year after year. By implementing a scale-out architecture in the infrastructure, IT organizations can take advantage of this by purchasing what they need today, knowing that they can purchase equipment at tomorrow’s prices to scale out their resources when the need arises. The ability to mix and match hardware types in a hypervisor converged model also means that users have granularity in their scaling to match the requirements of the workloads at that time (such as adding a storage-only node to an HC3 compute cluster to scale out only the storage resources).
Many of Scale’s HC3 customers are coming to us from a traditional Do-It-Yourself virtualization environment where they combined piecemeal parts including VMware’s hypervisor to create a complex solution that provides the high availability expected in their infrastructure. Fed up with the complexity (or more often the vTax on a licensing renewal) associated with that setup, they eventually find HC3 as a solution to provide the simplicity, scalability and high availability needed at an affordable price.
I just returned from the Midmarket CIO Forum last week, where 98% of the CIOs I spoke to had implemented some form of the VMware environment described above (the other 2% were Hyper-V, but the story of vTax still rang true!). We met with 7 boardrooms full of CIOs who all reacted the same way to the demo of HC3: “This sounds too good to be true!” To which I like to reply, “Yeah, we get that a lot.” 🙂
After the initial shock of seeing HC3 for the first time, pragmatism inevitably takes over. The questions then become, “How do I migrate from VMware to HC3?” or “How can I use HC3 alongside my existing VMware environment?” I spent the majority of my week talking through the transition strategies we have seen from some of the 600+ HC3 customers when migrating VMs from VMware to HC3 (the V2V process).
“Good news everyone!” HC3x has just been announced. For the last few months, we have internally referred to this platform under the code name “MegaFonzie.” Those of you familiar with Futurama probably know that Mega Fonzies are units used to determine how cool someone is (hence the picture of Professor Farnsworth) …and HC3x is off the charts! If your response is, “Balderdash…I’ll be the judge of what’s cool” then grab your cool-o-meter and let’s walk through this new hardware together.
I am excited to announce that Scale has officially moved ICOS 4.2 out of beta and into limited availability (meaning that our support team can upgrade customers for use in a production environment)! The theme for this release was more advanced networking functionality and included features such as:
Support for VLAN tagging
Support for adding multiple network interface cards (NICs) to VMs
Connect or disconnect network interface cards (NICs) on VMs
In the video below, I walk through the simple setup of a VM to VM private network which highlights these features.
For more information on this release, please see the release notes, which can be found on the partner portal and customer portal. If you have any questions or would like to see a demo of this new functionality, please give us a call!
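For readers newer to VLANs, it may help to see what “VLAN tagging” actually means on the wire. The sketch below inserts a standard 802.1Q tag into an Ethernet frame; it illustrates the IEEE standard itself, not how HyperCore implements the feature.

```python
import struct

def add_vlan_tag(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q VLAN tag into an untagged Ethernet frame.

    Per IEEE 802.1Q, the 4-byte tag (TPID 0x8100 followed by the tag
    control info) sits between the source MAC and the original EtherType.
    Generic illustration only -- not HC3 internals.
    """
    if not 0 <= vlan_id < 4096:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = (priority << 13) | vlan_id       # PCP (3 bits) + DEI (0) + VID (12 bits)
    tag = struct.pack("!HH", 0x8100, tci)
    return frame[:12] + tag + frame[12:]   # dst MAC (6) + src MAC (6), then the tag

# A minimal untagged frame: dst MAC, src MAC, EtherType (IPv4), payload
frame = bytes(6) + bytes(6) + b"\x08\x00" + b"payload"
tagged = add_vlan_tag(frame, vlan_id=42)
assert tagged[12:14] == b"\x81\x00"                        # TPID marks it as tagged
assert int.from_bytes(tagged[14:16], "big") & 0x0FFF == 42  # VLAN ID carried in the TCI
```

Because the VLAN ID rides inside each frame, switches and hypervisors can keep traffic for different VLANs isolated on the same physical wire, which is what makes the VM-to-VM private network in the video possible.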
The Cloud is cool. It’s the latest thing! Everyone wants to touch it and have access to it. All the big vendors make stuff that supposedly delivers the Cloud to you. But…what’s the Cloud? I have probably met more entrepreneurs over the past 5 years that were doing Cloud Computing, or Cloud Storage, or building a Cloud Provider, or providing Cloud Services or Apps running in the Cloud, or building infrastructure for the Cloud than in any other technology area.
Two answers are usually missing when you ask about them: Who is the actual customer, and do they need your cool new Cloudy-thing? And how will YOU make money so you can sustain your business?
A few posts ago we walked through the process of creating a new VM on HC3. One thing you may have noticed is that nowhere in the process did we specify a physical location for that VM to be created, and even when we powered it on, once again we did not specify a physical location for that VM to run. We just created it and turned it on. Beyond the simplicity of that workflow, it highlights some very important concepts that make HC3 uniquely scalable and flexible.
In the last post, we discussed how HC3 VM virtual hard disks are actually stored as files in the qcow2 format, and even how we can access those files for advanced operations. But once again, we didn’t discuss (or need to discuss) where the data for those files physically resided.
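Since a virtual disk is just a qcow2 file, its header can be read like any other binary format. The sketch below parses the fixed fields defined by the qcow2 specification (big-endian magic, version, and virtual size); it is an illustration of the file format, not a tool Scale ships.

```python
import struct

QCOW2_MAGIC = 0x514649FB  # the bytes "QFI\xfb" that open every qcow2 file

def qcow2_virtual_size(header: bytes) -> int:
    """Return the virtual disk size recorded in a qcow2 header.

    Per the qcow2 spec, bytes 0-3 are the magic, bytes 4-7 the version,
    and bytes 24-31 the virtual size in bytes, all big-endian.
    """
    magic, version = struct.unpack_from(">II", header, 0)
    if magic != QCOW2_MAGIC:
        raise ValueError("not a qcow2 image")
    (size,) = struct.unpack_from(">Q", header, 24)
    return size

# Build a minimal fake header describing a 10 GiB virtual disk
header = bytearray(32)
struct.pack_into(">II", header, 0, QCOW2_MAGIC, 3)      # magic + version 3
struct.pack_into(">Q", header, 24, 10 * 1024**3)        # virtual size
print(qcow2_virtual_size(bytes(header)))  # 10737418240
```

Note that the header records the *virtual* size the guest sees; the file on disk can be much smaller, since qcow2 allocates clusters only as the guest writes to them.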
We will dig into this much further but the real magic of HC3 is that all nodes share access to a single pool of storage – all the disks in all nodes of the cluster are pooled together as a common resource for storing and retrieving data. That happens automatically when the HC3 cluster is “initialized” and requires no additional management by the user, even as additional nodes are added to the resource pool down the line.
Because all HC3-capable nodes read and write data using the entire pool of storage, and all HC3 nodes have access to all the data, any virtual machine can be started on any node of the HC3 cluster based on the availability of the compute resources that VM requires. For example, if a new VM requires 16GB of RAM, only certain nodes may currently have that much available, and HC3 makes this node selection automatically. HC3 allows running VMs to be “live migrated” to other HC3 nodes without the VM being shut down and with no noticeable impact to the workload being run or the clients connecting to it. In addition, should an HC3 node fail, any VMs that were running on that node are quickly restarted on the remaining nodes, since every HC3 node has access to the same pool of redundant storage. Once again, because the storage is available to all nodes in the HC3 system, the primary factor in determining where to fail over VMs is the availability of the compute resources each workload requires, and HC3 determines the optimal location automatically.
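The placement decision described above boils down to a simple fit check: because storage is a shared pool, only compute availability matters. Here is a toy version of that idea (the tie-breaking rule and the data layout are my own illustrative choices, not HC3’s actual scheduler):

```python
def place_vm(nodes, ram_needed_gb):
    """Pick a node with enough free RAM for a new VM, or None if no node fits.

    Storage is a shared pool in this model, so the only per-node
    constraint is free compute (RAM). Toy illustration only --
    not HC3's real placement algorithm.
    """
    candidates = [n for n in nodes if n["free_ram_gb"] >= ram_needed_gb]
    if not candidates:
        return None
    # Illustrative tie-break: prefer the node with the most headroom
    return max(candidates, key=lambda n: n["free_ram_gb"])

nodes = [
    {"name": "node1", "free_ram_gb": 8},
    {"name": "node2", "free_ram_gb": 24},
    {"name": "node3", "free_ram_gb": 12},
]
chosen = place_vm(nodes, ram_needed_gb=16)
print(chosen["name"])  # node2 -- the only node with 16GB free
```

Failover works the same way: when a node disappears, each of its VMs is simply run back through the same fit check against the surviving nodes, which is possible only because no VM’s data is pinned to the node it was running on.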
For those who are more visual (like me) the following diagram may help to picture this more clearly.
In the next post of this series, we will dive into the HC3 distributed storage layer and detail how data is stored across all the physical disks in the cluster providing data redundancy should a disk drive fail and aggregating the I/O performance of all the drives in the system.
As a follow-up to my last post, Virtualization So Easy Even a Four-Year-Old Can Do It, I want to continue to focus on the simplicity that virtualization can and should be. Yet explaining what virtualization actually is can be a complicated task with anyone not in technology. The hypervisor splits the computers. Huh? They are virtual machines! That just sounds like bad 3D from the ’90s. There are many machines in one. That just sounds like too much information (and awkward). You as an IT professional should be able to explain to your grandma not only what you do, but virtualization as well. Let me share the best ways I’ve learned over the years for doing just that.
In the last post of this series, we walked through the process of creating a VM on the HC3 platform. In just a few clicks, you create a virtual machine container and allocate CPU, memory, and storage resources to that VM. When you start the VM and install your operating system and applications, that storage is presented as virtual hard disks and will appear like a virtual c:\ or d:\ drive to your applications.
I have heard something out in the market a few times lately, something that really bothers me. What I’ve heard is a new way for our competitors to try to marginalize us with our customers. It goes something like this: “Scale is a great solution if you don’t have much budget for virtualization. But if you do have the budget, you should go for the ‘premium solution’ from the name brand vendors.” That is, traditional servers + SAN + storage switching + virtualization software suite. We usually hear HP, Dell, IBM or even Cisco servers along with EMC, NetApp, or other storage, along with VMware.