Tag Archives: ICOS

HC3x: Introducing Scale Computing’s all performance SAS product line

“Good news everyone!” HC3x has just been announced.  For the last few months, we have internally referred to this platform under the code name “MegaFonzie.”  Those of you familiar with Futurama probably know that Mega Fonzies are units used to determine how cool someone is (hence the picture of Professor Farnsworth)…and HC3x is off the charts!  If your response is, “Balderdash…I’ll be the judge of what’s cool,” then grab your cool-o-meter and let’s walk through this new hardware together.

Scale Computing ICOS 4.2: Support for VLAN tagging

I am excited to announce that Scale has officially moved ICOS 4.2 out of beta and into limited availability (meaning that our support team can upgrade customers for use in a production environment)!  The theme for this release was more advanced networking functionality, including features such as:

  • Support for VLAN tagging
  • Support for adding multiple network interface cards (NICs) to VMs
  • Support for connecting and disconnecting NICs on VMs

In the video below, I walk through the simple setup of a VM-to-VM private network, which highlights these features.
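For readers curious about what a tagged interface looks like outside of HC3, here is a rough Python sketch of the same concept on a generic Linux/KVM host using iproute2. This is illustration only, not how ICOS configures VLANs internally; the parent interface name and VLAN ID are assumptions.

```python
# Illustration only: HC3/ICOS configures VLAN tagging for you through the UI.
# This sketch shows the equivalent idea on a generic Linux/KVM host using
# iproute2. The interface name "eth0" and VLAN ID 100 are assumptions.
import subprocess

def create_vlan_subinterface(parent: str = "eth0", vlan_id: int = 100) -> None:
    """Create a tagged VLAN sub-interface (e.g. eth0.100) and bring it up."""
    subprocess.run(
        ["ip", "link", "add", "link", parent,
         "name", f"{parent}.{vlan_id}", "type", "vlan", "id", str(vlan_id)],
        check=True,
    )
    subprocess.run(["ip", "link", "set", f"{parent}.{vlan_id}", "up"], check=True)

if __name__ == "__main__":
    create_vlan_subinterface()  # requires root privileges
```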

For more information on this release, please see the release notes which can be found on the partner portal and customer portal.  If you have any questions or would like to see a demo of this new functionality, please give us a call!

HC3 Under The Hood: Virtual Machine Placement and Failover

A few posts ago we walked through the process of creating a new VM on HC3. One thing you may have noticed is that nowhere in the process did we specify a physical location for that VM to be created, and even when we powered it on, we again did not specify a physical location for it to run.  We just created it and turned it on.  Beyond being simple, that highlights some very important concepts that make HC3 uniquely scalable and flexible.

In the last post, we discussed how HC3 VM virtual hard disks are actually stored as files in the qcow2 format, and even how we can access those files for advanced operations. But once again, we didn’t discuss (or need to discuss) where the data for those files physically resided.

We will dig into this much further, but the real magic of HC3 is that all nodes share access to a single pool of storage: all the disks in all nodes of the cluster are pooled together as a common resource for storing and retrieving data. That happens automatically when the HC3 cluster is “initialized” and requires no additional management by the user, even as additional nodes are added to the resource pool down the line.

Because all HC3-capable nodes read and write data using the entire pool of storage, and all HC3 nodes have access to all the data, any virtual machine can be started on any node of the HC3 cluster based on the availability of the compute resources that VM requires. For example, if a new VM requires 16 GB of RAM, only certain nodes may have that much currently available, and HC3 makes this node selection automatically. HC3 also allows running VMs to be “live migrated” to other HC3 nodes without the VM being shut down and with no noticeable impact to the workload being run or the clients connecting to it. In addition, should an HC3 node fail, any VMs that were running on that node are quickly restarted on the remaining nodes, since every HC3 node has access to the same pool of redundant storage.  Once again, because the storage is available to all nodes in the HC3 system, the primary factor in determining where to fail over VMs is the availability of the compute resources required by each workload, and HC3 determines the optimal location automatically.
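To make the placement idea concrete, here is a minimal Python sketch of that kind of compute-based decision. It is not Scale’s actual scheduler logic; the node names and sizes are made up, and the only rule shown is “pick a node with enough free RAM,” which applies equally to initial power-on and to failover.

```python
# Minimal sketch of compute-based VM placement (NOT Scale's actual scheduler).
# Because storage is shared by every node, only compute availability matters here.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    total_ram_gb: int
    used_ram_gb: int

    @property
    def free_ram_gb(self) -> int:
        return self.total_ram_gb - self.used_ram_gb

def place_vm(nodes: list[Node], vm_ram_gb: int) -> Node:
    """Choose the node with the most free RAM that can still fit the VM."""
    candidates = [n for n in nodes if n.free_ram_gb >= vm_ram_gb]
    if not candidates:
        raise RuntimeError("No node has enough free RAM for this VM")
    return max(candidates, key=lambda n: n.free_ram_gb)

if __name__ == "__main__":
    cluster = [Node("node1", 64, 56), Node("node2", 64, 40), Node("node3", 64, 20)]
    print(place_vm(cluster, vm_ram_gb=16).name)  # -> node3
```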

For those who are more visual (like me), the following diagram may help picture this more clearly.

[Diagram: ICOS stack]

In the next post of this series, we will dive into the HC3 distributed storage layer and detail how data is stored across all the physical disks in the cluster, providing data redundancy should a disk drive fail and aggregating the I/O performance of all the drives in the system.

HC3 Under the Hood: Virtual Hard Disks and CD/DVD Images

In the last post of this series, we walked through the process of creating a VM on the HC3 platform.  In just a few clicks, you create a virtual machine container and allocate CPU, memory, and storage resources to that VM. When you start the VM and install your operating system and applications, that storage is presented as virtual hard disks that appear to your applications like virtual C:\ or D:\ drives.

[Screenshot: HC3 Virtual Disks in Windows Device Manager]
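HC3 creates and manages these virtual hard disk files for you; elsewhere in this series they are described as qcow2 files. Purely for context, here is a hedged sketch of how a qcow2 disk image of that kind can be created with standard QEMU tooling on any Linux host; the path and size are made-up examples.

```python
# Illustration only: HC3 creates and manages virtual disk files itself.
# This sketch shows how a qcow2 virtual disk of the kind described above
# can be created with standard QEMU tooling. The path and size are examples.
import subprocess

def create_qcow2(path: str, size: str = "50G") -> None:
    """Create a sparse qcow2 virtual disk image using qemu-img."""
    subprocess.run(["qemu-img", "create", "-f", "qcow2", path, size], check=True)

if __name__ == "__main__":
    create_qcow2("/tmp/example-vm-disk.qcow2", "50G")
```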

Continue reading

“Mikey likes it!” – Windows 2008 R2 VM in less than a minute

As a child of the ’80s, it’s hard not to smile thinking back on the classic Life cereal commercial “Mikey likes it!”  For those of you who don’t remember, the commercial starts with two boys leery of trying a new cereal that is supposedly good for them. Instead, they decide to use Mikey (a boy I assume to be their little brother) as a guinea pig.  “He won’t eat it. He hates everything.”  The boys stare in disbelief watching Mikey take a bite. And, as you can guess from the title of the commercial, Mikey likes it! I was reminded of this last week while on the road at a reseller event and thought others might enjoy sharing in the nostalgia. Continue reading

HC3 Under the Hood: Creating a VM

In the last post of this series, we talked about how multiple independent HC3 nodes are joined together into a cluster that is managed like a single system with a single pool of storage and compute resources, as well as built-in redundancy for high availability.

For a quick review, the end-user view of this process is as simple as racking the nodes, connecting them to power and your network, giving them IP addresses, and assigning them to join a cluster.

You might expect that any number of steps would come next: configuring individual disks into RAID arrays and spares; provisioning storage targets, sharing protocols, and security; physically and logically connecting each shared storage resource to each compute resource over multiple redundant paths; and finally configuring a hypervisor to use that raw storage to create the partitions and file systems that store individual data objects, such as virtual disks. Those would be the next steps with virtually ANY other system available.

Well, you don’t have to do any of that with HC3 because the storage layer is fully integrated with the compute hardware and virtualization software layers – all managed by the system.  Ah, management. So maybe now it’s time to install and configure a separate VM management server and management client software on your workstation to oversee all the virtualization hosts and software? Again, not with HC3 since the management layer is built-in and accessible simply by pointing your web browser to the cluster and logging in.

With HC3, you go right from configuring each node as a member of the HC3 system to pointing a web browser to HC3. And in a few clicks, you have created your first HC3 virtual machine.

This is definitely something best seen with your own eyes (or better yet, ask us for a demo and we will let YOU drive!). The HC3 system in this video already has a number of VMs running, but the process you will see here is exactly the same for the very first VM you create.

Creating a virtual machine is a simple process that is accomplished through the Scale Computing HC3 Manager web browser user interface. Selecting the ‘Create’ option from the ‘Virtualization’ tab allows the user to specify required and optional parameters for the virtual machine, including the following (a quick sketch of these inputs follows the list):

• Number of virtual CPU cores

• Amount of RAM

• Number and size of virtual disks to create

• A virtual DVD/CD ISO image to attach or upload for installing an operating system
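As a rough illustration of what that handful of inputs amounts to, the sketch below models a creation request as a small Python structure. The field names and values are hypothetical; HC3 collects these through its web UI, and this is not Scale’s API.

```python
# Hypothetical sketch of the parameters captured when creating a VM.
# These field names are illustrative only; HC3 exposes them through its
# web UI, not through this structure.
from dataclasses import dataclass, field

@dataclass
class NewVMRequest:
    name: str
    vcpu_cores: int
    ram_gb: int
    disk_sizes_gb: list[int] = field(default_factory=list)
    install_iso: str | None = None  # DVD/CD ISO image to attach or upload

request = NewVMRequest(
    name="win2008r2-test",
    vcpu_cores=2,
    ram_gb=4,
    disk_sizes_gb=[60],
    install_iso="en_windows_server_2008_r2.iso",
)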

Creating the virtual machine not only persists the VM configuration parameters that will later tell the hypervisor how to create the virtual machine container when it is started, but also physically creates files in the distributed storage pool that will contain the virtual hard disks presented to the VM once it is started. For maximum flexibility, we create those files inside a default storage pool container called VM, which presents a file/folder structure for organizing virtual machine files.  HC3 virtual machines access their virtual hard disk files directly, as if they were local disks, without the use of any SAN or NAS protocols, and they can access those virtual disks from any node of the HC3 cluster. That shared access is the key to capabilities like VM live migration and VM failover from node to node.
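Because each virtual disk is just a file in the shared pool rather than a LUN behind a SAN or NAS protocol, ordinary tooling can inspect it directly. The hedged sketch below uses qemu-img (part of standard QEMU packaging) against a made-up path; the directory layout shown, including the VM container, is illustrative only.

```python
# Illustration only: because an HC3 virtual disk is just a file in the shared
# storage pool, it can be examined with ordinary tools. The path below,
# including the "VM" container directory, is a made-up example.
import subprocess

def describe_virtual_disk(path: str) -> str:
    """Return qemu-img's report of the disk's format, virtual size, and usage."""
    result = subprocess.run(
        ["qemu-img", "info", path], check=True, capture_output=True, text=True
    )
    return result.stdout

if __name__ == "__main__":
    print(describe_virtual_disk("/mnt/pool/VM/win2008r2-test/disk0.qcow2"))
```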

In the next post, we will dig into how HC3 virtual disk files are actually stored, as well as how you can optionally use the external storage protocol capabilities of HC3 to access and browse HC3 storage pools for VM and ISO media images from remote machines.

What is Hyperconvergence?

Hyperconvergence is a term that sort of crept up on the market and has since stuck. It’s used to describe products like our HC3.  But what does hyperconvergence actually mean?

Active blogger and technologist Stevie Chambers wrote a well-thought-out article in which he defined hyperconvergence as an extension of the overall convergence trend, collapsing the datacenter into an appliance form factor. This is certainly true of the solutions that are available today. However, I believe he missed a key point (perhaps intentionally, as Stevie was in the CTO group at VCE when that blog was written). Continue reading

Under The Hood: HC³ In Action – Cluster Formation

Previous posts in this series have discussed the ease of use and high availability design goals of the HC³ platform, as well as the hardware and high-level software architecture. Now, let’s roll up our sleeves and walk through how ICOS (Intelligent Clustered Operating System) takes a set of independent compute nodes with independent storage devices and aggregates them into a single pool of compute and storage resources that is managed as a single, redundant, highly available system.

Once the Scale HC³ cluster nodes are racked, cabled, and configured with physical network connectivity, cluster formation takes multiple nodes (currently three or more) and logically bonds them together to act as a single coordinated system, a process that completes in a matter of minutes. Continue reading
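As a toy illustration of what formation produces, the Python sketch below simply aggregates several nodes’ independent resources into one logical pool and enforces the three-node minimum. It is not ICOS code; the node sizes are invented.

```python
# Toy illustration of what cluster formation produces (NOT ICOS itself):
# several independent nodes presented as one logical pool of resources.
from dataclasses import dataclass

@dataclass
class NodeResources:
    cpu_cores: int
    ram_gb: int
    disk_tb: float

def form_cluster(nodes: list[NodeResources]) -> NodeResources:
    """Aggregate node resources into a single logical pool (minimum 3 nodes)."""
    if len(nodes) < 3:
        raise ValueError("Cluster formation currently requires 3 or more nodes")
    return NodeResources(
        cpu_cores=sum(n.cpu_cores for n in nodes),
        ram_gb=sum(n.ram_gb for n in nodes),
        disk_tb=sum(n.disk_tb for n in nodes),
    )

if __name__ == "__main__":
    pool = form_cluster([NodeResources(8, 32, 4.0)] * 3)
    print(pool)  # NodeResources(cpu_cores=24, ram_gb=96, disk_tb=12.0)
```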

Under the Hood – HC3 Architectural Design Goals

The first two posts of this series discussed the high availability and ease of use requirements that went into the design of HC3.  With those overarching user needs as a backdrop, we will now transition into a more technical look under the hood at the hardware and software aspects of the HC3 system.

HC3 and the ICOS (Intelligent Clustered Operating System) that it runs on were designed to put intelligence and automation into the software layer, allowing the system to provide advanced functionality, flexibility, and scalability using low-cost hardware components, including the virtualization capabilities built into modern CPU architectures.  Rather than “scaling up” with larger, more expensive hardware that also requires equally expensive idle “standby capacity” to operate in the event of a failure, HC3 was designed to aggregate compute and storage resources from multiple systems into a single logical system with redundancy and availability designed in. Continue reading