
The Next-Generation Server Room

There was a recent article on Network Computing regarding the Next Generation Data Center that got me thinking about our SMB target customer and the next-generation server room. Both the enterprise and the SMB face the influx of traffic growth described in the article (clearly at different levels, but an influx nonetheless). So how will the SMB cope? How will an IT organization with limited time and money react? By focusing on simplicity in the infrastructure.

Elimination of Legacy Storage Protocols through Hypervisor Convergence

There is an ongoing trend to virtualize workloads in the SMB that traditionally meant adding a SAN or a NAS to provide shared storage for high availability.  With the introduction of Hypervisor Converged architectures through products like Scale’s HC3, that requirement no longer exists.  In this model, end users can take advantage of the benefits of high availability without the complexity that comes with legacy storage protocols like iSCSI or NFS.  Not only does this reduce the management overhead of the shared storage, it also simplifies the vendor support model dramatically.  In the event of an issue, a single vendor can be called for support with no ability to place the blame on another component in the stack.

Simplicity in Scaling

Moore’s Law continues to hold as better, faster, and cheaper equipment becomes available year after year. By implementing a scale-out architecture, IT organizations can take advantage of this by purchasing what they need today, knowing that they can buy equipment at tomorrow’s prices when the need to expand arises. The ability to mix and match hardware types in a hypervisor converged model also means that users can scale granularly to match the requirements of their workloads at the time (such as adding a storage-only node to an HC3 compute cluster to scale out only the storage resources).

The vCenter Paradox

A few days ago, while preparing to update an HC3 cluster, I had a combined epiphany and flashback. I was thinking about a previous life as a VMware administrator, and the memories of the long nights, complexities, and gray hair that came about whenever an ESX update was required. The epiphany about updating a Scale cluster was simple: I don’t have to worry about ensuring that my vCenter host is updated and able to cope with the changes as well. Let’s face it, it’s not impossible to manage an ESX cluster without vCenter, but without it, the principal beauty of ESX, or of any clustered virtualization solution, is greatly diminished.

If you think about it, any OS, whether it’s on your phone, your DVR, or your infrastructure servers, needs to be updated. Maybe it’s a security vulnerability, maybe it’s a new feature, or maybe it’s just a new UI, but updates are a fact of life in IT. Windows updates, prior to my tenure managing an ESX environment, were the bane of my existence. Part of coping with those updates is dealing with the likelihood that a reboot, restart, or power cycle is required; some type of outage is probably coming.

When you think about vCenter, the requirements for managing your manager of managers (yes, I think I said that right) become even more critical. Not just because you have to ensure that your virtualization host, should you be running vCenter as a VM, is updated in the proper order so your vCenter instance stays safe, but because you have to deal with everything else that vCenter depends on. Some type of database is required, along with a fairly specific set of patches or versions for that database. Beyond that, other basic necessities, including DNS, time synchronization, and Active Directory (if you’re using SSO), need to be cared for and properly handled during an update.

Just as much planning, care, and feeding goes into managing your vCenter Server(s) as into managing your ESX cluster. And you haven’t even touched the rest of the environment, the storage, or the network.

That’s just to update it or manage it; I didn’t even reach back into my memory to think about the installation…

How is this making my life easy again?

So, after shaking off the cold sweat of that nightmare, I was staring at a browser: a single interface with no complicated set of APIs and no backing database that I had to make sure were all up and running.

Scale’s UI: nothing to install or set up, no database to configure, no client software. It is built-in, and accessible anywhere.

As I prepared to update the cluster, I didn’t have to take a maintenance window or notify my users; these updates are rolling. Even if a node in the cluster needs a reboot, HyperCore redistributes the running VMs elsewhere in the cluster. Zero-downtime updates.

That’s making my life easy.

Who is Scale Computing? Storage Field Day 5 – SFD5

A couple of weeks ago, Scale Computing was honored to take part in Storage Field Day 5, hosting 12 delegates at our offices in San Mateo for a two-hour session covering both our company and our HC3 product. We always enjoy hearing the opinions of passionate technologists, and this event exceeded our expectations for engagement both in the room and online.

In the video below, our very own Jason Collier (Twitter: @bocanuts) walked through a brief overview of who we are as a company and why we focus on the SMB market with our HC3 product. If you have questions that weren’t covered in the video, feel free to reach out to us.

Follow up blog from Justin Warren: Scale Computing in the Goldilocks Zone

 

It’s Dangerous to Go Alone! Take This – Scale Computing Net Promoter Score

In preparing for the MES show this week, I was reviewing some of the presentation materials and happened to stop on our ScaleCare slide long enough to see the resemblance between our heart logo and the heart containers from The Legend of Zelda (my favorite game of 1987).

If you’re like me, then you also have the Zelda theme song stuck in your head about now.

I was a loyal gamer and loved Nintendo (the publisher of Zelda). If someone had called me on their rotary-dial phone in 1987 to ask, “How likely is it that you would recommend Nintendo to a friend?” on a scale of 0 to 10, I would have said 10 without hesitating. I was clearly a “promoter” of Nintendo, as were most users of the day.

Net Promoter Score

This one question, as simple as it sounds, is a fantastic measure of the loyalty that exists between a provider (Nintendo in the example above) and a consumer (me). It is the sole question of the Net Promoter Score (NPS), and it is something we track monthly and quarterly at Scale to gauge our progress internally as well as our rank among other companies in the industry. Customers respond on a 0 to 10 point rating system and are then categorized into groups based on their answer:

  • Promoters (score 9-10) are loyal enthusiasts who will keep buying and refer others, fueling growth.
  • Passives (score 7-8) are satisfied but unenthusiastic customers who are vulnerable to competitive offerings.
  • Detractors (score 0-6) are unhappy customers who can damage your brand and impede growth through negative word-of-mouth.

To calculate Scale’s NPS, take the percentage of customers who are Promoters and subtract the percentage who are Detractors. Simple, right?
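To make the arithmetic concrete, here is a minimal sketch in Python (my own illustration, not Scale’s actual survey tooling; the scores below are made up):

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 survey responses:
    the percentage of promoters (9-10) minus the percentage of
    detractors (0-6). Passives (7-8) count toward the total only."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 6 promoters, 2 passives, 2 detractors out of 10 responses:
# 60% promoters - 20% detractors = an NPS of 40
print(nps([10, 9, 9, 10, 9, 10, 8, 7, 5, 3]))
```

Note that the score can range from -100 (all detractors) to +100 (all promoters), which is why scores in the 70s are so rare.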

World Class NPS

Despite the simplicity, it is actually hard to score well. The average computer company scores somewhere around 29 and the average software company around 35. I’m very proud to say that Scale Computing is currently at 75! This puts us among other loved brands such as Amazon.com (76), Trader Joe’s (73), and Costco (71). Not bad company to keep!

Customer loyalty is important to us, and we will continue to strive for a world-class Net Promoter Score to reflect the world-class products and support we bring to market.

With the Zelda theme song still stuck in my head, I’ll remind you: “It’s dangerous to go alone! Take this.”

HC3 for existing VMware users

This past week I have been at the MidMarket CIO Forum in beautiful Ponte Vedra Beach, FL. It is a fun event with intimate, boardroom-style sessions that give vendors a chance to sit down with CIOs to discuss industry trends, their current problems, and our potential solutions. In each of our sessions there was at least one CIO saying something along the lines of, “I see the value of HC3, but I have already invested in VMware. How can you work in my environment?” This usually comes in a lamenting tone after we have described the added cost they likely paid for the licensing, implementation, hardware (SAN/NAS), and training associated with a traditional do-it-yourself virtualization deployment. (Side story: one potential customer told us his woes of sending a sysadmin off to a week-long VMware training, only to have him return and then leave for another job weeks later. Ouch!)

Since it came up so often, I thought a quick blog post was warranted in case there are others out there asking the same question.

VMware customers coming to HC3 for their Primary Infrastructure

Many customers come to us having used VMware in the past. Most have implemented VMware in the traditional “inverted pyramid of doom” style (to steal a Spiceworks-ism), with a handful of host servers connected down to shared storage through redundant switches. Often they come to us when it is time to refresh a SAN or NAS, or when looking to add a new host to their environment (which can push their VMware licensing cost up significantly as they jump from Essentials Plus or another three-host package to a more enterprise license). When we talk to potential customers in this situation, it is not uncommon to hear things like “For the price of replacing my SAN, I could have an entire HC3 cluster?” or “For the price of just the licensing, I can put in a new HC3 system?” There are several examples of this in our customer success stories, which I recommend reading through if you’re interested.

VMware customers purchasing HC3 as a Disaster Recovery Site

Customers who have already made a heavy investment in VMware for their primary site, but who want to take advantage of the simplicity and affordability of HC3, still have an option. Instead of purchasing and implementing the same VMware environment they have in place at their primary site, this group of users can implement an HC3 system alongside HC3 Availability to replicate data from their primary site to the HC3 system. In the event of a failure at the primary site, HC3 Availability will detect the failure and can automatically (or manually, if you’d rather) bring up those VMs on HC3. Here is a video of Dave Demlow walking through the HC3 Availability product, which demonstrates the failover process from VMware to HC3:

We have admittedly seen this approach act as a “Trojan horse,” where users begin with HC3 as a DR target but fall in love with the simplicity of adding new highly available VMs. At the next server/SAN refresh cycle, those customers often replace their primary site with HC3 as well.

If you have any questions on making the jump from VMware to HC3, please feel free to reach out to us for more information.

Scale’s HC3 through the lens of a VMware Administrator with David Davis

Recently, I sat down with @davidmdavis of www.virtualizationsoftware.com to discuss Scale’s HC3 and the general trend of Hypervisor Convergence.  David kept the perspective of a VMware administrator coming to HC3 for the first time, which allowed me to highlight the simplicity of HC3 compared to a traditional VMware virtualization deployment.  Hope you enjoy!

HC3 vs. VMware vs. Hyper-V for SMBs: Part 7 – Rules to Live By

To summarize and conclude this series, I want to offer some “rules” or at least “guidelines” that I believe small and mid-sized businesses should consider when planning for their virtual infrastructure.

First, ensure your design provides data and compute redundancy across multiple “boxes” wherever possible. Second, you can provide for both with a single “mirror” copy of your data simply by placing your “RAID” mirror on a remote server. Both of these seem rather obvious, but it’s amazing how often these simple principles are ignored: often by putting all the storage inside a single shared SAN “box” that becomes a single point of failure, or by creating costly over-replication, using local RAID with its storage overhead to protect against a disk failure while also creating additional replicas on different hosts to protect against server failures, often resulting in four or more copies of all your data.
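To put rough numbers on that over-replication, here is a quick back-of-the-envelope sketch (my own illustration, not any vendor’s sizing tool):

```python
def usable_fraction(local_copies, cross_host_replicas):
    """Fraction of raw disk capacity left for unique data after
    local RAID mirroring and replication across hosts."""
    return 1 / (local_copies * cross_host_replicas)

# Local RAID-1 (2 copies) plus a second full replica on another host:
# 4 total copies, so only 25% of your raw capacity holds unique data.
print(usable_fraction(2, 2))

# A single mirror placed on a remote server instead: 2 total copies,
# 50% usable, and you still survive either a disk or a whole-box failure.
print(usable_fraction(1, 2))
```

The point of the second case is the one made above: one mirror copy, placed on a different box, covers both failure modes at half the storage overhead.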

Next, be very cautious purchasing “de-featured” or “entry-bundled” software. Many vendors of software-only solutions offer multiple “editions” and bundles that may seem attractively priced until you need to expand, add a new server, or activate some new feature they figured you would ultimately need. Seemingly “expected” things can instantly double your licensing costs, such as adding one additional server to a cluster and exceeding a “bundle,” or wanting to use one “enterprise feature” (perhaps storage live migration, after you’ve maxed out a non-scale-out SAN controller and need to move data between multiple SANs).

This may be controversial, but I suggest you save admin training days and dollars for applications that create real business value for your company, not basic infrastructure and IT plumbing. I believe that will benefit you as an IT professional. Along that line, the typical training required, as well as the install and configuration time of a solution, is a good proxy for its complexity and the ongoing maintenance cost you should expect: something that takes 10x longer to set up and configure is going to take at least 10x more time to maintain, patch, and troubleshoot. Quite honestly, maybe more than 10x, because installation is a lot more “standard” than keeping a complex, changing, interdependent environment up and running.

Here is one that few will argue with, yet many ignore: don’t buy today what you think you will need in three years. I understand that many architectures don’t lend themselves to expansion, or that expansion may require using “today’s” CPU model or architecture for compatibility, raising concerns about future availability and cost. The best strategy is to select an architecture that avoids that problem (hint, hint), and even then buy just enough to cover your ability to predict your needs. Three years is way too much for anyone; six months to a year may be reasonable for most and fits normal purchasing and budgeting cycles.

Lastly, simplicity is good. Dealing with fewer vendors that offer standardized, modular configurations is far better than assembling the very best but totally customized mousetrap you can. Not only should you have peace of mind knowing that Scale gives you one place to call, but when you do call, we know your exact configuration. We test it with every software change we make and provide you updates for the entire software stack in one step.

I hope this series has been beneficial and would love to hear any additional “rules” you would like to suggest in the comments.

 

HC3 vs. VMware vs. Hyper-V for SMBs: Part 6 – Isn’t Microsoft THE SMB Solution?

In the last post of this series we discussed the various software components to purchase, install, manage, and patch in a Hyper-V infrastructure. Now I want to discuss the all-important data storage options.

As is the case with VMware, automatic high-availability failover among Hyper-V hosts requires some form of shared storage system and storage network, most commonly an iSCSI SAN with redundant-controller hardware RAID, multi-path iSCSI networking, and so on. Until recently, shared block storage (SAN) was the only option, but Microsoft now supports shared file storage when using the SMB 3.0 (Server Message Block) protocol available in its most recent server OSes. To use this, you would essentially have to install a pair of recent Windows servers with SMB 3.0 support, in a cluster of their own, sitting in front of a SAN or shared SAS storage, so this is definitely NOT about simplification.

With respect to the complexity of external shared storage for Hyper-V, everything I’ve said about VMware in previous posts of this series applies here as well: a separate console to manage and monitor storage, and multi-step storage provisioning that starts at the array with creating targets and LUNs, then connecting to those on every host, initializing disks, formatting file systems, and so on. And just as with VMware, you are introducing multiple independent components with different software and firmware versions that have to be certified together for interoperability (see the Windows Server Catalog); patches have to be coordinated carefully, and of course when there are problems you may need to involve not only Microsoft but the server and storage vendors as well.

VMware announces VSAN: Better than a VSA but still a generation behind HC3 with SCRIBE


The time has finally come! Well, the time has actually been here for a while. VMware’s VSAN was just released out of beta to much fanfare over its “revolutionary” approach of hypervisor convergence. In reality, the “revolution” began back in August of 2012 when Scale Computing launched HC3, and we’re excited to see our approach validated. Like HC3, VMware’s VSAN uses the local storage attached to each host to create a shared storage pool presented to all participating nodes in the system. This approach sets VMware’s VSAN and Scale’s HC3 apart from the other competitors in the convergence space, who rely on a VSA model that adds an extra layer of complexity to the equation. That’s about where the similarity ends, so let’s dive into where we differ.

The Purchasing and Setup Process: HC3 is easier to buy and implement

With VMware’s VSAN, customers are required to select their own servers. Each server needs a minimum of one internal solid-state disk (suggested to be at least 10% of the size of the HDDs) and one hard drive from VMware’s limited hardware compatibility list. With the hardware in hand, it is left to the user to install the ESX hypervisor, install vCenter (and a SQL database), activate the appropriate features, create a vCenter cluster, and activate VSAN as a datastore. That’s a lot to ask of an IT generalist who is busy trying to put out fires around the office.

HC3 is sold as a fully integrated system. All software licenses are included, the compute and storage hardware are balanced, and the system is ready to use out of the box and designed for scale-out expansion. HC3 customers can go from unboxing to running highly available virtual machines in a matter of hours, with no prior training on the hypervisor or a SAN/NAS!

The Current Virtualization Channel: Breaking Bad or The Sopranos?

I know, it seems like I’m shamelessly pandering to the demise of one of the great shows on TV in any era, the recently completed Breaking Bad on AMC. I don’t know why I come to these things late, but I do, and then I catch up like a maniac, watching three or four episodes a night to get up to date. I’m doing exactly that with Breaking Bad right now, and I’m hooked.

OK, so which one is more like the current virtualization channel: the Jersey mafia guys or the crystal meth cookers and distributors? Does VMware have its customers hooked on its vTax, or is it more like the mob loan sharks coming back for their interest payments, or “vig,” keeping their “customers” forever in their debt until they have to turn their businesses over?
