All posts by David Demlow

VP Product Management @ScaleComputing, former CTO @ Double-Take. Storage, replication, recovery, cloud, virtualization. Twitter: @daviddemlow

Hypervisor Converged – Definitions Diverged

I read a recent blog post from VMware’s Chuck Hollis that attempted to define and compare the terms Converged Infrastructure, Hyper-Converged Infrastructure, and Hypervisor-Converged Infrastructure.

As is usually the case, I thought Chuck did a great job cutting through vendor marketing to the key points and differences, beginning with his quote that “Simple is the new killer app in enterprise IT infrastructure.” And we would echo that it is even more true in the mid-sized and smaller IT shops that we focus on.

But eventually vendor biases do surface, especially in an organization like VMware, with its many kinds of partners and a parent company with a hardware-centric legacy. That’s not a bad thing, and I’m sure I will show my vendor bias as well. At Scale Computing we focus exclusively on small to mid-size IT shops and are therefore biased to view the world of technology through their eyes.

As background, we would consider our HC3 products to be “hypervisor-converged infrastructure,” as our storage functionality is built into the hypervisor and both run in a single OS kernel. I would consider hypervisor-converged to be a specific case, or subset, of hyper-converged, which simply means you have removed the external storage device and moved the physical disks into the compute hosts that also run the hypervisor code. (How the hypervisor gets to those disks is the difference in manageability, performance, and complexity – see Craig Theriac’s blog series, The Infrastructure Convergence Continuum.) Continue reading
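To make that distinction concrete, here is a purely illustrative Python sketch – not taken from Chuck’s post or any vendor’s implementation – that simply lists the layers a guest I/O request might traverse when storage runs inside the hypervisor kernel versus being routed through a virtual storage appliance (VSA) VM. The layer names are my assumptions for illustration only.

```python
# Illustrative only: the layer names below are assumptions, not any vendor's
# actual I/O stack. The point is the relative number of hops, not the labels.

HYPERVISOR_EMBEDDED_PATH = [
    "guest VM virtual disk",
    "hypervisor block layer (storage runs in the same kernel)",
    "local physical disk",
]

VSA_BASED_PATH = [
    "guest VM virtual disk",
    "hypervisor block layer",
    "virtual network (iSCSI/NFS protocol)",
    "virtual storage appliance (VSA) guest VM",
    "hypervisor block layer (second pass)",
    "local physical disk",
]

def describe(name, path):
    print(f"{name}: {len(path)} hops")
    for hop in path:
        print(f"  -> {hop}")

if __name__ == "__main__":
    describe("Hypervisor-embedded storage", HYPERVISOR_EMBEDDED_PATH)
    describe("VSA-based hyper-convergence", VSA_BASED_PATH)
```

Fewer hops in the data path is exactly why the manageability and performance discussion in Craig’s series matters.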

HC3 vs VMware vs. Hyper-V for SMBs: part 7 – Rules to Live By

To summarize and conclude this series, I want to offer some “rules” or at least “guidelines” that I believe small and mid-sized businesses should consider when planning for their virtual infrastructure.

First, ensure your design provides data and compute redundancy across multiple “boxes” wherever possible. Second, you can provide for both with a single “mirror” copy of your data simply by putting your “RAID” mirror on a remote server. Both seem rather obvious, but it’s amazing how often these simple principles are ignored – often by putting all the storage inside a single shared SAN “box” that becomes a single point of failure, or by creating costly over-replication: using local RAID (with its storage overhead) to protect against a disk failure while also creating additional replicas on different hosts to protect against server failures, often resulting in four or more copies of all your data.
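As a rough, hypothetical illustration of the over-replication point, the short Python sketch below compares usable capacity when a single mirror copy is kept on a remote node versus local RAID-10 plus cross-node replicas. The raw capacity and copy counts are made-up numbers, not a sizing guide.

```python
# Hypothetical numbers for illustration only -- not a sizing recommendation.
RAW_TB = 48.0  # e.g. 4 nodes x 6 disks x 2 TB

def usable_capacity(raw_tb, copies):
    """Usable capacity when every block is stored `copies` times."""
    return raw_tb / copies

# Case 1: a single "RAID" mirror copy kept on a remote node (2 total copies)
# protects against both a disk failure and a whole-node failure.
print("Remote mirror only:    %.1f TB usable" % usable_capacity(RAW_TB, 2))

# Case 2: local RAID-10 on each node (2 copies) plus a full replica on
# another host (x2 again) -> 4 total copies of every block.
print("Local RAID + replicas: %.1f TB usable" % usable_capacity(RAW_TB, 4))
```

Same raw disks, half the usable space – that is the cost of protecting against disk and server failures with two separate mechanisms instead of one.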

Next, be very cautious purchasing “de-featured” or “entry-bundled” software. Many vendors of software-only solutions offer multiple “editions” and bundles that may seem attractively priced until you need to expand, add a new server, or activate some new feature they figured you would ultimately need. Seemingly “expected” things can instantly double your licensing costs – such as adding one additional server to a cluster and exceeding a “bundle” limit, or wanting to use a single “enterprise feature” (perhaps storage live migration after you’ve maxed out a non-scale-out SAN controller and need to move data between multiple SANs).
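To show how quickly an “entry bundle” can flip from cheap to expensive, here is a toy Python calculation. Every price and bundle limit below is a made-up placeholder, not any vendor’s actual list price.

```python
# All prices and limits are hypothetical placeholders, not real list prices.
BUNDLE_PRICE = 5000          # flat price, covers up to 3 hosts, limited features
PER_HOST_STANDARD = 4000     # per-host price once you outgrow the bundle
MGMT_SERVER_STANDARD = 6000  # management server licensed separately at that tier

def license_cost(hosts):
    if hosts <= 3:
        return BUNDLE_PRICE
    # Exceeding the bundle: relicense every host at the standard tier
    # and add the separately licensed management server.
    return hosts * PER_HOST_STANDARD + MGMT_SERVER_STANDARD

for hosts in (3, 4):
    print(f"{hosts} hosts: ${license_cost(hosts):,}")
# In this toy example, adding one host takes licensing from $5,000 to $22,000.
```

The exact numbers will differ by vendor and edition; the shape of the curve – flat, then a cliff – is the thing to watch for.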

This may be controversial, but I suggest you save admin training days and dollars for applications that create real business value for your company – not basic infrastructure and IT plumbing. I believe that will benefit you as an IT professional. Along that line, the typical training required and the install/configuration time of a solution are a good indication of the complexity and cost of ongoing maintenance you should expect – something that takes 10x longer to set up and configure is going to take at least 10x more time to maintain, patch, and troubleshoot. Quite honestly, probably more than 10x, because installation is a lot more “standard” than keeping a complex, changing, interdependent environment up and running.

Here is one that few will argue with yet many ignore – don’t buy today what you think you will need in three years. I understand that many architectures don’t lend themselves to expansion, or that expansion may require using “today’s” CPU model or architecture for compatibility, raising concerns about future availability and costs. The best strategy is to select an architecture that avoids that problem (hint, hint) and, even then, buy just enough to cover your ability to predict your needs. Three years is way too much for anyone; six months to a year may be reasonable for most and fits normal purchasing and budgeting cycles.
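Here is a quick back-of-the-envelope Python sketch of why buying three years of capacity up front tends to cost more than scaling out as demand materializes, assuming an architecture that allows incremental expansion. The node price, growth numbers, and annual price decline are all hypothetical.

```python
# Hypothetical numbers for illustration only.
NODE_PRICE = 10000            # cost of one compute/storage node today
NODES_NEEDED_NOW = 3
NODES_NEEDED_IN_3_YEARS = 8
ANNUAL_PRICE_DECLINE = 0.15   # assume equivalent hardware gets ~15% cheaper per year

# Strategy A: buy the full 3-year projection today.
buy_ahead = NODES_NEEDED_IN_3_YEARS * NODE_PRICE

# Strategy B: buy what you need now, then add nodes as demand appears,
# paying the (declining) price in the year each node is actually added.
added_per_year = {1: 2, 2: 2, 3: 1}   # hypothetical growth from 3 to 8 nodes
scale_out = NODES_NEEDED_NOW * NODE_PRICE
for year, count in added_per_year.items():
    scale_out += count * NODE_PRICE * (1 - ANNUAL_PRICE_DECLINE) ** year

print(f"Buy 3 years ahead today: ${buy_ahead:,.0f}")
print(f"Scale out as needed:     ${scale_out:,.0f}")
# The scale-out path also avoids powering, cooling, and maintaining idle
# capacity, and later purchases benefit from newer hardware.
```

And if your growth projection turns out to be wrong in either direction, the incremental approach is the one that forgives you.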

Lastly – simplicity is good. Dealing with fewer vendors that offer standardized, modular configurations is far better than assembling the very best but totally customized mousetrap you can. Not only should you have peace of mind knowing that Scale gives you one place to call, but when you do call we know your exact configuration. We test it with every software change we make and provide updates for the entire software stack in one step.

I hope this series has been beneficial and would love to hear any additional “rules” you would like to suggest in the comments.

 

HC3 vs VMware vs. Hyper-V for SMBs: Part 6 – Isn’t Microsoft THE SMB solution?

In the last post of this series we discussed the various software components to purchase, install, manage, and patch in a Hyper-V infrastructure. Now I want to discuss the all-important data storage options.

As is the case with VMware, automatic high-availability failover among Hyper-V hosts requires some form of shared storage system and storage network – most commonly an iSCSI SAN with redundant controllers, hardware RAID, multi-path iSCSI networking, and so on. Until recently, shared block storage (SAN) was the only option, but Microsoft now supports shared file storage using the SMB 3.0 (Server Message Block) protocol available in its most recent server OSes. To use this you would essentially have to install a pair of recent Windows servers with SMB 3.0 support, in a cluster of their own, sitting in front of a SAN or shared SAS storage – so this is definitely NOT about simplification. With respect to the complexity of external shared storage for Hyper-V, everything I’ve said about VMware in previous posts of this series applies as well: a separate console to manage and monitor storage, and multi-step storage provisioning starting at the array – creating targets and LUNs, connecting to those on every host, initializing disks, formatting file systems, and so on. And just like with VMware, you are introducing multiple independent components with different software and firmware versions that have to be certified together for interoperability (see the Windows Server Catalog); patches have to be coordinated carefully, and of course when there are problems you may need to involve not only Microsoft but the server and storage vendors as well. Continue reading
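As an aside on the interoperability point, the sketch below multiplies out how many component/version combinations a small shared-storage Hyper-V deployment could in principle need to keep certified together. The component names and version counts are assumptions chosen for illustration, not a real support matrix.

```python
# Illustrative only: component names and version counts are assumptions.
# Each entry is (component, number of software/firmware versions in play).
components = [
    ("Windows Server / Hyper-V patch level", 3),
    ("Failover Clustering hotfix level", 2),
    ("Server BIOS / firmware", 2),
    ("HBA or NIC driver + firmware", 3),
    ("iSCSI SAN controller firmware", 2),
    ("MPIO / DSM version", 2),
]

combinations = 1
for _name, versions in components:
    combinations *= versions

print(f"Version combinations to track: {combinations}")
# 3 * 2 * 2 * 3 * 2 * 2 = 144 combinations across six moving parts --
# compared with a single vendor-tested software stack updated in one step.
```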

HC3 vs VMware vs. Hyper-V for SMBs: Part 5 – Same Challenges, Different Name

This series will now turn toward Microsoft and its SMB virtualization solution, which is composed of the Hyper-V hypervisor used in conjunction with Windows Failover Clustering, plus System Center Virtual Machine Manager as a “manager of managers” over the top of it all (yes, that is three different management consoles/admin tools already, and we haven’t even gotten to shared storage).

But let’s back up a little bit first. My career as a technologist began back in the Novell heyday, but I was early to join the Windows NT bandwagon and focused on Windows server technologies throughout the majority of my career. I always explored and stayed up on technology alternatives, but most of the time concluded that even if Microsoft didn’t have the “best” technology or the most features, it was, or quickly would become, “good enough.” Having worked for a number of software developers, I found virtualization a godsend for testing and customer demonstrations, so I used very early versions of VMware Workstation, Microsoft Virtual PC, you name it. Needless to say, I followed Microsoft’s path into server virtualization with great interest and have continued to watch their battle with VMware and open-source virtualization technologies since the early Hyper-V betas. Quite honestly, for quite a while I wanted Hyper-V to work, so I put up with no live migration at first, put up with one VM per drive letter (prior to the introduction of Cluster Shared Volumes, which introduce many oddities of their own), and put up with configuring Failover Clustering completely outside the Hyper-V Manager UI. I figured eventually Microsoft would “get it right” and build a highly available yet easy-to-manage virtualization solution that the typical SMB jack-of-all-trades IT administrator could manage without weeks of training – one that “just works.” Maybe they got distracted by Azure and “the cloud,” or perhaps by Apple or Google, but if you compare Microsoft virtualization to HC3, they have a very long way to go. Continue reading

HC3 vs VMware vs. Hyper-V for SMBs: Part 4 – A day in the life of a VMware Administrator

At Scale, we have already done a lot to highlight the painfully obvious differences in CAPEX and initial deployment costs between a fully integrated HC3 system and a VMware + servers + SAN infrastructure. In this post we want to begin a discussion about what happens after that: how does hyperconvergence help companies and their IT administrators provide better IT services by spending less time babysitting their infrastructure?

So let’s begin…

HC3 was designed to be self-managing and self-correcting, providing simple “at a glance” confirmation of the system’s operational state and overall resource utilization, and clear identification of any items that require administrator attention. HC3 constantly checks the operational state of all hardware and software components of the system, with the intelligence built in to take automated corrective action and alert the administrator only when attention is required. Continue reading
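The following is a purely conceptual Python sketch of the “check, correct automatically, alert only when needed” pattern described above. It is not Scale’s implementation; the component names, findings, and remediation steps are invented for illustration.

```python
# Conceptual sketch only -- not HC3 code. Checks and fixes are invented.

def run_health_checks():
    """Pretend to poll hardware and software components and return findings."""
    return [
        {"component": "disk in node 2", "healthy": False,
         "auto_fix": "re-mirror its data onto the remaining disks"},
        {"component": "backplane network", "healthy": True, "auto_fix": None},
        {"component": "node 3 power supply", "healthy": False, "auto_fix": None},
    ]

def monitor_once():
    for finding in run_health_checks():
        if finding["healthy"]:
            continue
        if finding["auto_fix"]:
            print(f"auto-correcting {finding['component']}: {finding['auto_fix']}")
        else:
            # Only conditions the system cannot fix itself reach the admin.
            print(f"ALERT admin: {finding['component']} needs attention")

if __name__ == "__main__":
    monitor_once()   # in practice this would run continuously on a schedule
```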

HC3 Move Demo – Easy P2V and V2V migration powered by Double-Take

Many customers treat the move to HC3 as an opportunity to upgrade antiquated operating systems and applications to the latest and greatest. For those workloads that do not need to be refreshed, HC3 Move, powered by Double-Take, can be used to migrate your existing environment onto HC3 with minimal downtime or effort.

The following time-compressed video shows the entire migration process, moving a Windows-based web/application server from a VMware vSphere VM onto HC3.

HC3 vs VMware vs. Hyper-V for SMBs: Part 3 – VMware and the VTAX

VMware is clearly the 800-lb gorilla in the virtualization hypervisor space, given its early entry into the market and its foothold in large enterprises. However, since leveraging most of the “advanced” capabilities that VMware provides, such as live migration (vMotion) and VM HA failover, has required users to purchase and manage external shared storage systems (SAN, NAS, or special virtual storage appliance VMs, which I’ll discuss later), the cost and complexity of a typical VMware-based resilient infrastructure has been well beyond what the typical mid-market or SMB IT shop could or should spend. Factoring in the additional administrative requirements and costs, such as training and certification, brings the TCO (total cost of ownership) even higher.

VMware has attempted to compensate by creating more affordable SMB “starter bundles” with reduced-price licensing to entice end users with just a few servers to get started. However, when those customers later expand their environment by adding compute capacity, or when they want additional features, they face a large licensing cost increase as they “graduate” to VMware’s standard or enterprise pricing structures, which can double or triple the licensing cost for those initial servers.

But let’s start by reviewing some of the software components you need to license from VMware. Continue reading

HC3 vs VMware vs. Hyper-V for SMBs : Part 2

I’ve been hanging out on Spiceworks a lot lately, which is a great online community for SMB and mid-market IT personnel to share ideas, ask questions, and gather opinions. In fact, the idea for this blog series came in large part from seeing the common, repeated questions, concerns, and discussions happening on Spiceworks as IT organizations began researching virtualization and/or private cloud.

One common theme I’ve noticed is the widely held notion that the only next “jump” from a simple environment using single-server virtualization is to a complex cluster of servers for redundancy plus shared storage (SAN or NAS). Then, for that storage not to become a new single point of failure, there has to be not only storage redundancy (RAID) but also storage path redundancy (MPIO, NIC bonding, etc.) and controller redundancy – and in some cases even multiple storage arrays with synchronous mirroring to keep the SAN from becoming a single point of failure for the entire infrastructure… a complex and expensive vicious circle. Continue reading
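A rough availability calculation, with made-up per-component numbers, illustrates the vicious circle: no matter how many redundant hosts you add, a single shared SAN caps the availability of the whole stack. The figures below are hypothetical, not measured values for any product.

```python
# Hypothetical availability figures for illustration only.
HOST_AVAILABILITY = 0.99    # each clustered host is up 99% of the time
SAN_AVAILABILITY = 0.995    # the one shared SAN "box"

def hosts_available(n_hosts, host_a=HOST_AVAILABILITY):
    """Probability that at least one host is up (redundant hosts in parallel)."""
    return 1 - (1 - host_a) ** n_hosts

for n in (2, 3, 4):
    hosts = hosts_available(n)
    # Every VM still depends on the single SAN, so it multiplies in series.
    total = hosts * SAN_AVAILABILITY
    print(f"{n} hosts: hosts alone {hosts:.5f}, whole stack {total:.5f}")
# Adding hosts pushes the host term toward 1.0, but the whole stack never
# exceeds the SAN's own 0.995 -- hence the push for mirrored arrays, MPIO,
# redundant controllers, and so on.
```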

HC3 vs VMware vs. Hyper-V for SMBs : Part 1

There are plenty of articles, reviews, blogs and lab reports available that provide various comparisons of different software, hardware and architectural options for leveraging the benefits of server and storage virtualization.

I’m going to try to tackle the subject through the eyes of a “typical” IT director or manager at a small to mid-size business (SMB) … the kind of user we see a lot of here at Scale Computing, particularly since the launch of HC3, which brings high-availability virtualization and storage technologies together into a single, easy-to-manage system. Continue reading

VDI and HC3 – Virtual … Desktop … Infrastructure part 3 of 3

We’ve looked at the two major options for delivering server-based desktop virtualization: VM-based VDI and session virtualization, a.k.a. terminal services. One aspect that is inherently different between the two is how applications are installed onto the OS and how they are presented to users.

In a VM-based desktop, because a single user owns that VM, you can treat it just like a physical desktop if you choose. In most small companies I’ve worked for, that means I am a machine administrator, so if I want to install some application that is important to doing my job (or something personal, like a game), I can buy it and install it myself and no one else is impacted. In a terminal services environment, multiple users not only share the same operating system, but their sessions can typically be started on any terminal server in a “farm” of multiple servers. Continue reading
