
Technology Becomes Obsolete. Saving Does Not.

The list of technological innovations in IT that have already passed into obsolescence is long. You might recall some not-so-ancient technologies like the floppy disk, dot matrix printers, ZIP drives, the FAT file system, and cream-colored computer enclosures. Undoubtedly these are still being used somewhere by someone, but I hope not in your data center. No, the rest of us have moved on. Technologies always fade and get replaced by newer, better technologies. Saving money, on the other hand, never goes out of style.

You see, when IT pros like you buy IT assets, you have to assume that the technology you are buying is going to be replaced in some number of years. Not because it no longer operates, but because it is no longer manufactured or supported and has been superseded by newer, better, faster gear. This is IT. We accept this.

The real question here is: are you spending too much money on the gear you are buying now, when it is going to be replaced in a few years anyway? For decades, the answer has mostly been yes, and there are two reasons why: over-provisioning and complexity.

Over-Provisioning

When you buy an IT solution, you know you are going to keep that solution for a minimum of 3-5 years before it gets replaced. Therefore you must attempt to forecast your needs 3-5 years out. This is practically impossible, but you try. Rather than risk under-provisioning, you over-provision to avoid having to upgrade or scale out later. The process of acquiring new gear is difficult: budget approval, research, more guesstimating of future needs, implementation, and the risk of unforeseen disasters.

But why is scaling out so difficult? Traditional IT architectures involve multiple vendors providing different components like servers, storage, hypervisors, disaster recovery, and more. There are many moving parts that might break when a new component is added to the mix. Software licensing may need to be upgraded to a higher, more expensive tier as the infrastructure grows. You don’t want to worry about running out of CPU, RAM, storage, or any other compute resource, because you don’t want to deal with upgrading or scaling out what you already have. It is too complex.

Complexity

OK, I just explained how IT infrastructure can be complex with so many vendors and components, and how it can be downright fragile when it comes to introducing change. Complexity bites you on operational expenses as well: it requires more expertise and more training, and tasks become more time-consuming. And what about feature complexity? Are you spending too much on features that you don’t need? I know I am guilty of this in a lot of ways.

I own an iPhone. It has all kinds of features I don’t use. For example, I don’t use Bluetooth; I just don’t use external devices with my phone very often. But the feature is there and I paid for it. There are a bunch of apps and features on my phone I will likely never use, but all of them contributed to the price I paid for the phone, whether I use them or not.

I also own quite a few tools at home that I may have only used once. Was it worth it to buy them and then hardly ever use them? There is the old saying, “It is better to have it and not need it than to need it and not have it.” There is some truth to that and maybe that is why I still own those tools.  But unlike IT technologies, these tools may well be useful 10, 20, even 30 years from now.

How much do you figure you could be overspending on features and functionality you may never use in some of the IT solutions you buy? Just because a solution is loaded with features and functionality does not necessarily mean it is the best solution for you. It probably just means it costs more. Maybe it also comes with a brand name that costs more. Are you really getting the right solution?

There is a Better Way

So you over-provision. You likely spend a lot to have resources and functionality that you may or may not ever use. Of course you need some overhead for normal operations, but you never really know how much you will need. Or you accidentally under-provision and end up spending too much upgrading and scaling out. Stop! There are better options.

If you haven’t noticed lately, traditional capex spending on IT infrastructure is under scrutiny and opex is becoming more favorable. Pay-as-you-go models like cloud computing are gaining traction as a way to prevent over-provisioning expense. Still, cloud can be extremely costly, especially if costs are not managed well. When you have nearly unlimited resources in an elastic cloud, it is easy to provision resources you don’t need and end up paying for them when no one is paying attention.

Hyperconverged Infrastructure (HCI) is another option. Designed to be both simple to operate and to scale out, HCI lets you use just the resources you need and gives you the ability to scale out quickly and easily when needed. HCI combines servers, storage, virtualization, and even disaster recovery into a single appliance. Those appliances can then be clustered to pool resources, provide high availability, and become easy to scale out.

HC3, from Scale Computing, is unique among HCI solutions in allowing appliances to be mixed and matched within the same cluster. This gives you great flexibility to add just the resources you need, whether that is more compute power (CPU and RAM) or more storage. It also helps future-proof your infrastructure by letting you add newer, bigger, faster appliances to a cluster while retiring or repurposing older appliances. The result is an IT infrastructure that can be scaled easily and seamlessly, without having to rip and replace to meet future needs.

The bottom line is that you can save a lot of money by avoiding complexity and over-provisioning. Why waste valuable revenue on a total cost of ownership (TCO) that is too high? At Scale Computing, we can help you analyze your TCO and figure out if there is a better way for you to be operating your IT infrastructure to lower costs. Let us know if you are ready to start saving. www.scalecomputing.com

Virtualization Made Easy

Twenty years ago, everything in IT was hard. Installing a server was hard. Setting up a database was hard. Networking machines was hard. Companies that wanted computers to do pretty much anything beyond basic printing needed a lot of expertise, time and effort and, let’s be realistic, even printing wasn’t all that easy in a lot of cases.

Today, many things are different. Networking is very easy. Installing a server is very easy. Setting up a database, easy. The basics are really not that hard.

Your virtualization should be easy today, too. We are well past the point where virtualization should be a challenge for small businesses to set up and use. When a business spends time and resources learning the details of hypervisors, examining different storage systems, talking to many vendors, and researching tools and software, it becomes a very expensive exercise, and a highly error-prone one, because most companies lack the experience and resources and will only go through this process once to make a single, long-term decision. The cost of making the purchasing decision alone can be extremely high.

But things don’t need to be like this today. Oh sure, in a very large company with extremely specialized needs, these decisions make sense. In a company like that, we would expect a team of virtualization and storage experts who research and work with many different products and vendors full time; they are not making one-time decisions but are making them frequently. For them, this approach makes sense because it allows them to fine-tune their purchasing decisions for different use cases.

For the rest of us in the smaller business market, whether a very small company of just a few people or a relatively large one with many servers and hundreds or even thousands of employees, there really is no value in such a complicated purchasing process. The cost of that decision making is high, and the risk of making mistakes is high.

This is where hyperconvergence comes in. Hyperconvergence takes many elements that are often challenging for the non-enterprise IT market, such as hypervisor selection, storage design, and high availability, and rolls them into a single, supported entity with the big, hard decisions not just already made, but already implemented.

Hyperconvergence removes the guesswork and the expensive decision-making from IT and instead makes it simple and fast. Even additional management tools, like backups, are often prequalified and tested, so a smaller, vendor-assured list is common.

Not only does choosing and implementing a business architecture become vastly simpler, but long term support does as well. Instead of many vendors and internal design decisions, a single vendor with standard designs means that you know who to call for support and they understand your system and how to support it.

The assumption that everything will be hard no longer needs to be true, even if it is hard for some IT pros to believe. Hyperconvergence applies the concept of ease-of-use to the core infrastructure components of your network.

Behind the Scenes: Architecting HC3

Like any other solution vendor, at Scale Computing we are often asked what makes our solution unique. In answer to that query, let’s talk about some of the technical foundation and internal architecture of HC3 and our approach to hyperconvergence.

The Whole Enchilada

With HC3, we own the entire software stack, which includes storage, virtualization, backup/DR, and management. Owning the stack is important because it means there are no technology barriers, and no dependence on access to other vendors’ technologies, as we develop the solution. This allows us to build a storage system, hypervisor, backup/DR tools, and management tools that work together in the best way possible.

Storage

At the heart of HC3 is our SCRIBE storage management system. This is a complete storage system developed and built in house specifically for use in HC3. Using a storage striping model similar to RAID 10, SCRIBE stripes storage across every disk of every node in a cluster. All storage in the cluster is always part of a single cluster-wide storage pool, requiring no manual configuration. New storage added to the cluster is automatically added to the storage pool. The only aspect of storage that the administrator manages is creation of virtual disks for VMs.
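To make the wide-striping idea concrete, here is a minimal Python sketch of RAID 10-style placement: every block is mirrored on two disks that live on different nodes, and consecutive blocks rotate across the whole cluster-wide pool. This is only an illustration of the behavior described above, with hypothetical functions, not the actual SCRIBE algorithm.

```python
# Illustrative sketch only: RAID 10-style wide striping across a cluster-wide
# disk pool. Not the actual SCRIBE implementation; just a model of the behavior
# described above (mirror every block, rotate placement across all nodes).

def build_pool(nodes, disks_per_node):
    """Every disk of every node belongs to a single cluster-wide pool."""
    return [(node, disk) for node in range(nodes) for disk in range(disks_per_node)]

def place_block(block_id, pool):
    """Return two (node, disk) mirror locations, always on different nodes."""
    primary = pool[block_id % len(pool)]
    # The mirror copy goes to another node so a whole-node failure loses no data.
    candidates = [d for d in pool if d[0] != primary[0]]
    mirror = candidates[block_id % len(candidates)]
    return primary, mirror

if __name__ == "__main__":
    pool = build_pool(nodes=3, disks_per_node=4)   # adding a node simply grows the pool
    for block in range(6):
        print(block, place_block(block, pool))
```

The property the sketch tries to capture is that adding a node or disk simply enlarges the pool; there are no volumes or RAID groups to reconfigure.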

The ease of use of HC3 storage is not even the best part. What is really worth talking about is how virtual disks for VMs on HC3 access storage blocks from SCRIBE as if they were direct attached storage being consumed on a physical server, with no layered storage protocols. There is no iSCSI, no NFS, no SMB or CIFS, no VMFS, nor any other protocol or file system. There is also no need in SCRIBE for virtual storage appliance (VSA) VMs, which are notorious resource hogs. The file system laid down by the guest OS in the VM is the only file system in the stack, because SCRIBE is not a file system; SCRIBE is a block engine. The absence of the storage protocols that would sit between VMs and virtual disks in other virtualization systems means the I/O paths in HC3 are greatly simplified and thus more efficient.

Without ownership of both the storage and the hypervisor, made possible by creating our own SCRIBE storage management system, we could not have achieved this level of efficient integration with the hypervisor.

Hypervisor

Luckily we did not need to completely reinvent virtualization, but were able to base our own HyperCore hypervisor on industry-trusted, open-source KVM. Having complete control over our KVM-based hypervisor not only allowed us to tightly embed the storage with the hypervisor, but also allowed us to implement our own set of hypervisor features to complete the solution.

One of the ways we were able to improve upon existing standard virtualization features was through our thin cloning capability. We took the advantages of linked cloning, a common feature in other hypervisors, and eliminated the disadvantage of the parent/child dependency. Our thin clones are just as efficient as linked clones but are not vulnerable to dependency issues with parent VMs.
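As a rough mental model (my own sketch, not Scale’s published implementation), the difference can be pictured like this: a linked clone keeps a live reference to its parent and breaks if the parent disappears, while a thin clone copies only the block map at creation time, sharing data blocks copy-on-write with no runtime dependency on a parent object.

```python
# Hedged illustration of linked clones vs. thin clones; the real HyperCore and
# SCRIBE data structures are not described in this post.

class LinkedClone:
    def __init__(self, parent_disk):
        self.parent = parent_disk        # permanent dependency on the parent
        self.delta = {}                  # only changed blocks are stored locally

    def read(self, block):
        if block in self.delta:
            return self.delta[block]
        return self.parent.read(block)   # fails if the parent is ever removed


class ThinClone:
    def __init__(self, source_block_map):
        # Copy the block *map* (metadata only); data blocks stay shared until
        # either side writes, and there is no parent object left to depend on.
        self.block_map = dict(source_block_map)

    def write(self, block, new_block_ref):
        self.block_map[block] = new_block_ref   # copy-on-write: point at a new block
```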

Ownership of the hypervisor allows us to continue developing new, more advanced virtualization features and gives us complete control over the management and security of the solution. One of the ways hypervisor ownership has benefited our HC3 customers most is in our ability to build in backup and disaster recovery features.

Backup/DR

Even more important than our storage efficiency and development ease, our ownership of the hypervisor and storage allows us to implement a variety of backup and replication capabilities to provide a comprehensive disaster recovery solution built into HC3. Efficient, snapshot-based backup and replication is native to all HC3 VMs and allows us to provide our own hosted DRaaS solution for HC3 customers without requiring any additional software.

Our snapshot-based backup/replication comes with a simple yet very flexible scheduling mechanism, supporting intervals as short as every 5 minutes, which provides a very low RPO for DR. We were also able to leverage our thin cloning technology to provide quick and easy failover with an equally efficient change-only restore and failback. We are finding more and more of our customers looking to HC3 to replace their legacy third-party backup and DR solutions.
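To put the RPO claim in simple terms: with snapshots taken and replicated on an interval, the worst-case data loss after a failure is roughly one interval plus any replication lag. The 5-minute minimum interval comes from the paragraph above; the rest of this small sketch is my own illustration.

```python
# Illustrative only: worst-case RPO for interval-based snapshot replication.
# The 5-minute minimum schedule is from the post; replication lag is assumed.

from datetime import timedelta

def worst_case_rpo(snapshot_interval_min, replication_lag_min=0):
    """Data written right after a snapshot is at risk until the next one ships."""
    return timedelta(minutes=snapshot_interval_min + replication_lag_min)

if __name__ == "__main__":
    print(worst_case_rpo(5))       # 0:05:00 at the minimum 5-minute schedule
    print(worst_case_rpo(60, 2))   # hourly snapshots plus 2 minutes of lag
```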

Management

By owning the storage, hypervisor, and backup/DR software, HC3 is able to have a single, unified, web-based management interface for the entire stack. All day-to-day management tasks can be performed from this single interface. The only other interface ever needed is a command line accessed directly on each node for initial cluster configuration during deployment.

The ownership and integration of the entire stack allows for a simple view of both physical and virtual objects within an HC3 system and at-a-glance monitoring. Real-time statistics for disk utilization, CPU utilization, RAM utilization, and IOPS allow administrators to quickly identify resource related issues as they are occurring. Setting up backups and replication and performing failover and failback is also built right into the interface.

Summary

Ownership of the entire software stack from the storage to the hypervisor to the features and management allows Scale Computing to fully focus on efficiency and ease of use. We would not be able to have the same levels of streamlined efficiency, automation, and simplicity by trying to integrate third party solutions.

The simplicity, scalability, and availability of HC3 happen because our talented development team has the freedom to reimagine how infrastructure should be done, avoiding inefficiencies found in other vendor solutions that have been dragged along from pre-virtualization technology.

3-Node Minimum? Not So Fast

For a long time, when you purchased HC3, you were told there was a 3-node minimum. A minimum of 3 nodes is what is required to create a resilient, highly available cluster, and the HC3 architecture, based on this 3-node cluster design, prevents data loss even in the event of a whole-node failure. Despite these compelling reasons to require 3 nodes, Scale Computing last week announced a new single node appliance configuration. Why now?

Recent product updates have enhanced the replication and disaster recovery capabilities of HC3 to make a single node appliance a compelling solution in several scenarios. One such scenario is the distributed enterprise. Organizations with multiple remote or branch offices may not have infrastructure requirements that warrant a 3-node cluster. Instead, they can benefit from a single node appliance as a right-sized solution for their infrastructure.


In a remote or branch office, a single node can run a number of workloads and easily be managed remotely from a central office. In spite of the lack of clustered, local high availability, single nodes can easily be replicated for DR back to an HC3 cluster at the central office, giving them a high level of protection. Deploying single nodes in this way offers an infrastructure solution for distributed enterprise that is both simple and affordable.

Another compelling scenario where a single node makes perfect sense is as a DR target for an HC3 cluster. Built-in replication can be configured quickly, without extra software, to a single HC3 node located locally or remotely. While you will likely want the local high availability and data protection a 3-node cluster provides for primary production, a single node may suffice for a DR strategy where you only need to fail over your most critical VMs to continue operations temporarily. This use of a single node appliance is cost effective and provides a high level of protection for your business.

Replication

Finally, although a single node has no clustered high availability, in very small environments a single node appliance can be deployed with a second appliance as a DR target, providing a level of data loss and availability that is acceptable for many small businesses. The ease of deployment, ease of management, and DR capabilities of a full-blown HC3 cluster are the same reasons to love the single node appliance for HC3.

Find out more about the single node appliance configuration (or as I like to call it, the SNAC-size HC3) in our press release and solution brief.


IT Infrastructure: Deploy. Integrate. Repeat.

Have you ever wondered if you are stuck in an IT infrastructure loop, continuously deploying the same types of components and integrating them into an overall infrastructure architecture? Servers for CPU and RAM, storage appliances, hypervisor software, and disaster recovery software/appliances are just some of the components from different vendors that you’ve put together to create your IT infrastructure.

This model of infrastructure design, combining components from different vendors, has been around for at least a couple of decades. Virtualization has reduced the hardware footprint, but it added one more component, the hypervisor, to the overall mix. As component technologies like compute and storage have evolved alongside the rise of virtualization, they have been modified to function together but have not necessarily been optimized for efficiency.

Take storage, for example. SANs were an obvious fit for virtualization early on. However, the layers of storage protocols and virtual storage appliances that glued the SAN to the virtualization stack were inefficient. If not for SSD storage, the performance of these systems would be unacceptable at best. But IT continues to implement these architectures because it has been done this way for so long, regardless of the inherent inefficiencies. Luckily, the next generation of infrastructure has arrived in the form of hyperconvergence to break this routine.

Hyperconverged infrastructure (HCI) combines compute, storage, virtualization, and even disaster recovery into a single appliance that can be clustered for high availability.  No more purchasing all of the components separately from different vendors, no more making sure all of the components are compatible, and no more dealing with support and maintenance from multiple vendors on different schedules.

Not all HCI systems are equal, though, as some still rely on separate components. Some use third-party hypervisors that require separate licensing. Some still adhere to SAN architectures that require virtual storage appliances (VSAs) or other inefficient storage designs with excessive resource overhead, relying on SSD caching to overcome those inefficiencies.

Not only does HCI reduce vendor management and complexity, but when done correctly it embeds storage in the hypervisor and offers it as a direct attached, block access storage system to VM workloads. This significantly improves storage I/O performance for virtualization. The architecture provides excellent performance on spinning disk, so when SSD is added as a second storage tier, storage performance improves even further. And because the storage is included in the appliance, there is no separate SAN appliance to manage.

HCI goes even further in simplifying IT infrastructure by allowing the whole system to be managed from a single interface. Because the architecture is managed as a single unit and prevalidated, no effort is spent making sure the various components work together. When the system is truly hyperconverged, including the hypervisor, there is greater control over automation, so software and firmware updates can be done without disruption to running VMs. And for scaling out, new appliances can be added to a cluster without disruption as well.
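The non-disruptive update behavior described above generally comes down to a rolling procedure: drain one node at a time with live migration, update it, confirm it is healthy, and move on. Here is a hedged sketch of that loop; the cluster object and its methods are hypothetical placeholders, not Scale’s actual API.

```python
# Hypothetical sketch of a rolling, non-disruptive cluster update. The helper
# methods (live_migrate, apply_update, ...) are placeholders, not a real API.

def rolling_update(cluster):
    for node in cluster.nodes:
        # Evacuate running VMs to the remaining nodes so workloads stay online.
        for vm in list(node.vms):
            target = cluster.pick_target(exclude=node)
            cluster.live_migrate(vm, target)
        cluster.apply_update(node)        # firmware/software update on an empty node
        cluster.wait_until_healthy(node)  # verify before touching the next node
    cluster.rebalance()                   # spread VMs back out when finished
```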

The result of these simplifications and improvements is an infrastructure that can be deployed quickly, scaled easily, and managed with very little effort. It embodies many of the benefits of the cloud, where the infrastructure is virtually transparent. Instead of spending time on hardware and infrastructure, administrators can focus on managing apps and processes.

Infrastructure should no longer require certified storage experts, virtualization experts, or any kind of hardware experts. Administrators should no longer need entire weekends or month-long projects to deploy and integrate infrastructure or spend sleepless nights dealing with failures. Hyperconvergence breaks the cycle of infrastructure as a variety of different vendors and components. Instead, it makes infrastructure a simple, available, and trusted commodity.


Scale Computing Keeps Storage Simple and Efficient

Hyperconvergence is the combination of storage, compute, and virtualization. In a traditional virtualization architecture, combining these three components from different vendors can be complex and unwieldy without the right number of experts and administrators. When hyperconverged into a single solution, the complexity can be eliminated, if done correctly.

At Scale Computing, we looked at the traditional architecture to identify the complexity we wanted to eliminate. The storage architecture that used SAN or NAS storage for virtualization turned out to be very complex. To translate storage from the SAN or NAS to a virtual machine, we counted 7 layers of object files, file systems, and protocols that I/O had to traverse to get from the VM to the hardware. Why was this the case?

Because the storage system and the hypervisor were from different vendors, and not designed specifically to work with each other, they needed these layers of protocol translation to integrate. The solution at Scale Computing for our HC3 was to own the hypervisor (HyperCore OS) and the storage system (SCRIBE) so we could eliminate these extra layers and make storage work with VMs just like direct attached storage works with a traditional server. I call it a Block Access, Direct Attached Storage System because I like the acronym.
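To picture what those layers might look like, here is an illustrative comparison of a traditional SAN-backed virtualization I/O path with the direct block path described above. The traditional list is my own plausible enumeration, not the exact seven layers counted by Scale.

```python
# Illustrative comparison only; the exact layers Scale counted are not
# enumerated in the post, so the traditional path below is a plausible guess.

TRADITIONAL_IO_PATH = [
    "guest OS file system inside the VM",
    "virtual disk image file (e.g. VMDK/VHD)",
    "hypervisor datastore file system (e.g. VMFS or NFS)",
    "storage protocol initiator (e.g. iSCSI)",
    "storage network",
    "SAN/NAS controller and its internal file/object layer",
    "RAID layer and physical disks",
]

HC3_IO_PATH = [
    "guest OS file system inside the VM",
    "SCRIBE block engine",
    "physical disks (wide striped across the cluster)",
]

if __name__ == "__main__":
    print(len(TRADITIONAL_IO_PATH), "layers vs.", len(HC3_IO_PATH))
```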

Why didn’t other “hyperconverged” vendors do the same? Primarily because they are not really hyperconverged and they don’t own the hypervisor. As with traditional virtualization architectures, having the storage and hypervisor come from different vendors prevents efficiently integrated storage for VMs. These are storage systems designed to support one or more third-party hypervisors, and they generally use virtual storage appliances (VSAs) with more or less the same storage architecture as the traditional virtualization I mentioned earlier.

VSAs not only add to the inefficiency but they consume CPU and RAM resources that could otherwise be used by VM workloads.  To overcome these inefficiencies, these solutions use flash storage for caching to avoid performance issues. In some cases, these solutions have added extra processing cards to their hardware nodes to offload processing. Without being able to provide efficient storage on commodity hardware, they just can’t compete with the low price AND storage efficiency of the HC3.

The efficiency of design behind HC3’s performance and low price is only part of the story. We also designed the storage to combine all of the disks in a cluster into a single pool that is wide striped across the cluster for redundancy and high availability. This pooling also allows complete flexibility of storage usage across all nodes. The storage pool can contain both SSD and HDD tiers, and both tiers are wide striped, highly available, and accessible across the entire virtualization cluster, even on nodes that have no physical SSD drives.

To keep tiering both simple and efficient, we designed our own automated tiering mechanism that automatically places the blocks of data with the highest I/O on the SSD tier. By default, the storage optimizes the SSD tier for the best overall storage efficiency without anything to manage. We wanted to eliminate the idea that someone would need a degree or certification in storage to use virtualization.

We did recognize that users might occasionally need some control over storage performance, so we implemented a simple tuning mechanism that gives each disk in a cluster a relative level of SSD utilization priority within the cluster. This means you can tune a disk up or down, on the fly, if you know that it requires less or more I/O and SSD than other disks. You don’t need to know how much SSD it needs, only that it needs less or more than other disks in the cluster; the automation takes care of the rest. There are 12 levels of prioritization, from 0 (no SSD) up to 11 (place all of the disk’s data on SSD, capacity permitting).
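Here is a hedged sketch of how a 0-11 priority setting could combine with per-block I/O heat to decide what lands on the SSD tier. The weighting is purely my own illustration; the actual HC3 tiering algorithm is not spelled out in this post.

```python
# Illustrative only: rank blocks for the SSD tier by combining recent I/O heat
# with the 0-11 priority of the virtual disk each block belongs to. Not the
# actual HC3 implementation.

def rank_blocks_for_ssd(blocks, ssd_capacity_blocks):
    """blocks: dicts with 'id', 'heat' (recent I/O count), 'priority' (0-11)."""
    eligible = [b for b in blocks if b["priority"] > 0]   # priority 0: never on SSD
    eligible.sort(key=lambda b: b["heat"] * b["priority"], reverse=True)
    return [b["id"] for b in eligible[:ssd_capacity_blocks]]

if __name__ == "__main__":
    blocks = [
        {"id": "a", "heat": 500, "priority": 4},    # middle-of-the-road priority
        {"id": "b", "heat": 900, "priority": 0},    # tuned down: stays on HDD
        {"id": "c", "heat": 100, "priority": 11},   # tuned up: favored for SSD
    ]
    print(rank_blocks_for_ssd(blocks, ssd_capacity_blocks=2))   # ['a', 'c']
```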


The result of all of the design considerations for HC3 at Scale Computing is simplicity for efficiency, ease of use, and low cost. We’re different and we want to be. It’s as simple as that.


The King is Dead. Long Live the King!

With a title like Death by 1,000 cuts: Mainstream storage array suppliers are bleeding, I couldn’t help but read Chris Mellor’s article on the decline of traditional storage arrays. It starts off just as strong with:

Great beasts can be killed by a 1,000 cuts, bleeding to death from the myriad slashes in their bodies – none of which, on their own, is a killer. And this, it seems, is the way things are going for big-brand storage arrays, as upstarts slice away at the market…

And his reasons as to why are spot on from what we have seen in our target customer segment for HC3.

the classic storage array was under attack because it was becoming too limiting, complex and expensive for more and more use-cases.

Looking at our own use-case for HC3, storage array adoption in our target segment (the SMB) rose with the demand for virtualization, providing shared storage for things like live migration and failover of VMs. It was a necessary evil to ensure that critical workloads weren’t going to go down for days or even weeks in the event of a hardware failure.

SMB IT Challenges

There was a recent article focused on the benefits that city, state, and local governments have gained from implementing HyperConvergence. (Side note: the article was brought to my attention in a new HyperConvergence group on LinkedIn, where such articles are being posted and discussed, for anyone interested in joining.) The benefits cited in the article were:

  • Ease of management,
  • Fault tolerance,
  • Redundancy, and late in the article…
  • Scalability.

I’m sure it isn’t surprising given our core messaging around Scale’s HC3 (Simplicity, High Availability and Scalability), but I agree wholeheartedly with the assessment.

It occurred to me that the writer literally could have picked any industry and the same story could have been told.  When the IT Director from Cochise County, AZ says:

“I’ve seen an uptick in hardware failures that are directly related to our aging servers”,

it could just as easily have been the Director of IT at the manufacturing company down the street. Or when the City of Brighton, Colorado’s Assistant Director of IT is quoted as saying,

“The demand (for storage and compute resources) kept growing and IT had to grow along with it”,

that could have come out of the mouth of just about any of the customers I talk to each week.

The Next-Generation Server Room

There was a recent article on Network Computing regarding the Next Generation Data Center that got me thinking about our SMB target customer and the next-generation server room. Both the enterprise and the SMB face the influx of traffic growth described in the article (clearly at different levels, but an influx nonetheless). So how will the SMB cope? How will an IT organization with limited time and money react? By focusing on Simplicity in the infrastructure.

Elimination of Legacy Storage Protocols through Hypervisor Convergence

There is an ongoing trend to virtualize workloads in the SMB that traditionally meant adding a SAN or a NAS to provide shared storage for high availability.  With the introduction of Hypervisor Converged architectures through products like Scale’s HC3, that requirement no longer exists.  In this model, end users can take advantage of the benefits of high availability without the complexity that comes with legacy storage protocols like iSCSI or NFS.  Not only does this reduce the management overhead of the shared storage, it also simplifies the vendor support model dramatically.  In the event of an issue, a single vendor can be called for support with no ability to place the blame on another component in the stack.

Simplicity in Scaling

Moore’s Law continues to hold as better, faster, and cheaper equipment becomes available year after year. By implementing a scale-out architecture, IT organizations can take advantage of this by purchasing what they need today, knowing they can buy equipment at tomorrow’s prices to scale out resources when the need arises. The ability to mix and match hardware types in a hypervisor converged model also means that users have granularity in their scaling to match the requirements of the workloads at the time, such as adding a storage-only node to an HC3 compute cluster to scale out only the storage resources.

What is Hypervisor Convergence: The Infrastructure Convergence Continuum Blog Series – DIY Architecture (Part 1 of 4)

Converging the hardware and software components needed in an SMB virtualization deployment is a hot trend in the industry.  Terms like “converged infrastructure”, “hyper-convergence”, “hypervisor convergence” and “software-defined (fill in the blank)” have all emerged alongside the trend and just as quickly as they were defined, most have lost their meaning from both overuse and misuse.

In this series of blog posts, we will attempt to re-establish these definitions within the framework of the Convergence Continuum below:

Infrastructure Convergence Continuum

Before we address convergence though, let’s set the stage by describing the traditional model of creating a virtualization environment with high availability.

Build Your Own / DIY

This is typically made up of VMware or Hyper-V plus brand-name servers (Dell, HP, IBM, etc.) acting as hosts, and a SAN or NAS (EMC VNXe, Dell EqualLogic, HP LeftHand, NetApp, etc.) networked together to provide redundancy. The DIY architecture is tried and true, and when architected correctly it effectively offers all of the benefits of single-server virtualization, such as partitioning, isolation, encapsulation, and hardware independence, along with high availability of VMs. An example architecture might look like:

DIY Virtualization Architecture

The downside to this approach is that it is complex to implement and manage. Each layer in the stack adds a management requirement (virtualization management, SAN/NAS management, and network management) as well as an additional vendor in the support environment, which often leads to finger pointing unless the hardware compatibility list of each vendor is strictly followed. This complexity is a burden for those who implement a DIY environment, as it often requires specialized training in one or more of the layers involved. The IT generalist in the mid-market targeted by Scale Computing often relies on a Value Added Reseller to implement and help manage such a solution, which adds to the overall cost of implementation and maintenance.

Monolithic Storage – Single Point of Failure

The architecture above relies on multiple servers and hypervisors sharing a common storage system, which makes that system a critical single point of failure for the entire infrastructure. This is commonly referred to in the industry as a 3-2-1 architecture, with the 1 representing the single shared storage system that all servers and VMs depend on (also called the inverted pyramid of doom). While “scale-out” storage systems have been available to distribute storage processing and redundancy across multiple independent “nodes”, the hardware cost and additional networking required for scale-out storage architectures originally restricted these solutions to very select applications.

Down the Path of Convergence

Now that we have the basics of the DIY architecture down, we can continue down the path of convergence to Reference Architectures and Converged Solutions, which we will define in our next post. Stay tuned for more!