Tag Archives: high availability

What do DDOS Attacks Mean for Cloud Users?

Last Friday, a DDOS attack disrupted major parts of the internet in both North America and Europe. The attack appears to have been largely targeted at DNS provider Dyn, disrupting access to major services such as Level 3, Zendesk, Okta, GitHub, PayPal, and more, according to sources like Gizmodo. This kind of botnet-driven DDOS attack is a harbinger of future attacks that can be carried out over an increasingly connected world of Internet of Things (IoT) devices, many of them poorly secured.

Level 3 outage, October 2016 (DownDetector)

This disruption highlights a particular vulnerability for businesses that have chosen to rely on cloud-based services like IaaS, SaaS, or PaaS. The ability to connect to these services is critical to business operations; even if the service itself is still running, when users cannot connect it is effectively downtime.  What is particularly scary about these attacks for small and midmarket organizations is that they become victims of circumstance, collateral damage from attacks directed at larger targets.
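
The point that connectivity, not just the service being up, defines downtime can be made concrete with a short reachability check. Below is a minimal sketch in Python (the hostnames and port are hypothetical placeholders, not endpoints named in this post) that reports whether a cloud service still resolves in DNS and accepts connections from your network:

    # Minimal sketch: flag cloud services that no longer resolve or accept connections.
    # The hostnames below are hypothetical; substitute the SaaS/IaaS endpoints you rely on.
    import socket

    SERVICES = ["app.example-saas.com", "api.example-iaas.com"]
    PORT = 443      # HTTPS
    TIMEOUT = 5     # seconds

    for host in SERVICES:
        try:
            socket.getaddrinfo(host, PORT)  # DNS resolution -- the layer a provider like Dyn supplies
            with socket.create_connection((host, PORT), timeout=TIMEOUT):
                print(f"{host}: reachable")
        except socket.gaierror:
            print(f"{host}: DNS resolution failed -- effectively downtime for your users")
        except OSError:
            print(f"{host}: resolved, but the connection failed")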

As the IoT becomes more of a reality, with more and more devices of questionable security joining the internet, the potential for these attacks, and their severity, is likely to increase. I recently wrote about how to compare cloud computing and on-prem hyperconverged infrastructure (HCI) solutions, and one of the decision points was reliance on the internet. So it is not only a matter of ensuring a stable internet provider, but also of the stability of the internet in general, given the possibility of attacks targeting any number of different services.

Organizations running services on-prem were not affected by this attack because it did not touch internal network environments. Choosing to run infrastructure and services internally definitely mitigates the risk of outages caused by external forces, such as collateral damage from attacks on service providers. Many organizations choose cloud services for simplicity and convenience, because traditional IT infrastructure, even with virtualization, is complex and can be difficult to implement, particularly for small and midsize organizations. Only recently has hyperconverged infrastructure made on-prem infrastructure as simple to use as the cloud.

It is still uncertain how organizations will ultimately balance their IT infrastructure between on-prem and cloud in what is loosely called hybrid cloud; most likely the balance will keep evolving as new technologies emerge. At the moment, however, organizations can choose easy-to-use hyperconverged infrastructure for increased security and stability, or cloud providers for completely hands-off management and reliance on a third party.

As I mentioned in my cloud vs. HCI article, there are valid reasons to go with either, and the answer will likely be a combination of the two. Organizations should be aware that on-prem IT infrastructure no longer needs to be a complicated mess of server vendors, storage vendors, hypervisor vendors, and DR solution vendors. Hyperconverged infrastructure is a viable option for organizations of any size to keep services on-prem, stable, and secure against collateral DDOS damage.


The King is Dead. Long Live the King!

With a title like Death by 1,000 cuts: Mainstream storage array suppliers are bleeding, I couldn’t help but read Chris Mellor’s article on the decline of traditional storage arrays.  It starts off just as strong with:

Great beasts can be killed by a 1,000 cuts, bleeding to death from the myriad slashes in their bodies – none of which, on their own, is a killer. And this, it seems, is the way things are going for big-brand storage arrays, as upstarts slice away at the market…

And his reasons why are spot on, judging from what we have seen in our target customer segment for HC3.

the classic storage array was under attack because it was becoming too limiting, complex and expensive for more and more use-cases.

Looking at our own use case for HC3, storage array adoption in our target segment (the SMB) rose with the demand for virtualization, since shared storage was needed for things like live migration and failover of VMs.  The array was a necessary evil, the price of knowing that critical workloads weren’t going to go down for days or even weeks in the event of a hardware failure. Continue reading

SMB IT Challenges

There was a recent article that focused on the benefits that city, state, and local governments have gained from implementing HyperConvergence (side note: the article was brought to my attention in a new HyperConvergence group on LinkedIn where such articles are being posted and discussed, for anyone interested in joining).  The benefits cited in the article were:

  • Ease of management,
  • Fault tolerance,
  • Redundancy, and late in the article…
  • Scalability.

I’m sure it isn’t surprising given our core messaging around Scale’s HC3 (Simplicity, High Availability and Scalability), but I agree wholeheartedly with the assessment.

It occurred to me that the writer could literally have picked any industry and told the same story.  When the IT Director from Cochise County, AZ says:

“I’ve seen an uptick in hardware failures that are directly related to our aging servers”,

it could just as easily have been the Director of IT at the manufacturing company down the street.  Or when the City of Brighton, Colorado’s Assistant Director of IT is quoted as saying,

“The demand (for storage and compute resources) kept growing and IT had to grow along with it”,

that could have come out of the mouth of just about any of the customers I talk to each week. Continue reading

What is Hypervisor Convergence: The Infrastructure Convergence Continuum Blog Series – Reference Architecture (Part 2 of 4)

Infrastructure Convergence Continuum


In our last post on the Infrastructure Convergence Continuum, we focused on the Build Your Own / DIY architecture for virtualization infrastructure.  The architectural limitations of that approach ("the inverted pyramid of doom"), which we addressed in the first post, are worth reviewing as a baseline for today’s post.  Why? Spoiler alert: the Reference Architecture and Converged Architecture we’ll be covering today share that same underlying architecture. Continue reading

What is Hypervisor Convergence: The Infrastructure Convergence Continuum Blog Series – DIY Architecture (Part 1 of 4)

Converging the hardware and software components needed in an SMB virtualization deployment is a hot trend in the industry.  Terms like “converged infrastructure”, “hyper-convergence”, “hypervisor convergence” and “software-defined (fill in the blank)” have all emerged alongside the trend and just as quickly as they were defined, most have lost their meaning from both overuse and misuse.

In this series of blog posts, we will attempt to re-establish these definitions within the framework of the Convergence Continuum below:

Infrastructure Convergence Continuum

 Before we address convergence though, let’s set the stage by describing the traditional model of creating a virtualization environment with high availability.

Build Your Own / DIY

This is typically made up of VMware or Hyper-V plus brand-name servers (Dell, HP, IBM, etc.) acting as hosts and a SAN or NAS (EMC VNXe, Dell EqualLogic, HP LeftHand, NetApp, etc.) networked together to provide redundancy.  The DIY architecture is tried and true and, when architected correctly, effectively offers all of the benefits of single-server virtualization, such as partitioning, isolation, encapsulation, and hardware independence, along with High Availability of VMs.  An example architecture might look like:

DIY Virtualization Architecture

The downside to this approach is that it is complex to implement and manage.  Each layer in the stack brings its own management requirement (virtualization management, SAN/NAS management, and network management) as well as an additional vendor in the support environment, which often leads to finger pointing unless the hardware compatibility list of each vendor is strictly followed.  This complexity is a burden for those who implement a DIY environment, as it often requires specialized training in one or more of the layers involved.  The IT generalist in the mid-market targeted by Scale Computing often relies on a Value Added Reseller to implement and help manage such a solution, which adds to the overall cost of implementing and maintaining it.

Monolithic Storage – Single Point of Failure

The architecture above relies on multiple servers and hypervisors sharing a common storage system, which makes that system a critical single point of failure for the entire infrastructure. This is commonly referred to in the industry as the 3-2-1 architecture, with the 1 representing the single shared storage system that all servers and VMs depend on (also called the inverted pyramid of doom).  While "scale-out" storage systems have been available to distribute storage processing and redundancy across multiple independent "nodes", the hardware cost and additional networking required for scale-out storage architectures originally restricted these solutions to a select set of applications.
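
A rough back-of-the-envelope calculation shows why the shared storage layer dominates the availability of the whole stack. The figures below are illustrative assumptions, not measurements from any vendor:

    # Sketch of 3-2-1 ("inverted pyramid") availability using assumed, illustrative figures.
    A_HOST = 0.99    # assumed availability of a single server/hypervisor host
    A_SAN = 0.999    # assumed availability of the shared storage system
    A_NET = 0.999    # assumed availability of the storage network
    N_HOSTS = 3

    # Hosts are redundant: the cluster survives as long as at least one host is up.
    a_hosts = 1 - (1 - A_HOST) ** N_HOSTS

    # The shared storage and its network are single points of failure,
    # so their availability multiplies straight through.
    a_stack = a_hosts * A_SAN * A_NET

    print(f"host tier availability:   {a_hosts:.6f}")
    print(f"whole-stack availability: {a_stack:.6f}")  # capped by the SAN and network, not the hosts

Adding more hosts pushes the host-tier term toward 1, but the stack as a whole can never be more available than the shared storage it depends on.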

Down the Path of Convergence

Now that we have the basics of the DIY architecture down, we can continue down the path of convergence to Reference Architectures and Converged Solutions, which we will define in our next post.  Stay tuned for more!


Virtualizing Microsoft Exchange on HC3

Virtualizing Microsoft Exchange is one of the primary use cases we see among HC3 customers. The move to virtualizing Exchange has gained traction as companies use the normal cycle of hardware refreshes and operating system upgrades as an opportunity to consolidate servers in a virtualized environment.  These companies seek to take advantage of:

  • Better availability;
  • Flexibility in managing unplanned growth (both performance and capacity); and
  • Lower costs from better hardware utilization. Continue reading

A Move from VMware to HC3

Many of Scale’s HC3 customers come to us from a traditional Do-It-Yourself virtualization environment, where they combined piecemeal parts, including VMware’s hypervisor, into a complex solution that provides the high availability expected of their infrastructure.  Fed up with the complexity (or, more often, the vTax at licensing renewal) associated with that setup, they eventually find HC3 as a solution that provides the simplicity, scalability, and high availability they need at an affordable price.

I just returned from the Midmarket CIO Forum last week where 98% of the CIOs I spoke to had implemented some form of the VMware environment described above (the other 2% were Hyper-V, but the story of vTax still rang true!).  We met with 7 boardrooms full of CIOs who all reacted the same to the demo of HC3: “This sounds too good to be true!”  To which I like to reply, “Yeah, we get that a lot.” 🙂

After the initial shock of seeing HC3 for the first time, pragmatism inevitably takes over.  The questions then become, “How do I migrate from VMware to HC3?” or “How can I use HC3 alongside my existing VMware environment?”   I spent the majority of my week talking through the transition strategies we have seen some of our 600+ HC3 customers use when migrating VMware VMs to HC3 (the V2V process). Continue reading
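
The exact V2V path varies by customer, but the disk-format step is the easiest part to picture. As a generic illustration only (this is not Scale’s documented migration procedure, and the paths are hypothetical), the widely used qemu-img utility can convert an exported VMware VMDK into a format a KVM-based platform can import:

    # Generic illustration of one V2V step: converting a VMware disk image with qemu-img.
    # Paths and output format are hypothetical; follow your platform's import
    # documentation for the actual supported workflow.
    import subprocess

    src = "/exports/mail01/mail01-flat.vmdk"   # hypothetical exported VMware disk
    dst = "/imports/mail01.qcow2"              # hypothetical destination image

    subprocess.run(
        ["qemu-img", "convert", "-p", "-f", "vmdk", "-O", "qcow2", src, dst],
        check=True,  # raise an error if qemu-img fails
    )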

Disaster Recovery and Backup Strategies for the SMB

When infrastructure (server or storage) fails in a traditional, physical environment, there is typically downtime while a complex and lengthy recovery from backups is carried out.  In most cases, this means time spent obtaining and setting up identical replacement hardware, then additional time recovering the operating system, applications, and data from the backups. Continue reading
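
A quick back-of-the-envelope sum shows why this kind of recovery is often measured in days rather than minutes. The phase durations below are assumptions chosen only to illustrate the arithmetic:

    # Illustrative recovery-time estimate for a traditional physical environment.
    # Every duration here is an assumption, not a benchmark.
    phases_hours = {
        "source replacement hardware": 48,   # procurement and shipping
        "rack, cable, and configure": 8,
        "reinstall operating system": 4,
        "reinstall applications": 6,
        "restore data from backups": 12,
    }

    total = sum(phases_hours.values())
    for phase, hours in phases_hours.items():
        print(f"{phase:<30}{hours:>4} h")
    print(f"{'estimated downtime':<30}{total:>4} h  (~{total / 24:.1f} days)")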

HC3x: Introducing Scale Computing’s all performance SAS product line

“Good news everyone!” HC3x has just been announced.  For the last few months, we have internally referred to this platform under the code name “MegaFonzie.”  Those of you familiar with Futurama probably know that Mega Fonzies are units used to determine how cool someone is (hence the picture of Professor Farnsworth) …and HC3x is off the charts!  If your response is, “Balderdash…I’ll be the judge of what’s cool” then grab your cool-o-meter and let’s walk through this new hardware together.

Survey: Cloud Computing Takes a Backseat to On-Site Virtualization [Infographic]

Today, Scale Computing released the results of a market survey conducted by ApplicationContinuity.org. Sponsored by the developers of HC3, the report showcases why midmarket organizations are embracing on-premises virtualization over the cloud, the driving factors behind this decision, and what alternatives companies are choosing for their mission-critical applications and data. More than 3,000 IT professionals in the US participated in the survey, which shows that nine out of ten midsize companies prefer to keep their critical applications and data local, and that cost and complexity remain key concerns for both cloud and on-site virtualization. For a complete list of survey findings, download the free report at http://bit.ly/CloudTakesABackSeat.


[Infographic] Cloud Computing Takes a Backseat to On-Site Virtualization