
Technology Becomes Obsolete. Saving Does Not.

The list of technological innovations in IT that have already passed into obsolescence is long. You might recall some not-so-ancient technologies like the floppy disk, dot matrix printers, ZIP drives, the FAT file system, and cream-colored computer enclosures. Undoubtedly these are still being used somewhere by someone, but I hope not in your data center. No, the rest of us have moved on. Technologies always fade and get replaced by newer, better technologies. Saving money, on the other hand, never goes out of style.

You see, when IT pros like you buy IT assets, you have to assume that the technology you are buying is going to be replaced in some number of years. Not because it no longer operates, but because it is no longer being manufactured or supported and has been superseded by newer, better, faster gear. This is IT. We accept this.

The real question is, are you spending too much money on the gear you are buying now when it is going to be replaced in a few years anyway? For decades, the answer has mostly been yes, and there are two reasons why: over-provisioning and complexity.

Over-Provisioning

When you are buying an IT solution, you know you are going to keep that solution for a minimum of 3-5 years before it gets replaced. Therefore you must attempt to forecast your needs 3-5 years out. This is practically impossible, but you try. Rather than risk under-provisioning, you over-provision to avoid having to upgrade or scale out later. The process of acquiring new gear is difficult. There is budget approval, research, more guesstimating of future needs, implementation, and the risk of unforeseen disasters.

But why is scaling out so difficult? Traditional IT architectures involve multiple vendors providing different components like servers, storage, hypervisors, disaster recovery, and more. There are many moving parts that might break when a new component is added into the mix. Software licensing may need to be upgraded to a higher, more expensive tier as the infrastructure grows. You don't want to worry about running out of CPU, RAM, storage, or any other compute resource because you don't want to deal with upgrading or scaling out what you already have. It is too complex.
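To put rough numbers on that trade-off, here is a back-of-envelope sketch. Every figure in it is a hypothetical assumption, not data from the article or any vendor, but it shows why paying up front for a five-year guess usually costs more than buying for today and adding capacity as growth actually happens.

```python
# A rough cost comparison. Every number here is a hypothetical assumption
# used only to illustrate the trade-off, not real pricing.
def overprovisioned_cost(day_one_capacity_tb: float, cost_per_tb: float) -> float:
    # Pay for the full five-year guess on day one, whether it is ever used or not.
    return day_one_capacity_tb * cost_per_tb

def scale_out_cost(start_tb: float, annual_growth: float, years: int,
                   cost_per_tb: float) -> float:
    # Start with what is needed today and buy only each year's actual increment.
    total = start_tb * cost_per_tb
    capacity = start_tb
    for _ in range(years):
        needed = capacity * (1 + annual_growth)
        total += (needed - capacity) * cost_per_tb
        capacity = needed
    return total

if __name__ == "__main__":
    # Hypothetical: 20 TB needed today, a 100 TB five-year guess, $300 per TB.
    print(f"Buy the 5-year guess up front: ${overprovisioned_cost(100, 300):,.0f}")
    print(f"Buy now, grow 30% per year:    ${scale_out_cost(20, 0.30, 5, 300):,.0f}")
```

The specific numbers are beside the point; the gap between the capacity you guess at and the capacity you actually end up using is money spent on gear that may sit idle until it is retired.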

Complexity

OK, I just explained how IT infrastructure can be complex with so many vendors and components. It can be downright fragile when it comes to introducing change. Complexity bites you on operational expenses as well. It requires more expertise and more training, and tasks become more time-consuming. And what about feature complexity? Are you spending too much on features that you don't need? I know I am guilty of this in a lot of ways.

I own an iPhone. It has all kinds of features I don't use. For example, I don't use Bluetooth. I just don't use external devices with my phone very often. But the feature is there and I paid for it. There are a bunch of apps and features on my phone I will likely never use, but all of those contributed to the price I paid for the phone, whether I use them or not.

I also own quite a few tools at home that I may have only used once. Was it worth it to buy them and then hardly ever use them? There is the old saying, "It is better to have it and not need it than to need it and not have it." There is some truth to that, and maybe that is why I still own those tools. But unlike IT technologies, these tools may well be useful 10, 20, even 30 years from now.

How much do you figure you could be overspending on features and functionality you may never use in some of the IT solutions you buy? Just because a solution is loaded with features and functionality does not necessarily mean it is the best solution for you. It probably just means it costs more. Maybe it also comes with a brand name that costs more. Are you really getting the right solution?

There is a Better Way

So you over-provision. You likely spend a lot to have resources and functionality that you may or may not ever use. Of course you need some overhead for normal operations, but you never really know how much you will need. Or you accidentally under-provision and end up spending too much upgrading and scaling out. Stop! There are better options.

If you haven’t noticed lately, traditional Capex spending on IT infrastructure is under scrutiny and Opex is becoming more favorable. Pay-as-you-go models like cloud computing are gaining traction as a way to prevent over-provisioning expense. Still, cloud can be extremely costly, especially if costs are not managed well. When you have nearly unlimited resources in an elastic cloud, it is easy to over-provision resources you don’t need and end up paying for them when no one is paying attention.
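To make that concrete, here is a minimal illustration of how forgotten, idle cloud resources quietly accumulate cost. The resource types and hourly rates are hypothetical, not any specific provider's pricing.

```python
# A minimal illustration of idle cloud spend. The resource types and hourly
# rates below are hypothetical, not any specific provider's pricing.
IDLE_RESOURCES = [
    # (description, count, hourly rate in USD)
    ("oversized VM left running after a test", 3, 0.38),
    ("unattached 500 GB block storage volume", 4, 0.07),
    ("idle load balancer", 2, 0.025),
]

def monthly_idle_cost(resources) -> float:
    hours_per_month = 730  # average hours in a month
    return sum(count * rate * hours_per_month for _, count, rate in resources)

if __name__ == "__main__":
    cost = monthly_idle_cost(IDLE_RESOURCES)
    print(f"Idle spend: ${cost:,.2f} per month, ${cost * 12:,.2f} per year")
```

A handful of forgotten resources like these can quietly add up to thousands of dollars a year, which is exactly the kind of spend no one notices until the bill arrives.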

Hyperconverged Infrastructure (HCI) is another option. Designed to be simple to operate and to scale out, HCI lets you use just the resources you need and add more quickly and easily when needed. HCI combines servers, storage, virtualization, and even disaster recovery into a single appliance. Those appliances can then be clustered to pool resources, provide high availability, and make scaling out easy.

HC3, from Scale Computing, is unique among HCI solutions in allowing appliances to be mixed and matched within the same cluster. This means you have great flexibility in adding just the resources you need, whether that is more compute power like CPU and RAM, or more storage. It also helps future-proof your infrastructure by letting you add newer, bigger, faster appliances to a cluster while retiring or repurposing older ones. The result is an IT infrastructure that can be scaled easily and seamlessly without having to rip and replace to meet future needs.

The bottom line is that you can save a lot of money by avoiding complexity and over-provisioning. Why waste valuable revenue on a total cost of ownership (TCO) that is too high? At Scale Computing, we can help you analyze your TCO and figure out if there is a better way for you to be operating your IT infrastructure to lower costs. Let us know if you are ready to start saving. www.scalecomputing.com

IT Refresh and My Microwave Oven

This past weekend I replaced my over-the-range microwave oven. While the act of replacing it was pretty unremarkable, the process that led me to replace it, and the result, were interesting. It got me thinking about the process by which IT groups ultimately choose to refresh infrastructure and solutions.

Let me explain what happened with my old microwave oven.

Event #1 – About 3 years ago, the front handle of the microwave broke off. I’m not sure how it happened (my sister and two of my nieces were living with me at the time), but it broke off pretty completely. No big deal. It was not hard to grab the door from underneath to open it and push it closed. It was a minor inconvenience. I wasn’t interested in replacing it.

Event #2 – Around 6 months to a year after the handle broke, the sensor or mechanism that determined whether the door was closed started failing intermittently. When you closed the door, the microwave might or might not start. You might have to open and shut the door multiple times before it started. Annoying. Did the broken door handle and the way we were now opening the door contribute to this fault? Unknown. It was annoying, but the microwave still worked. Another level of inconvenience, but I was willing to live with it.

Event #3 – Add 6 more months and the carousel failed. It was intermittent at first but finally stopped working completely. Again, the microwave still “worked” in that it emitted microwaves and heated food, but now the food needed to be rotated every 15 seconds or so to prevent hot spots. Of course, the fact that I had to open and close the door to rotate the food only made the problem of the failing door sensor more acute. It was becoming pretty inconvenient to use. But it still worked.

That should have been the last straw, right? Nope. Of course, I thought about replacing it. It was somewhere on my to-do list, but by then I had been slowly acclimating myself to the inconvenience and finding workarounds. Workarounds included things like using the conventional oven more and eating out more often. More leftovers were left to spoil in the fridge. I was modifying my behavior to adjust to the inadequacies of the microwave.

Event #4 – My sister and nieces had moved out a year ago or so, and now my girlfriend had moved in. She didn’t demand I replace the microwave or anything. There was no nagging. There was no pressure. But I wanted to replace it because I wanted her to have a reliable microwave oven. So, I finally replaced it.

My old microwave, “Old Unreliable,” was a Frigidaire. I am not knocking Frigidaire in any way. It served me well for many years before this journey to replacement. I have many other Frigidaire appliances I’m still using today.

Why did I wait so long? It was not terribly expensive or difficult to replace. With “Old Unreliable,” I was costing myself time and money by letting good leftovers go to waste and defaulting to restaurants because the microwave was such a hassle. I haven’t tried to calculate it, but I am sure the restaurant bills I racked up while avoiding the old microwave exceeded the cost of the new one, by a lot. All those tasty leftovers gone to waste…

I believe this overall scenario happens pretty regularly in IT. Admins and users have to deal with solutions that are inconvenient to use, prone to failure, and saddled with secondary costs in extra management and maintenance.

IT Admins are expected to be able to engineer some workarounds when needed, but the more workarounds needed, the more expertise and knowledge needed, which can become costly. Consider also that constantly working around clunky implementations does not usually lead to efficient productivity or innovation. As with my microwave journey, there is a point where it starts costing more to keep the existing solution rather than investing in a new solution. Those costs are sometimes subtle and grow over time, and like a frog in a pot of water, we don’t always notice when things are heating up.

How much could be gained in productivity, cost savings, and user satisfaction by investing in a new solution? “If it ain’t broke, don’t fix it,” can only take you so far, and it does not foster innovation and growth. Rather than becoming comfortable with an inadequate solution and workarounds, consider what improvements could be made with newer technology.

Scale with Increased Capacity

2016 has been a remarkable year for Scale Computing and one of our biggest achievements was the release of the HC1150 appliance. The HC1150 significantly boosted the power and capacity of our HC1000 series and featured hybrid flash storage at a very affordable price. As a result, the HC1150 is our most popular HC3 model but, of course, we couldn’t stop there.

First, we have begun offering 8TB drives in our HC1000 series appliances, nearly doubling the maximum storage capacity (and actually doubling it on the HC1100). Data sets are ever increasing in size, and this added storage capacity means you can grow even faster and more affordably, one node at a time. The unique ability of HC3 to mix and match nodes of varying capacity (and across hardware generations!) means your storage can grow as needed each time you expand your cluster.

Second, we have introduced the new HC1150D appliance for pre-sale, which doubles the CPU capacity with a second physical processor. CPU can often be the bottleneck in scaling out the number of VMs supported. With this increase in CPU capacity, the HC1150D lets an HC3 cluster support more compute power across a greater number of VMs. The HC1150D also doubles the available RAM configuration, up to 512GB per appliance.

Below is a preview of the new configuration ranges and starting pricing for the HC1000 series, including the HC1150D.

[Image: HC1000 series configuration ranges and starting pricing]

Scale Computing is committed to giving our customers the best virtualization infrastructure on the market, and we will keep integrating greater capacity and computing power into our HC3 appliances. Our focus on simplicity, scalability, and availability will continue to drive our innovation to make IT infrastructure more affordable for you. Look for more announcements to come.


What Do DDoS Attacks Mean for Cloud Users?

Last Friday, a DDoS attack disrupted major parts of the internet in both North America and Europe. The attack appears to have largely targeted DNS provider Dyn, disrupting access to major services such as Level 3, Zendesk, Okta, GitHub, PayPal, and more, according to sources like Gizmodo. This kind of botnet-driven DDoS attack is a harbinger of future attacks that can be carried out through an increasingly connected world of Internet of Things (IoT) devices, many of them poorly secured.

[Image: Level 3 outage map, October 2016, via Downdetector]

This disruption highlights a particular vulnerability for businesses that have chosen to rely on cloud-based services like IaaS, SaaS, or PaaS. The ability to connect to these services is critical to business operations, and even though the service may be running, if users cannot connect, it is still downtime. What is particularly scary about these attacks for small and midmarket organizations is that they become victims of circumstance, collateral damage from attacks directed at larger targets.

As the IoT becomes more of a reality, with more and more devices of questionable security joining the internet, both the frequency and severity of these attacks can increase. I recently wrote about how to compare cloud computing and on-prem hyperconverged infrastructure (HCI) solutions, and one of the decision points was reliance on the internet. It is not only a matter of having a stable internet provider, but also of the stability of the internet in general when attacks can target any number of the services you depend on.
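To make that reliance concrete, here is a minimal sketch of the kind of check an admin might script to see whether critical cloud dependencies are reachable. The hostnames are hypothetical placeholders, and a failure at the DNS resolution step is exactly the failure mode a Dyn-style outage produces, even when the service behind it is perfectly healthy.

```python
# A minimal sketch of a dependency check. The hostnames are hypothetical
# placeholders; a failure at the DNS step is the failure mode a Dyn-style
# outage produces even when the service behind it is healthy.
import socket
import urllib.request

CRITICAL_SERVICES = [
    "status.example-saas.com",   # hypothetical SaaS dependency
    "api.example-payments.com",  # hypothetical payment API
]

def check_service(hostname: str, timeout: float = 5.0) -> str:
    # Step 1: DNS resolution, the layer the Dyn attack disrupted.
    try:
        addr = socket.getaddrinfo(hostname, 443)[0][4][0]
    except socket.gaierror as err:
        return f"{hostname}: DNS resolution failed ({err})"
    # Step 2: basic HTTPS reachability check.
    try:
        urllib.request.urlopen(f"https://{hostname}/", timeout=timeout)
        return f"{hostname}: reachable (resolved to {addr})"
    except Exception as err:
        return f"{hostname}: resolved to {addr} but unreachable ({err})"

if __name__ == "__main__":
    for host in CRITICAL_SERVICES:
        print(check_service(host))
```

If your business stops when a check like this starts failing, you are exposed to outages you cannot fix yourself, which is the crux of the cloud reliance decision.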

Organizations running services on-prem were not affected by this attack because it did not touch internal network environments. Choosing to run infrastructure and services internally definitely mitigates the risk of outages from external forces like collateral damage from attacks on service providers. Many organizations choose cloud services for simplicity and convenience, because traditional IT infrastructure, even with virtualization, is complex and can be difficult to implement, particularly for small and midsize organizations. Only recently has hyperconverged infrastructure made on-prem infrastructure as simple to use as the cloud.

It is still uncertain how organizations will ultimately balance their IT infrastructure between on-prem and cloud in what is loosely called hybrid cloud. Most likely it will simply keep evolving as new technology emerges. At the moment, however, organizations can choose easy-to-use hyperconverged infrastructure for increased security and stability, or go with cloud providers for completely hands-off management and third-party reliance.

As I mentioned in my cloud vs. HCI article, there are valid reasons to go with either, and the answer may well be a combination of the two. Organizations should be aware that on-prem IT infrastructure no longer needs to be a complicated mess of server vendors, storage vendors, hypervisor vendors, and DR solution vendors. Hyperconverged infrastructure is a viable option for organizations of any size to keep services on-prem, stable, and secure against collateral DDoS damage.


IT Infrastructure: Deploy. Integrate. Repeat.

Have you ever wondered if you are stuck in an IT infrastructure loop, continuously deploying the same types of components and integrating them into an overall infrastructure architecture? Servers for CPU and RAM, storage appliances, hypervisor software, and disaster recovery software/appliances are just some of the different components that you’ve put together from different vendors to create your IT infrastructure.

This model of infrastructure design, combining components from different vendors, has been around for at least a couple of decades. Virtualization has reduced the hardware footprint, but it added one more component, the hypervisor, to the overall mix. As component technologies like compute and storage have evolved alongside the rise of virtualization, they have been modified to function together but have not necessarily been optimized for efficiency.

Take storage, for example. SANs were an obvious fit for virtualization early on. However, the layers of storage protocols and virtual storage appliances used to bolt the SAN onto virtualization were never efficient. If not for SSD storage, the performance of these systems would be unacceptable at best. But IT continues to implement these architectures because it has been done this way for so long, regardless of the inherent inefficiencies. Luckily, the next generation of infrastructure has arrived in the form of hyperconvergence to break this routine.

Hyperconverged infrastructure (HCI) combines compute, storage, virtualization, and even disaster recovery into a single appliance that can be clustered for high availability.  No more purchasing all of the components separately from different vendors, no more making sure all of the components are compatible, and no more dealing with support and maintenance from multiple vendors on different schedules.

Not all HCI systems are equal, though, as some still rely on separate components. Some use third-party hypervisors that require separate licensing. Others still adhere to SAN-style architectures built on virtual storage appliances (VSAs) or other inefficient storage layers that consume excessive resource overhead and depend on SSD caching to mask those inefficiencies.

Not only does HCI reduce vendor management and complexity, but when done correctly, it embeds storage in the hypervisor and presents it to VM workloads as direct-attached, block-access storage. This significantly improves storage I/O performance for virtualization. The architecture delivers excellent performance even on spinning disk, so when SSD is added as a second storage tier, performance improves further still. And because the storage is included in the appliance, there is no separate SAN appliance to manage.

HCI goes even further in simplifying IT infrastructure by allowing the whole system to be managed from a single interface. Because the architecture is managed as a single unit and prevalidated, there is no effort spent making sure the various components work together. When the system is truly hyperconverged, including the hypervisor, there is greater control over automation, so software and firmware updates can be applied without disruption to running VMs. And for scaling out, new appliances can be added to a cluster without disruption as well.
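As an illustration of what that non-disruptive update flow looks like in principle, here is a generic sketch of the rolling-update pattern. Every function and node name in it is a hypothetical placeholder, not an HC3 or any other vendor's actual API.

```python
# A generic sketch of a rolling, non-disruptive update cycle. Every function
# and node name below is a hypothetical placeholder, not a real product API.
import time

NODES = ["node1", "node2", "node3"]  # hypothetical three-node cluster

def evacuate_vms(node: str) -> None:
    # Live-migrate running VMs from this node to the rest of the cluster.
    print(f"live-migrating VMs off {node}")

def apply_update(node: str) -> None:
    # Apply the software/firmware update to the now-empty node.
    print(f"updating {node}")

def wait_until_healthy(node: str) -> None:
    # Block until the node has rejoined the cluster and storage has resynced.
    print(f"waiting for {node} to rejoin the cluster")
    time.sleep(1)

def rolling_update(nodes) -> None:
    # One node at a time, so VMs and pooled storage stay available throughout.
    for node in nodes:
        evacuate_vms(node)
        apply_update(node)
        wait_until_healthy(node)

if __name__ == "__main__":
    rolling_update(NODES)
```

The point of the pattern is that workloads never stop; each node is drained, updated, and returned to the pool while the rest of the cluster carries the load.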

The result of these simplifications and improvements is an infrastructure that can be deployed quickly, scaled easily, and managed with very little effort. It embodies many of the benefits of the cloud, where the infrastructure is virtually transparent. Instead of spending time managing hardware and infrastructure, administrators can focus on managing apps and processes.

Infrastructure should no longer require certified storage experts, virtualization experts, or any kind of hardware experts. Administrators should no longer need entire weekends or month-long projects to deploy and integrate infrastructure or spend sleepless nights dealing with failures. Hyperconvergence breaks the cycle of infrastructure as a variety of different vendors and components. Instead, it makes infrastructure a simple, available, and trusted commodity.


What is Hyperconvergence?

Hyperconvergence is a term that sort of crept up on the market and has since stuck. It’s used to describe products like our HC3.  But what does hyperconvergence actually mean?

Active blogger and technologist Stevie Chambers wrote a well-thought-out article in which he defined hyperconvergence as an extension of the overall convergence trend, collapsing the datacenter into an appliance form factor. This is certainly true of the solutions that are available today. However, I believe he missed a key point (perhaps intentionally, as Stevie was in the CTO group at VCE when that blog was written).

VIDEO: The Twelve Networking Truths – RFC 1925 Truth No. 5

RFC 1925 Truth No. 5: “It is always possible to aglutenate multiple separate problems into a single complex interdependent solution. In most cases this is a bad idea.”

So to paraphrase truth No. 5, complexity is a bad thing. However, this is exactly how virtualized infrastructures are built today.