Tag Archives: infrastructure

Scale with Increased Capacity

2016 has been a remarkable year for Scale Computing and one of our biggest achievements was the release of the HC1150 appliance. The HC1150 significantly boosted the power and capacity of our HC1000 series and featured hybrid flash storage at a very affordable price. As a result, the HC1150 is our most popular HC3 model, but of course we couldn’t stop there.

First, we have begun offering 8TB drives in our HC1000 series appliances, nearly doubling the maximum storage capacity (and exactly doubling it on the HC1100). Data sets are ever increasing in size, and the larger drives mean you can grow capacity even faster and more affordably, one node at a time. The unique ability of HC3 to mix and match nodes of varying capacity (and across hardware generations!) means your storage can grow as needed each time you expand your cluster.

Second, we have introduced a new HC1150D appliance for pre-sale, which doubles the CPU capacity with a second physical processor. CPU can often be the performance bottleneck when scaling out the number of VMs supported. With this added CPU capacity, the HC1150D scales out an HC3 cluster to support more compute power across a greater number of VMs. The HC1150D also doubles the maximum RAM configuration to 512GB per appliance.
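The mix-and-match capacity math is easy to sketch. In the Python example below, the node specs, oversubscription ratio, and per-VM requirements are hypothetical placeholders rather than published HC3 figures; the point is simply that the most constrained resource (CPU, RAM, or storage) sets the VM ceiling, which is why adding a dual-processor node can raise that ceiling when CPU is the bottleneck.

```python
# Back-of-the-envelope capacity planning for a cluster of mixed nodes.
# All numbers below are hypothetical placeholders, not published
# Scale Computing specifications -- substitute your own values.

NODES = [
    # (name, physical_cores, ram_gb, raw_storage_tb)
    ("node-1", 8, 64, 16),    # older, smaller-capacity node
    ("node-2", 8, 128, 32),   # larger-capacity node added later
    ("node-3", 16, 256, 48),  # dual-processor node added later still
]

VCPU_OVERSUBSCRIPTION = 4     # vCPUs scheduled per physical core
PER_VM = {"vcpus": 2, "ram_gb": 8, "storage_tb": 0.5}

def max_vms(nodes, per_vm, oversub):
    """Return the VM count allowed by the most constrained resource."""
    total_cores = sum(cores for _, cores, _, _ in nodes)
    total_ram = sum(ram for _, _, ram, _ in nodes)
    total_storage = sum(storage for _, _, _, storage in nodes)

    by_cpu = (total_cores * oversub) // per_vm["vcpus"]
    by_ram = total_ram // per_vm["ram_gb"]
    by_storage = int(total_storage / per_vm["storage_tb"])
    limits = {"cpu": by_cpu, "ram": by_ram, "storage": by_storage}
    return min(limits.values()), limits

if __name__ == "__main__":
    count, limits = max_vms(NODES, PER_VM, VCPU_OVERSUBSCRIPTION)
    print(f"Supported VMs: {count} (per-resource limits: {limits})")
```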

Below is a preview of the new configuration ranges and starting pricing for the HC1000 series, including the HC1150D.

[Table: HC1000 series configuration ranges and starting pricing, including the HC1150D]

Scale Computing is committed to giving our customers the best virtualization infrastructure on the market, and we will keep integrating greater capacity and computing power into our HC3 appliances. Our focus on simplicity, scalability, and availability will continue to drive our innovation to make IT infrastructure more affordable for you. Look for more announcements to come.


What Do DDoS Attacks Mean for Cloud Users?

Last Friday, a DDoS attack disrupted major parts of the internet in both North America and Europe. The attack appears to have been largely targeted at DNS provider Dyn, disrupting access to major services such as Level 3, Zendesk, Okta, GitHub, PayPal, and more, according to sources like Gizmodo. This kind of botnet-driven DDoS attack is a harbinger of future attacks that can be carried out through an increasingly connected world of Internet of Things (IoT) devices with poor security.

[Image: DownDetector outage map for the Level 3 disruption, October 2016]

This disruption highlights a particular vulnerability for businesses that have chosen to rely on cloud-based services like IaaS, SaaS, or PaaS. The ability to connect to these services is critical to business operations, and even though a service may be running, if users cannot connect, it is still downtime. What is particularly scary about these attacks, especially for small and midmarket organizations, is that they become victims of circumstance from attacks directed at larger targets.
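For organizations that do rely on hosted services, it is worth monitoring that dependency explicitly and distinguishing "the provider's name cannot be resolved" from "the service itself is unreachable." Below is a minimal sketch using only the Python standard library; the hostnames are arbitrary examples, not part of any product.

```python
# Minimal availability probe for an external SaaS/IaaS dependency.
# Separates DNS resolution failures (as in a DNS-provider outage)
# from TCP connectivity failures.

import socket

def probe(hostname: str, port: int = 443, timeout: float = 5.0) -> str:
    try:
        addr = socket.gethostbyname(hostname)  # DNS resolution step
    except socket.gaierror:
        return f"DNS FAILURE: {hostname} could not be resolved"
    try:
        # TCP reachability step (e.g. HTTPS port)
        with socket.create_connection((addr, port), timeout=timeout):
            return f"OK: {hostname} ({addr}) reachable on port {port}"
    except OSError as exc:
        return f"CONNECT FAILURE: {hostname} resolved to {addr} but connection failed ({exc})"

if __name__ == "__main__":
    for host in ("example-saas-provider.com", "github.com"):
        print(probe(host))
```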

As the IoT becomes more of a reality, with more and more devices of questionable security joining the internet, the potential for these attacks and their severity will only increase. I recently wrote about how to compare cloud computing and on-prem hyperconverged infrastructure (HCI) solutions, and one of the decision points was reliance on the internet. That reliance is not only a matter of choosing a stable internet provider, but also of depending on the stability of the internet in general, where attacks can target any number of different services.

Organizations running services on-prem were not affected by this attack because it did not touch internal network environments. Choosing to run infrastructure and services internally mitigates the risk of outages caused by external forces, such as collateral damage from attacks on service providers. Many organizations choose cloud services for simplicity and convenience because traditional IT infrastructure, even with virtualization, is complex and can be difficult to implement, particularly for small and midsize organizations. Only recently has hyperconverged infrastructure made on-prem infrastructure as simple to use as the cloud.

It is still uncertain how organizations will ultimately balance their IT infrastructure between on-prem and cloud in what is loosely called hybrid cloud; most likely the balance will keep evolving as new technology emerges. At the moment, however, organizations can choose easy-to-use hyperconverged infrastructure for increased security and stability, or go with cloud providers for completely hands-off management and reliance on a third party.

As I mentioned in my cloud vs. HCI article, there are valid reasons to go with either, and the right solution may well be a combination of the two. Organizations should be aware that on-prem IT infrastructure no longer needs to be the complicated mess of server vendors, storage vendors, hypervisor vendors, and DR solution vendors. Hyperconverged infrastructure is a viable option for organizations of any size to keep services on-prem, stable, and secure against collateral DDoS damage.


IT Infrastructure: Deploy. Integrate. Repeat.

Have you ever wondered if you are stuck in an IT infrastructure loop, continuously deploying the same types of components and integrating them into an overall infrastructure architecture? Servers for CPU and RAM, storage appliances, hypervisor software, and disaster recovery software/appliances are just some of the different components that you’ve put together from different vendors to create your IT infrastructure.

This model of infrastructure design, combining components from different vendors, has been around for at least a couple of decades. Virtualization has reduced the hardware footprint, but it added one more component, the hypervisor, to the mix. As component technologies like compute and storage have evolved alongside virtualization, they have been adapted to work together but have not necessarily been optimized for efficiency.

Take storage, for example. SANs were an obvious fit for virtualization early on. However, the layers of storage protocols and virtual storage appliances used to combine the SAN with virtualization were never efficient. If not for SSD storage, the performance of these systems would be unacceptable at best. But IT continues to implement these architectures because it has been done this way for so long, regardless of the inherent inefficiencies. Luckily, the next generation of infrastructure has arrived in the form of hyperconvergence to break this routine.

Hyperconverged infrastructure (HCI) combines compute, storage, virtualization, and even disaster recovery into a single appliance that can be clustered for high availability. No more purchasing all of the components separately from different vendors, no more making sure all of the components are compatible, and no more dealing with support and maintenance from multiple vendors on different schedules.

Not all HCI systems are equal, though, as some still rely on separate components. Some use third-party hypervisors that require separate licensing. Others still adhere to SAN architectures built on virtual storage appliances (VSAs) or other inefficient storage layers that consume excessive resources and depend on SSD caching to overcome their inefficiencies.

Not only does HCI reduce vendor management and complexity, but when done correctly it embeds storage in the hypervisor and presents it as direct-attached, block-access storage to VM workloads. This significantly improves storage I/O performance for virtualization. The architecture delivers excellent performance even on spinning disk, and when SSD is added as a second storage tier, performance improves further. Also, because the storage is included in the appliance, there is no separate SAN appliance to manage.

HCI goes even further in simplifying IT infrastructure by allowing the whole system to be managed from a single interface. Because the architecture is prevalidated and managed as a single unit, no effort is spent making sure the various components work together. When the system is truly hyperconverged, including the hypervisor, automation can be controlled end to end, so software and firmware updates can be applied without disruption to running VMs. And for scaling out, new appliances can be added to the cluster without disruption as well.
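As an illustration of what "updates without disruption" means in practice, here is a rough sketch of the rolling-update pattern a hyperconverged cluster can automate. The cluster, node, and VM calls below are hypothetical placeholders, not the actual HC3 API; they simply show the drain, update, and rebalance loop.

```python
# Sketch of a rolling, non-disruptive cluster update: drain one node at a
# time so running VMs are never interrupted. All objects and methods here
# are hypothetical placeholders used to illustrate the pattern.

def rolling_update(cluster, firmware_image):
    for node in cluster.nodes():
        # Live-migrate every VM off the node before touching it.
        for vm in node.running_vms():
            target = cluster.pick_target_node(exclude=node)
            cluster.live_migrate(vm, target)

        node.enter_maintenance_mode()
        node.apply_update(firmware_image)   # firmware/software update
        node.reboot_and_wait_healthy()
        node.exit_maintenance_mode()

        # Let the cluster rebalance before moving on to the next node.
        cluster.wait_for_rebalance()
```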

The result of these simplifications is an infrastructure that can be deployed quickly, scaled easily, and managed with little effort. It embodies many of the benefits of the cloud, where the infrastructure is virtually transparent. Instead of spending time managing hardware and infrastructure, administrators can focus on managing apps and processes.

Infrastructure should no longer require certified storage experts, virtualization experts, or any kind of hardware experts. Administrators should no longer need entire weekends or month-long projects to deploy and integrate infrastructure, or spend sleepless nights dealing with failures. Hyperconvergence breaks the cycle of stitching infrastructure together from a variety of vendors and components. Instead, it makes infrastructure a simple, available, and trusted commodity.


What is Hyperconvergence?

Hyperconvergence is a term that sort of crept up on the market and has since stuck. It’s used to describe products like our HC3. But what does hyperconvergence actually mean?

Active blogger and technologist Stevie Chambers wrote a well-thought-out article in which he defined hyperconvergence as an extension of the overall convergence trend, collapsing the datacenter into an appliance form factor. This is certainly true of the solutions that are available today. However, I believe he missed a key point (perhaps intentionally, as Stevie was in the CTO group at VCE when that blog was written).

VIDEO: The Twelve Networking Truths – RFC 1925 Truth No. 5

RFC 1925 Truth No. 5: “It is always possible to agglutinate multiple separate problems into a single complex interdependent solution. In most cases this is a bad idea.”

So to paraphrase truth No. 5, complexity is a bad thing. However, this is exactly how virtualized infrastructures are built today.