Hyperconverged infrastructure: Don’t assume the network will just work
We live in a world where products and services must be delivered now. Enterprises can no longer rely on traditional data centers to keep pace, especially because these legacy systems are costly to run and upgrade.
For these reasons, IT departments are seeking alternative architecture models that not only improve a company's productivity but also reduce its capital and operating expenses.
Increasingly, IT departments are turning to hyperconverged infrastructure (HCI) to do just that. According to the International Data Corporation's Worldwide Quarterly Converged Systems Tracker, full-year sales of hyperconverged systems surpassed US$3.7 billion in 2017, up 64.3% from 2016.
HCI, where the compute, storage, network and virtualisation are converged into a single solution, has become the preferred architecture. The introduction of HCI has fundamentally altered the enterprise storage landscape by offering increased performance, enhanced data services, lower setup costs and simplified operations.
Most importantly, virtualising compute and storage means enterprises can now get better utilisation of processing and storage resources from each rack with HCI. Enterprises can also scale out incrementally, adding modules that are auto-provisioned into the compute and virtualisation clusters; the centrally managed storage file system and compute hypervisor take care of this.
Ultimately, HCI lets a business concern itself less with its infrastructure and focus instead on the speed of delivery and deployment of its services.
However, when enterprises scale out HCI, networking is either overlooked or assumed to just work. Legacy network design and architectures do not support the HCI model very well and it is important to make conscious networking choices as HCI requirements grow.
What about the network?
Hyperconverged infrastructure designers tend to be highly focused on the server and storage benefits. What is often missed is how HCI shifts traffic patterns, network dependencies, network traffic volumes and network visibility.
HCI drives a highly distributed storage model, with a different file system covering store access, backup and recovery. This brings a unique set of networking requirements, far removed from the common design principles of block- or file-based centralised storage systems such as SAN and NAS. Couple this with highly distributed virtual machine application hosting on the same HCI nodes and across the same network, and all of a sudden the network is very important.
The network has to scale and be resilient when it's handling more traffic with different traffic patterns – sometimes two to four times more than it previously did – especially when it's supporting various packet sizes, security zones, protocols, virtual and physical resource discovery, and more.
The network also needs to provide visibility and telemetry for enterprises within their HCI for troubleshooting, updating, patching, monitoring, analytics, and capacity planning.
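As a concrete illustration of the kind of telemetry this involves, the sketch below derives link utilisation from two interface byte-counter samples, the raw data a monitoring or capacity-planning tool would poll from switches. The function name and the counter values are hypothetical, chosen only for this example.

```python
# Hypothetical sketch: deriving link utilisation from interface byte
# counters, the kind of telemetry an HCI network needs for monitoring,
# analytics and capacity planning. All names and values are illustrative.

def link_utilisation(bytes_t0, bytes_t1, interval_s, link_speed_bps):
    """Return utilisation (0.0-1.0) from two byte-counter samples
    taken interval_s seconds apart on a link of link_speed_bps."""
    bits_transferred = (bytes_t1 - bytes_t0) * 8
    return bits_transferred / (interval_s * link_speed_bps)

# Example: 1.5 GB transferred in 60 s on a 10 Gbit/s link.
util = link_utilisation(0, 1_500_000_000, 60, 10_000_000_000)
print(round(util, 2))  # 0.02, i.e. 2% utilised
```

Trending this figure per link over time is what turns raw counters into capacity-planning input.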
Core to enabling the enterprise network to operate with such flexibility is a software-defined approach. With a controller-based SDN architecture supporting an HCI environment through API integrations, the network can be dynamically and automatically reconfigured to support workload needs without manual, error-prone intervention. This simplifies operations, provides full visibility and telemetry, and enables rapid service chaining at the network level.
The highly dynamic workload-based model means enterprises can now better align IT resources to business priorities, and therefore increase agility and efficiency, as well as accelerate how quickly they deploy applications and draw business value from their data. Just remember: do not overlook networking requirements when scaling out your hyperconverged infrastructure.