
How fatter cloud is driving flatter networks

01 Feb 2012

The notion of a network fabric for the cloud has been gaining momentum. What's driving this notion is that virtual machines (VMs) are shifting from server to server, creating an increase in east-west traffic patterns in the cloud. Traditional multi-tiered network architectures are not optimised for these traffic patterns. Rather, they have served better for north-south traffic, and even there the increased latency and oversubscription of multiple tiers is affecting application performance. Additionally, the increasing adoption of Ethernet-based storage is driving the need for more predictable performance and lower latency.

This is driving the notion of a 'flatter', fabric-oriented architecture with two components. The first is a network architecture that simply has fewer tiers, where servers, virtual machines and storage are connected to each other over, at most, one or two network hops. This leads to lower latency, higher bandwidth and more predictable performance.

The second component is the ability to utilise all available bandwidth through multiple active paths. Traditional networks used the Spanning Tree Protocol to resolve network loops, which in turn resulted in blocking multiple links. Technologies such as Multi-System Link Aggregation (MLAG) allow dual-homing devices in the network for resiliency while utilising the bandwidth available on both links. Newer technologies such as TRILL and Shortest Path Bridging (SPB) also allow utilising multiple active links, but require newer equipment to support the new packet formats that TRILL and SPB introduce.

Additionally, network and storage traffic has traditionally been built on different fabrics, such as Fibre Channel and Ethernet. Even where iSCSI or NFS-type technology is used, many vendors would deploy a separate Ethernet-based storage network for performance and predictability, adding to deployment costs in terms of additional network interface cards (NICs), cabling and so on.
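The active-path point above can be illustrated with some back-of-the-envelope arithmetic. This is a minimal sketch with hypothetical figures (a server dual-homed over two 10GbE uplinks), not a claim about any particular vendor's equipment:

```python
# Illustrative comparison: usable uplink bandwidth for a dual-homed
# server under spanning tree versus an active-active scheme like MLAG.
# The link speed and link count here are assumptions for illustration.
LINK_GBPS = 10
UPLINKS = 2

# Spanning tree breaks the loop by blocking one of the two links,
# so only a single link's worth of bandwidth is ever usable.
stp_usable = LINK_GBPS * 1

# MLAG (and TRILL/SPB within the fabric) keep both links forwarding,
# so the full aggregate bandwidth is available.
mlag_usable = LINK_GBPS * UPLINKS

print(f"STP: {stp_usable} Gbit/s usable, MLAG: {mlag_usable} Gbit/s usable")
```

The same doubling applies at each dual-homed tier, which is why active-active multipathing compounds with the move to fewer tiers.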
With the availability of data centre bridging (DCB) technology, the ability to truly converge on a common Ethernet fabric becomes a reality. DCB allows traffic on a common Ethernet fabric to be separated into multiple traffic classes, with the ability to flow-control each class and assign it bandwidth parameters individually. For example, storage traffic such as iSCSI can have its own traffic class, be assigned its own bandwidth guarantees, and be flow-controlled independently of other traffic on the same Ethernet fabric. This isolates storage traffic from the effects of busy data traffic, leading to more predictable storage performance. In effect, it allows flatter fabric architectures to utilise a common Ethernet fabric for both data and storage, thereby reducing costs and simplifying deployment.

Finally, with virtualisation ratios increasing, the need for bandwidth at the server network edge is constantly growing. Running storage on a common Ethernet fabric also raises bandwidth requirements at the initiator and target. All this is driving towards a high-density 10GbE network edge, with the network core moving to 40GbE. In other words, 40GbE core technology will form the foundation of a high-capacity, low-latency, converged Ethernet fabric, along with 10GbE at the server-network edge.

A key element of this architecture is the fabric's virtualisation awareness: the ability of the fabric to provide insight, control and automation across the virtual machine life-cycle. As an example, any provisioning within the network to better meet SLAs associated with VMs needs to support 'follow the VM' models, where network policies automatically track and follow the VM as it moves from server to server.
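The bandwidth-guarantee mechanism described above (standardised as Enhanced Transmission Selection in IEEE 802.1Qaz) can be sketched as simple percentage shares of a link. The class names and percentages below are hypothetical assumptions for illustration, not figures from the article:

```python
# Sketch of DCB-style traffic classes with per-class bandwidth shares.
# Under congestion, each class is guaranteed its share of the link;
# the shares here (and the 10 Gbit/s link) are illustrative assumptions.

link_gbps = 10
shares_pct = {"storage (iSCSI)": 40, "vm data": 40, "management": 20}

def guaranteed_gbps(shares_pct, link_gbps):
    """Minimum bandwidth each traffic class is guaranteed under load."""
    return {cls: link_gbps * pct / 100 for cls, pct in shares_pct.items()}

for cls, gbps in guaranteed_gbps(shares_pct, link_gbps).items():
    print(f"{cls}: {gbps} Gbit/s guaranteed")
```

The practical effect is the one the article describes: the storage class keeps its guaranteed share no matter how busy the data classes become, which is what makes storage performance predictable on a shared fabric.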
Other constructs, such as virtual port-based network provisioning and network-based VM analytics, are also key elements of this new cloud fabric architecture.

In summary, the increase in east-west traffic, the increased need for bandwidth at the network server edge due to rising virtualisation ratios, and the move towards running storage and data on a common Ethernet network are all driving towards a high-performance, low-latency, virtualisation-aware, converged Ethernet fabric network architecture for the cloud.

* Shehzad Merchant is Vice President of Technology for Extreme Networks, where he drives strategy and technology direction for advanced networking, including LANs and data centres.
