
How fatter cloud is driving flatter networks

Wed, 1st Feb 2012

The notion of a network fabric for the cloud has been gaining momentum. What's driving it is that virtual machines (VMs) are shifting from server to server, creating an increase in east-west traffic patterns in the cloud. Traditional multi-tiered network architectures are not optimised for these traffic patterns; they have served north-south traffic better, and even there the added latency and oversubscription of multiple tiers is affecting application performance. Additionally, the increasing adoption of Ethernet-based storage is driving the need for more predictable performance and lower latency.

The result is a move towards a 'flatter', fabric-oriented architecture with two components. The first is a network architecture that simply has fewer tiers, where servers, virtual machines and storage are connected to each other over, at most, one or two network hops. This leads to lower latency, higher bandwidth and more predictable performance.

The second component is the ability to utilise all available bandwidth through multiple active paths. Traditional networks used spanning tree protocol to resolve network loops, which in turn meant blocking redundant links. Technologies such as Multi-System Link Aggregation allow devices to be dual-homed into the network for resiliency while utilising the bandwidth available on both links. Newer technologies such as TRILL and Shortest Path Bridging (SPB) also allow multiple active links, but require newer equipment that supports the new packet formats these protocols introduce.

Additionally, network and storage traffic have traditionally been carried on separate fabrics such as Fibre Channel and Ethernet. Even where iSCSI or NFS-type technology is used, many vendors deploy a separate Ethernet-based storage network for performance and predictability, adding to deployment costs in terms of additional network interface cards (NICs), cabling and so on. With the availability of data center bridging (DCB) technology, truly converging on a common Ethernet fabric becomes a reality. DCB separates traffic on a common Ethernet fabric into multiple traffic classes, each of which can be flow controlled and assigned bandwidth parameters independently. For example, storage traffic such as iSCSI can have its own traffic class with its own bandwidth guarantees, flow controlled independently of other traffic on the same Ethernet fabric. This isolates storage traffic from the effects of busy data traffic, leading to more predictable storage performance. In effect, it allows flatter fabric architectures to carry both data and storage on a common Ethernet fabric, reducing costs and simplifying deployment.

Finally, with virtualisation ratios increasing, the need for bandwidth at the server network edge is constantly growing, and running storage on that common Ethernet fabric pushes bandwidth requirements at the initiator or target even higher. All of this is driving towards a high-density 10GbE network edge, with the network core moving to 40GbE. In other words, 40GbE core technology will form the foundation of a high capacity, low latency, converged Ethernet fabric, along with 10GbE at the server-network edge.
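To make the DCB behaviour described above more concrete, here is a minimal, purely illustrative Python sketch of how ETS-style bandwidth shares might carve up a single 10GbE edge link, with any unused guarantee redistributed to busier classes. The traffic class names, guarantees and demand figures are assumptions for the example, not vendor defaults.

# Illustrative only: a toy model of DCB/ETS-style bandwidth allocation on
# one 10GbE server-edge link. Class names, guarantees and demands are
# assumptions made for this sketch.

LINK_GBPS = 10.0

classes = {
    # name:          minimum guarantee (% of link), current demand (Gbps)
    "lan_data":      {"guarantee_pct": 50, "demand_gbps": 7.0},
    "iscsi_storage": {"guarantee_pct": 40, "demand_gbps": 3.0},
    "management":    {"guarantee_pct": 10, "demand_gbps": 0.2},
}

def allocate(classes, link_gbps):
    """Serve each class up to its guarantee first, then share any spare
    capacity among classes whose demand exceeds their guarantee."""
    alloc, spare = {}, 0.0
    for name, c in classes.items():
        guaranteed = link_gbps * c["guarantee_pct"] / 100.0
        alloc[name] = min(guaranteed, c["demand_gbps"])
        spare += guaranteed - alloc[name]
    hungry = {n: c["demand_gbps"] - alloc[n]
              for n, c in classes.items() if c["demand_gbps"] > alloc[n]}
    total_want = sum(hungry.values())
    for name, want in hungry.items():
        alloc[name] += min(want, spare * want / total_want)
    return alloc

for name, gbps in allocate(classes, LINK_GBPS).items():
    print(f"{name}: {gbps:.1f} Gbps")

In this toy example the iSCSI class receives its full demand even while the data class is oversubscribed, which is the predictability argument in a nutshell.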
A key element of this architecture is the fabric's virtualisation awareness: the ability of the fabric to provide insight, control and automation across the virtual machine life-cycle. As an example, any provisioning within the network to better meet the SLAs associated with virtual machines needs to support 'Follow the VM' models, where network policies automatically track and follow a VM as it is moved from server to server. Other constructs, such as virtual port-based network provisioning and network-based VM analytics, are also key elements of this new cloud fabric architecture.

In summary, the increase in east-west traffic, the growing need for bandwidth at the server network edge due to rising virtualisation ratios, and the move towards running storage and data on a common Ethernet network are all driving towards a high performance, low latency, virtualisation-aware, converged Ethernet fabric network architecture for the cloud.
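For a more concrete picture of the 'Follow the VM' model described above, here is a minimal, purely hypothetical Python sketch in which a port profile is bound to the VM rather than to a fixed switch port and is re-applied whenever the hypervisor reports a migration. All names and the controller interface are invented for illustration.

# Illustrative only: a hypothetical 'Follow the VM' controller. The class
# names, API and port identifiers are invented for this sketch and do not
# represent any vendor's product.

from dataclasses import dataclass

@dataclass
class PortProfile:
    vlan: int
    qos_class: str
    acl: str

class FabricController:
    def __init__(self):
        self.profiles = {}   # vm name -> PortProfile (policy bound to the VM)
        self.location = {}   # vm name -> (switch, port) it is attached to

    def provision(self, vm, profile, switch, port):
        """Bind the policy to the VM itself, then push it to its current port."""
        self.profiles[vm] = profile
        self.location[vm] = (switch, port)
        self._apply(vm, switch, port)

    def on_vm_migrated(self, vm, new_switch, new_port):
        """Hypervisor reports a live migration: the policy follows the VM."""
        old_switch, old_port = self.location[vm]
        self._remove(vm, old_switch, old_port)
        self.location[vm] = (new_switch, new_port)
        self._apply(vm, new_switch, new_port)

    def _apply(self, vm, switch, port):
        p = self.profiles[vm]
        print(f"{switch}:{port} <- VLAN {p.vlan}, QoS {p.qos_class}, ACL {p.acl} ({vm})")

    def _remove(self, vm, switch, port):
        print(f"{switch}:{port} -> policy for {vm} removed")

# The web VM keeps its VLAN, QoS and ACL settings as it moves between leaf switches.
ctrl = FabricController()
ctrl.provision("web-vm-01", PortProfile(vlan=120, qos_class="gold", acl="web-only"),
               "leaf-1", "port-10")
ctrl.on_vm_migrated("web-vm-01", "leaf-2", "port-4")

The point is the binding: because the policy travels with the VM, its SLA does not depend on which physical port it happens to sit behind.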

* Shehzad Merchant is Vice President of Technology for Extreme Networks, where he drives strategy and technology direction for advanced networking including LANs and data centres.