Wed, 27th Jun 2012

Network monitoring and security tools are already struggling to keep pace with current network speeds of 1Gbps to 10Gbps, and as organisations step up to optic-enabled speeds of 40Gbps to 100Gbps, chaos looms.

Even a small increase in existing network speeds and capacity can prove difficult for the tools currently in use, and they will simply be unable to function as speeds accelerate towards a hundred times today's rates. Growing demands in speed and scale, combined with the need to adapt to emerging technologies and strategic objectives, are prompting organisations to look at new ways to monitor and secure their networks.

But help is available. A visibility fabric with flow mapping technology directs only the required data to each monitoring or security tool, ensuring that the right tool or tools receive exactly the traffic they need. In this way, monitoring devices installed on a 10Gbps network can keep doing their job as speeds rise towards 100Gbps, helping enterprises transition to the next evolution of network communications with relative ease.

Monitoring next generation networks has become a focus of concern for many businesses, with security threats and regulatory compliance issues leading the list.

But as network speeds accelerate toward 40Gbps and eventually 100Gbps, monitoring has become increasingly difficult. Tools that are accurate on a 10Gbps link simply cannot handle the data flow inherent in a 40Gbps link or higher. And when monitoring becomes unreliable, business-critical, revenue-producing services are compromised.
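
To put that in perspective, here is a rough back-of-the-envelope calculation (a sketch, not a figure from the article) of the worst-case packet rates a tool must sustain at each line rate, assuming minimum-size 64-byte Ethernet frames plus the standard 8-byte preamble and 12-byte inter-frame gap:

    # Worst-case packet rates per line rate, assuming minimum-size 64-byte
    # Ethernet frames plus 8 bytes of preamble and a 12-byte inter-frame gap,
    # i.e. 84 bytes (672 bits) on the wire per frame.
    BITS_PER_MIN_FRAME = (64 + 8 + 12) * 8  # 672 bits

    for gbps in (1, 10, 40, 100):
        pps = gbps * 1e9 / BITS_PER_MIN_FRAME
        print(f"{gbps:>3} Gbps -> ~{pps / 1e6:.1f} million packets per second")

    # Approximate output:
    #   1 Gbps -> ~1.5 million packets per second
    #  10 Gbps -> ~14.9 million packets per second
    #  40 Gbps -> ~59.5 million packets per second
    # 100 Gbps -> ~148.8 million packets per second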

Without accurate monitoring, network security is also potentially at risk as data leaks and cyber attacks present a constant threat. Network bottlenecks and outages can be prevented by continuous monitoring of bandwidth usage, while analysis and regular data audits give organisations the information they need to adhere to compliance regulations.

The complexity of monitoring also increases as more network architectures adopt emerging technologies such as virtualisation and cloud computing. Traffic can flow asymmetrically through a network, bypassing traditional monitoring tools altogether or at least becoming harder to track. While networks become larger, faster and far more complex, budgets often tighten and are spread in different directions, and this confluence of factors makes network monitoring that much more difficult.

The early method of monitoring, attaching tools directly to links or to each mirror/SPAN port on every switch, has proven costly and produces 'tunnel vision' in which the tools see only a portion of the traffic. With this method a device can also quickly become 'oversubscribed' when it receives more traffic than it can handle, leading to inaccurate results. And each time a tool needs replacing, a costly and time-consuming restructuring plan is required to avoid compromising the production network.
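
The oversubscription problem is easy to see with a simple illustration (the feed rates below are assumed figures, not from the article): once the combined mirrored traffic exceeds what the attached tool can process, the excess is simply dropped before it can be analysed.

    # Oversubscription: several mirrored feeds are sent to one tool whose
    # capacity they jointly exceed. The feed rates are assumed example
    # figures in Gbps.
    span_feeds_gbps = [6.0, 4.5, 7.2]   # mirrored traffic from three switches
    tool_capacity_gbps = 10.0           # what the attached tool can process

    offered = sum(span_feeds_gbps)
    dropped = max(0.0, offered - tool_capacity_gbps)

    print(f"Offered load {offered:.1f} Gbps against a {tool_capacity_gbps:.1f} Gbps tool")
    print(f"Roughly {dropped / offered:.0%} of the traffic is lost before analysis")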

The next evolution in network monitoring addresses these issues. The traffic visibility fabric is a layer that sits between network monitoring tools and the network itself, managing the amount of traffic that flows to the monitoring tools. This traffic aggregation method scales from just a few connections to over a thousand, allowing traffic to be monitored and secured from a centralised network tool farm.
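
The sketch below (a toy model for illustration only; it does not reflect any particular vendor's product or API) shows the idea of that intermediate layer: traffic arriving on many network-facing ports is funnelled, out of band, to a small centralised farm of tool ports.

    from collections import defaultdict

    class VisibilityFabric:
        """Toy model of the aggregation layer that sits between the
        production network and a centralised farm of monitoring tools."""

        def __init__(self):
            # network-facing port -> tool ports subscribed to its traffic
            self.subscriptions = defaultdict(list)

        def aggregate(self, network_port, tool_port):
            """Send everything seen on network_port to tool_port."""
            self.subscriptions[network_port].append(tool_port)

        def forward(self, network_port, packet):
            """Deliver a mirrored packet to every subscribed tool port."""
            for tool_port in self.subscriptions[network_port]:
                print(f"{network_port} -> {tool_port}: {packet}")

    # Dozens of tapped links can feed a handful of centralised tools.
    fabric = VisibilityFabric()
    fabric.aggregate("tap-core-1", "central-analyser")
    fabric.aggregate("tap-core-2", "central-analyser")
    fabric.forward("tap-core-1", {"proto": "TCP", "dst_port": 443})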

This not only lowers capital expenditure but reduces operating expenditure as well. Because the tools can be managed in an out-of-band network, organisations can be agile and responsive to today's dynamic environment without impacting the production network, and troubleshooting and security issues can be resolved quickly.

However, traffic aggregation and simple filtering alone do not offer a full solution. Problems still arise when monitoring tools receive unwanted packets within the aggregated and filtered traffic. For example, a VoIP analyser would receive all of the network traffic, not just the VoIP traffic it needs to see, and the problem only worsens as speeds increase.
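
In the same illustrative spirit as the sketch above (the port numbers are assumptions chosen purely for illustration: 5060 for SIP signalling and a common 16384-32767 range for RTP media), a simple protocol filter shows the difference between handing the analyser everything and handing it only what it can use:

    # Without filtering, every aggregated packet reaches the VoIP analyser.
    # A filter applied in the fabric forwards only the traffic it needs.
    def is_voip(packet):
        if packet.get("proto") != "UDP":
            return False
        port = packet.get("dst_port", 0)
        return port == 5060 or 16384 <= port <= 32767

    traffic = [
        {"proto": "TCP", "dst_port": 443},    # web traffic, irrelevant to the analyser
        {"proto": "UDP", "dst_port": 5060},   # SIP signalling
        {"proto": "UDP", "dst_port": 20000},  # RTP media
    ]

    for packet in traffic:
        if is_voip(packet):
            print("forward to VoIP analyser:", packet)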

Flow mapping technology, the next innovation in network monitoring, is an advanced filtering architecture designed to include or exclude traffic on connections between network ports and tool ports. Users decide which traffic should be sent to the monitoring tool, and how it should be handled when it arrives. In this way, tools can operate more efficiently, are easier to manage, and are still reliable and relevant at vastly increased network speeds.

Flow mapping technology combines ingress and egress port traffic filters built on up to 13 unique user-selected criteria and 'maps' the matching traffic to one or more tool ports, allowing discrete streams to be delivered to the exact location or locations the user specifies.
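
Conceptually, a flow map can be thought of as an ordered set of rules, each matching on several header fields and steering the matching traffic to one or more tool ports. The sketch below is a hypothetical model of that idea, not the vendor's implementation; the criteria, values and port names are invented for illustration.

    # Toy flow map: each rule matches on a few header fields (a small subset
    # of the criteria a real flow map supports) and steers matching traffic
    # to one or more tool ports. All names and values are invented.
    flow_map = [
        {"match": {"proto": "UDP", "dst_port": 5060}, "tool_ports": ["voip-analyser"]},
        {"match": {"vlan": 200},                      "tool_ports": ["ids", "packet-recorder"]},
    ]

    def matches(packet, criteria):
        """A packet matches a rule only if every specified field agrees."""
        return all(packet.get(field) == value for field, value in criteria.items())

    def map_packet(packet):
        """Return the tool ports that should receive this packet."""
        destinations = []
        for rule in flow_map:
            if matches(packet, rule["match"]):
                destinations.extend(rule["tool_ports"])
        return destinations

    print(map_packet({"proto": "UDP", "dst_port": 5060, "vlan": 10}))  # ['voip-analyser']
    print(map_packet({"proto": "TCP", "dst_port": 80, "vlan": 200}))   # ['ids', 'packet-recorder']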

By taking filtering to new levels, flow mapping enables an organisation to maintain the reliability, security and integrity of its network as communication technology advances at an unprecedented rate.

By Paul Hooper, vice president of marketing for Gigamon.