Clarity is key to cutting data centre downtime
Tue, 3rd Dec 2013

The sheer scale and complexity of diverse technologies working together at incredible speeds in a data centre often cause bottlenecks and outages.

Yet when IT professionals seek to monitor this complexity, the tools they choose often intensify the chaos.

Since a data centre is the lifeblood of any large organisation, with its data providing critical services to internal and external users, downtime causes headaches for operators and their clients and impacts performance against service level agreements (SLAs).

According to Gigamon, it is essential to monitor and analyse all traffic passing through the networks in order to ensure that a data centre operates as efficiently as possible.

It is equally necessary to select monitoring tools that assist the process without increasing complexity. Yet the most common approach is to implement networks with a myriad of monitoring and security tools.

Having a monitoring tool connected to every critical data path is not an efficient practice, nor is it cost-effective.

Service providers often run into trouble when monitoring and security tools become over-burdened by the sheer amount of data they are receiving. This is likely to lead to dropped packets and the potential loss of critical information, as well as service disruptions and unscheduled downtime.

When network operators begin upgrading to higher speed networks, monitoring and security tools are placed under further pressure to perform at levels at which they were not designed to operate.

Visibility is key

The key to creating improved monitoring capabilities lies in building a tool capable of filtering, aggregating, consolidating and replicating data to the monitoring and security tools already in place.

But since millions of traffic flows, thousands of events and hundreds of changes occur throughout the infrastructure each day, securing and monitoring all this activity effectively calls for a high degree of clarity. In fact, it requires a solution that will deliver pervasive, intelligent and dynamic visibility.

The tools selected need to be capable of recognising when specific traffic is significant to more than just one management system, and must see across physical and virtual boundaries in order to provide the clarity required to secure, maintain and support the whole data centre.

The continual evolution of a data centre network adds complexity to its management. New applications, services and tools are added frequently, creating a need for constant re-evaluation in the management of each. This often involves a level of downtime.

But when all the data that needs to be monitored is routed through a traffic visibility node, users can easily connect new tools or monitor new applications without disturbing existing monitoring connections.

The data is simply filtered automatically and routed to the most appropriate tool. With this technology in place, new tools can be added without any lengthy change management processes and there is no downtime.
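As a rough illustration of the idea (a sketch only, not a description of any vendor's actual product), a visibility node can be thought of as a rule table that maps traffic filters onto the tools that should receive the matching packets. The Python sketch below uses entirely hypothetical names and shows how a new tool can be registered with one extra rule, leaving the feeds to the existing tools untouched.

```python
# Minimal sketch of a traffic visibility node as a filter-to-tool rule table.
# All class, rule and tool names are hypothetical illustrations.

class VisibilityNode:
    def __init__(self):
        # Each rule: (match function, list of tools that should receive the packet)
        self.rules = []

    def add_rule(self, match, tools):
        """Register a filter; existing rules and their tool feeds are untouched."""
        self.rules.append((match, tools))

    def forward(self, packet):
        """Replicate a packet to every tool whose filter matches it."""
        for match, tools in self.rules:
            if match(packet):
                for tool in tools:
                    tool.receive(packet)

class Tool:
    def __init__(self, name):
        self.name = name

    def receive(self, packet):
        print(f"{self.name} <- {packet}")

node = VisibilityNode()
ids = Tool("intrusion-detection")
apm = Tool("app-performance-monitor")

# Existing monitoring connections.
node.add_rule(lambda p: p.get("port") == 443, [ids])
node.add_rule(lambda p: p.get("vlan") == 10, [apm])

# A new tool connected later: one extra rule, no change to the rules above
# and no downtime for the tools already in place.
voip_probe = Tool("voip-probe")
node.add_rule(lambda p: p.get("dscp") == 46, [voip_probe])

node.forward({"port": 443, "vlan": 10, "dscp": 0})
```

The point of the design is that change is additive: connecting a new probe never requires rewiring or reconfiguring the feeds that already serve other tools.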

By using filtering technology, a traffic visibility tool will virtually eliminate the problem of tools becoming overburdened. Filtering allows a user to reduce the amount of data being sent to a tool so that it receives only the data it needs to see, rather than processing vast amounts of unnecessary information.

This eliminates the issue of dropped packets and compromised analysis, and also improves the longevity and associated return on investment of existing management and analysis tools.
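To put the overload point in concrete terms, the sketch below (again Python, with made-up traffic figures) compares the load offered to a hypothetical 1Gbps analysis tool with and without filtering at the visibility layer.

```python
# Hedged illustration: how filtering keeps a tool's offered load below its capacity.
# All figures and traffic class names are made up for the example.

TOOL_CAPACITY_MBPS = 1_000          # a 1Gbps analysis tool

# Hypothetical traffic mix on a busy link, in Mbps.
traffic = {
    "storage-replication": 4_200,
    "video-streaming":     2_500,
    "web-frontend":          650,
    "voip":                  180,
}

# Without filtering, the tool is offered everything and drops packets.
offered_unfiltered = sum(traffic.values())

# With filtering, only the classes this tool actually analyses are forwarded.
needed_by_tool = {"web-frontend", "voip"}
offered_filtered = sum(v for k, v in traffic.items() if k in needed_by_tool)

print(f"unfiltered: {offered_unfiltered} Mbps -> exceeds capacity, packets dropped")
print(f"filtered:   {offered_filtered} Mbps -> within {TOOL_CAPACITY_MBPS} Mbps capacity")
```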

As networks are upgraded to 10Gbps and higher, data centres often operate under-utilised network links that require monitoring before they can be used to their full capability, but lack the tools capable of this.

Through the use of traffic filtering, data passing through these large networks can be reduced to levels manageable by 1Gbps tools, significantly increasing visibility into the data. This also eliminates the need to purchase monitoring tools specifically for 10Gbps networks.
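To put rough numbers on that: if the traffic a particular tool actually cares about (say, a single application VLAN) accounts for 8% of a fully loaded 10Gbps link, filtering brings the feed down to roughly 800Mbps, well within the capacity of an existing 1Gbps tool. The figures are illustrative only.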

Employing a network traffic visibility solution allows service providers to see exactly what is happening on the network at all times – from threats to performance issues – and maximise data centre performance, while lowering the total cost of management.

Increased visibility allows operators to see what they would otherwise miss, keeps downtime to a minimum and helps them adhere to those all-important SLAs.