
The cloud community faces industrial scale challenges...

Wed, 5th Mar 2014

No single group has all the expertise needed to create a perfect cloud environment – it calls for expertise in application behaviour, data center design, network and machine virtualization, wide area networking and much more.

Getting all of this working together calls for collaboration, and the CloudEthernet Forum is the way to do it, says CEF President James Walker.

Sometimes society needs to pause, waiting for one single word or symbol that encapsulates its fundamental need, before moving on. Like a seed crystal dropped into a super-saturated solution, the word “cloud” has become the rallying cry for a range of “as a service” business models now spreading like wildfire.

And yet the basic idea – that it makes economic sense to invest heavily in central resources and save money on cheaper access to those resources – has been around since the days of the mainframe computer.

The personal computer almost destroyed that model, as people discovered that they really liked holding their own resources, but it re-emerged with client-server and the savings made possible by allowing a “thinner client”.

While the rise of the web revived the idea of centralisation, with a truly thin client accessing services from the Internet, early moves in this direction gathered little momentum. This was partly because Internet access in the 1990s was still too slow, but also because people still cherished the autonomy of having all their assets in their own PC.

What really shifted this caution was the arrival of the smartphone: a new type of thin client that seemed to hold the whole world in its Internet grasp. People did not have to shift perspective and embrace the SaaS model; they just found they were already using it, and the word of this new aeon was “cloud”. The result has been a surge in cloud uptake that took even its strongest advocates by surprise.

The signs are everywhere, as massive new data centers are springing up in the coldest places: Dell’Oro Group predicts that within five years more than 75% of Ethernet ports will be sold into data centers, with similar predictions for compute and storage gear from Gartner and Forrester.

So the total worldwide market for cloud infrastructure and services is expected to grow to $206B in 2016, and the cloud will be the hub for most business investments well into the next decade.

The early adopters are those who accept this total virtualisation process and are quite happy to shift their workloads onto other platforms. But, as we move towards a mass cloud market, the industry is beginning to reach people still wedded to the merits of autonomy.

Those merits include a sense of independence backed by more tangible SLAs, of security (or at least more manageable risk), of control over data integrity in the face of increasing legislation, and so on.

This means that the industry will soon be facing a much steeper sales incline – and this is just when it can least afford to slip. If the cloud fails now, it could send the whole market tumbling back down the slope.

The bad news is that the cracks in the cloud structure have already started to show. The good news is that this has been recognised in time and the industry has launched the CloudEthernet Forum and is already rallying to tackle fundamental issues and ensure a reliable, scalable and secure cloud for the coming generation.

A more detailed analysis of the challenges and suggested steps to their resolution is available in the CEF white paper The Benefits and Challenges Ahead for Cloud Operators. There are, however, two main factors that first need to be understood to provide context for the technical challenges.

The first is scale. It is understood that the market is expanding rapidly – even more rapidly than expected – but this is a familiar challenge in the IT world, with lots of new users coming online. What is different is the explosion in virtual machines, unbounded by the physical limitations usually imposed by the need to install hardware.

In a virtualised environment every VM is equivalent to a new location added to the network and, even in a low-density data center, we could be speaking of many tens of thousands of such “locations”. Already we hear of new giant data centers hosting over a million VMs: string a few of these together and we will very soon be addressing tens of millions of new network locations.
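
As a rough back-of-envelope illustration – every figure below is an assumption chosen only to show the order of magnitude, not a CEF statistic – a handful of such facilities already pushes the count of addressable “locations” into the tens of millions:

```python
# Illustrative scale arithmetic only; all figures are assumptions.
giant_data_centres = 10          # hypothetical number of linked facilities
vms_per_data_centre = 1_000_000  # "over a million VMs" per facility
vnics_per_vm = 2                 # assume some VMs expose more than one virtual NIC

locations = giant_data_centres * vms_per_data_centre * vnics_per_vm
print(f"Network 'locations' to address: {locations:,}")  # 20,000,000
```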

Ethernet has, quite rightly, so far proved itself the optimal technology for these data centers, but it is worth remembering that it is based on a concept designed in the 1970s to string a few computers together at the Palo Alto research centre so they could share a printer. It has developed over the decades since by adding switches to extend the service from tens to many thousands of locations.

This is a natural evolution in response to growing demand. But the coming VM explosion is way beyond natural, and today’s switch designs simply don’t have the memory to hold forwarding tables for tens of millions of locations. And a move to create new-generation “super-switches” would go against the basic economics that make Ethernet so suitable.
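
To see the mismatch in concrete terms, the comparison below is purely illustrative – the 128K-entry MAC table is an assumed figure for a typical data center switch of the period, not a number from the article or the CEF:

```python
# Illustrative comparison; the per-switch table size is an assumption.
required_entries = 20_000_000            # locations, from the sketch above
mac_table_entries_per_switch = 128_000   # assumed forwarding-table capacity

shortfall = required_entries / mac_table_entries_per_switch
print(f"A flat view of every location would need roughly {shortfall:.0f}x "
      "the table capacity of a single switch.")
```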

Doesn’t SDN point the way to a solution – keeping the switches simple and centralising this massive routing burden onto the network controller? It’s an attractive idea and may well be a part of the solution, but it is not really what SDN is fundamentally about. The real attraction of SDN is to use central control as a basis to deliver smart new functionality and flexibility to the overall network by virtualising it and creating a more nimble communications infrastructure.

Forcing an additional massive “heavy lifting” administrative burden onto the controller in this way shifts the emphasis from software-defined towards software-relieved – reducing what could have been a breakthrough into a sticking plaster solution. NFV, similarly, may have a role to play, but its immediate effect would be to increase the number of functions running on VMs.
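
To make that concrete, the toy sketch below – plain Python, not OpenFlow or any real controller API – shows what “centralising the routing burden” amounts to: a single global table of every VM’s location, with every VM creation, migration and lookup funnelled through one piece of software. The burden moves; it does not go away.

```python
# Toy illustration only: not a real SDN controller or protocol.
class CentralController:
    def __init__(self):
        # One global table mapping each VM's MAC address to its location
        # (switch, port). With tens of millions of VMs, this single structure
        # carries the "heavy lifting" described above.
        self.location_of = {}  # mac -> (switch_id, port)

    def vm_appeared(self, mac, switch_id, port):
        """Called whenever a VM is created or migrates; churn matters as much
        as table size, since every change must pass through the controller."""
        self.location_of[mac] = (switch_id, port)

    def resolve(self, mac):
        """Switches that keep no state of their own must ask the controller
        where to forward frames for an unknown destination."""
        return self.location_of.get(mac)

controller = CentralController()
controller.vm_appeared("52:54:00:aa:bb:01", switch_id="leaf-17", port=3)
print(controller.resolve("52:54:00:aa:bb:01"))  # ('leaf-17', 3)
```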

If we are to find new ways to streamline the process, reducing the grunt work rather than moving it all to the control layer, then it will need a fundamental rethink: it will need a cloudEthernet.

The second big issue centres on collaboration: the problems are different when you begin linking remote data centers. Yes, it also increases scale, but the real challenge is bringing together mature disciplines with already established boundaries: the people who build data centers and design applications are not WAN experts, just as telcos have much to learn about the needs of applications and data center architecture.

An enterprise cloud solution typically brings together at least four major players in addition to network equipment manufacturers: data center experts, WAN service providers, cloud service providers, and usually an exchange provider such as Equinix, Telx or CoreSite, which may be hosting ten thousand logically discrete tenants in a single data center.

These are big worlds that need to find common ground or a connecting bridge. If that does not happen, any failure in cloud delivery will widen the rift, with each discipline blaming the others.

Collaboration is the key. Before the cloud’s Ethernet foundations start to show their cracks, we need the whole industry to work together to reinforce those foundations.

There are already giant players in this game: in 2012 AWS, Google and Microsoft accounted for 40% of all the Ethernet ports shipped worldwide. While that gives some idea of their massive investment, the fact that the total is still under 50% also tells us that none of these giants is yet big enough to dominate the scene and dictate its own cloud connectivity 'standards' for global usage. So standards need to be created before the market fragments.

Taking a familiar example: the outstanding success of Carrier Ethernet happened because vendors collaborated to create and certify global standards under the banner of the MEF – rather than battling each other to see whose technology could take the lead.

The users could buy certified services and equipment without having to waste time choosing technologies, the service providers and vendors made faster sales, and world business gained by the acceleration of high performance, lower cost WAN services brought about by Carrier Ethernet.

A similar level of collaboration by cloud stakeholders is needed now. The CEF is gathering expertise in application behaviour, data center design, security, network and machine virtualization, wide area networking and so much more.

Those who join are collaborating to build a firm foundation for tomorrow’s cloud – a cloudEthernet that meets the needs of scalability, determinism and availability at the pace at which VMs are created and torn down. Those who stand aside may find themselves delivering services on a creaking platform, pointing the finger of blame at everyone but themselves.

by James Walker, President of The CloudEthernet Forum
