Storage squeeze: Why 2016 is the year of consolidation in the storage industry
Thu, 28th Apr 2016

Today's storage market is a tussle between old and new. Enterprise storage technology has changed considerably since the late nineties, and the advent of both flash and VM-aware storage is shaking up the landscape further.

So how will the industry cope?

Storage through the years

When I started out, enterprise applications predominantly ran on centralised compute platforms. Skip ahead several years and the market had changed significantly: mid-range and micro (PC)-based server systems were proliferating, and most used RAID (redundant array of inexpensive disks). At this time, each server was essentially purpose-built, with storage sized to meet the needs of its application.

In the late nineties and early 2000s, Storage Area Network (SAN) systems were introduced. These systems eased the management difficulties of decentralised servers and changed how enterprise storage was used and consumed. Shortly after this, Network Attached Storage (NAS) came onto the scene, providing storage to the masses and offering another option for shared server storage.

Having these systems meant the overall environment could be more efficient while still providing good throughput to client systems. If you required high-speed data storage, the usual outcome was an entire rack filled with 73GB drives – the more disk spindles you had, the greater the performance. Hence, for these systems, performance was expensive and capacity was cheap.
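To make the spindle arithmetic concrete, here is a minimal sketch using assumed, era-typical figures (roughly 180 random IOPS per 15K RPM drive); the numbers are illustrative, not from the article.

```python
# Back-of-the-envelope spindle arithmetic (illustrative, assumed figures):
# performance scaled with the number of spindles, not with capacity.

DRIVE_CAPACITY_GB = 73   # assumed per-drive capacity of the era
DRIVE_IOPS = 180         # assumed random IOPS per 15K RPM spindle

def rack_of_spindles(drive_count: int) -> tuple[int, int]:
    """Return (total capacity in GB, aggregate random IOPS) for a striped array."""
    return drive_count * DRIVE_CAPACITY_GB, drive_count * DRIVE_IOPS

for drives in (14, 56, 112):
    capacity_gb, iops = rack_of_spindles(drives)
    print(f"{drives:>4} drives -> {capacity_gb:>6} GB, ~{iops:>6} IOPS")

# Hitting ~20,000 IOPS took on the order of 112 spindles, even though the
# resulting ~8 TB of raw capacity was far more than the application needed:
# performance was expensive, capacity was cheap.
```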

A new data center technology emerged around 2007 – virtualisation. This meant abstraction not only at the storage level, but also at the server level. As a result, the server design that had been carefully planned for a SQL server in 1998 no longer applied: we needed to provide storage that was fast and large enough for multiple systems at once, and to diagnose problems when any one of those servers was misbehaving.
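A hypothetical sketch of why diagnosis got harder: once many VMs share one datastore, the array only sees an aggregate I/O stream, and you have to work back to the offending guest from whatever per-VM counters the hypervisor exposes. The figures below are made up purely for illustration.

```python
# Hypothetical per-VM I/O samples gathered from the hypervisor (made-up data):
# with dozens of guests sharing one datastore, the array alone cannot tell you
# which workload is dragging everyone else's latency down.
vm_stats = {
    "sql-prod-01":  {"iops": 1200, "avg_latency_ms": 4.1},
    "exchange-01":  {"iops": 300,  "avg_latency_ms": 5.0},
    "batch-etl-07": {"iops": 9500, "avg_latency_ms": 22.8},  # the noisy neighbour
    "web-frontend": {"iops": 450,  "avg_latency_ms": 3.2},
}

# Flag any guest consuming a disproportionate share of the shared pool's I/O.
total_iops = sum(s["iops"] for s in vm_stats.values())
for name, s in sorted(vm_stats.items(), key=lambda kv: kv[1]["iops"], reverse=True):
    share = s["iops"] / total_iops
    flag = "  <-- investigate" if share > 0.5 else ""
    print(f"{name:<14} {s['iops']:>6} IOPS ({share:5.1%})  {s['avg_latency_ms']:>5.1f} ms{flag}")
```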

But the real turning point for storage was in 2010 when SSD and flash systems came on the market, and the traditional view of high-speed storage systems was flipped on its head – performance was cheap, capacity expensive.
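The economics of that flip can be sketched with rough, assumed prices (illustrative only): per gigabyte the flash drive costs more, but per IOPS it costs far less.

```python
# Rough cost comparison of the two eras (assumed, illustrative prices only):
# a 15K RPM spindle versus an early-2010s enterprise SSD.

hdd = {"capacity_gb": 73,  "iops": 180,   "price_usd": 400}
ssd = {"capacity_gb": 200, "iops": 30000, "price_usd": 2000}

for label, drive in (("15K HDD", hdd), ("SSD", ssd)):
    per_gb = drive["price_usd"] / drive["capacity_gb"]
    per_kiops = drive["price_usd"] / (drive["iops"] / 1000)
    print(f"{label:<8} ${per_gb:6.2f}/GB   ${per_kiops:8.2f} per 1,000 IOPS")

# The ratio inverts: flash is the expensive way to buy capacity but by far the
# cheap way to buy performance, exactly the reversal described above.
```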

Where's the market heading?

What we now have is a slew of new players in the enterprise storage marketplace and a revolution in SAN and NAS system architectures to support SSD. With performance as table stakes, it's management effort that's the current differentiator between vendors.

According to Tintri's annual State of Storage report, which surveyed hundreds of data center professionals globally, manageability is now acknowledged as the biggest storage pain – leapfrogging performance as the greatest thorn in a data center's side.

When asked what steps they were taking to address these challenges, 68 percent of data center professionals said they were evaluating new technologies and 48 percent were evaluating new storage vendors.

We are also witnessing growing momentum behind VM-aware storage (VAS) arrays. VAS is designed specifically to overcome the shortcomings conventional arrays show in highly abstracted environments – relevant now that 90-plus percent of enterprise server workloads are virtualised. Indeed, Tintri's State of Storage report found that 52 percent of organisations are looking into boundary-pushing, virtualisation-specific VM-aware storage.

These macro trends are putting the squeeze on legacy storage providers that lack the agility to respond. Dell and EMC are merging into an even larger (and presumably less flexible) entity. NetApp has announced layoffs in the wake of declining product revenues. And other upstarts have struggled to find their footing amidst all the chop.

With all the change happening in the technology and in the market, now is the time to stay focused on the players who have the most compelling all-flash and VM-aware storage offerings. That's the way to avoid the tussle and keep storage simple.