IT Brief NZ - What CIOs need to know about the changing face of technology



The IT sector is ever changing, with analysts predicting a massive transition in the coming year. The Internet of Things, big data, and private and public cloud will all play a big role in 2015.

Jay Kidd, senior vice president and chief technology officer at NetApp, says every part of the IT stack is in transition. “In my 35 years in IT, I have never seen so much simultaneous change in technology," he says.

“End user devices, networks, application design, virtual server software, physical server design, storage systems, and even storage media… Some of these transitions are well underway and will accelerate in 2015 while others are just starting to emerge.”

Kidd explains there are six major transitions that need to be considered this year.

1.    Two Mythical Beasts – Internet of Things and Big Data Analytics – Will Produce Corporeal Children
Kidd says the rise of integrated telemetry in industrial equipment, health monitoring devices, and mobile payment systems, along with a host of new sensors measuring the world, will provide the data to fuel the next wave of business-relevant analytics.

“Companies that found their existing datasets insufficient to yield real insight can now correlate them with real-world datasets to optimise business processes and change their customers’ experience,” he says.

“Acquisition management of data from connected things coupled with real-time and background analytics tools will change how companies touch the world.”
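As a toy illustration of that idea (all names and thresholds below are hypothetical), the sketch correlates a stream of machine telemetry with an existing maintenance dataset to flag equipment worth a closer look:

```python
from statistics import mean

# Hypothetical telemetry from connected machines: (machine_id, temperature_C)
readings = [
    ("press-1", 71.0), ("press-1", 74.5), ("press-1", 93.2),
    ("press-2", 68.4), ("press-2", 69.1), ("press-2", 70.0),
]

# Existing business dataset: days since last service, per machine (assumed)
last_service_days = {"press-1": 210, "press-2": 30}

# Group the telemetry by machine
by_machine = {}
for machine, temp in readings:
    by_machine.setdefault(machine, []).append(temp)

# Correlate the two datasets: flag machines that run hot, and note
# whether the business data says they are also overdue for service
flagged = []
for machine, temps in by_machine.items():
    if mean(temps) > 75 or max(temps) > 90:
        status = "overdue" if last_service_days[machine] > 180 else "recently serviced"
        flagged.append((machine, status))

print(flagged)  # press-1 runs hot and is overdue; press-2 is fine
```

The point of the sketch is the join itself: neither dataset is very useful alone, but correlating sensor readings with service history turns raw telemetry into an actionable maintenance signal.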

2.    The Future of All-Flash Arrays is Not All Flash
“Flash is transformative to the future of enterprise storage,” Kidd states. “But the idea of an all-flash datacenter is utter nonsense, and at least 80% of data will continue to reside on disks.

“Cost matters, and the least expensive SSDs will likely be 10 times more expensive than the least expensive SATA disks through the end of the decade,” he says. “Compression and deduplication apply to both disk and flash equally. Every storage architecture will incorporate flash to serve the ‘hot’ data. 

“However, those that choose to only include flash, and have no integration with other hybrid flash/disk arrays, will be the hot rod in the garage of IT. Fun to tinker with, but not the reliable storage workhorse IT needs.”
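Kidd's cost argument can be sanity-checked with back-of-the-envelope arithmetic using his own figures (an assumed ~10x flash price premium and ~80% of data cold enough to live on disk; the capacity is made up):

```python
# Relative $/TB, with the cheapest SATA disk normalised to 1.0 and
# flash at the ~10x premium quoted above (assumption from the article)
ssd_cost = 10.0
disk_cost = 1.0

total_tb = 100
hot_tb = 20                    # the ~20% of data that is 'hot'
cold_tb = total_tb - hot_tb    # the ~80% that stays on disk

all_flash = total_tb * ssd_cost                   # everything on flash
hybrid = hot_tb * ssd_cost + cold_tb * disk_cost  # flash tier + disk tier

print(all_flash, hybrid)  # the all-flash array costs ~3.6x the hybrid one
```

Since compression and deduplication apply to both media equally, per the quote, they shrink both sides of this comparison without changing the ratio.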

3.    A Multi-Vendor Hybrid Cloud is the Only Hybrid Cloud that Will Matter
Kidd says every customer is using cloud in some form. “Just as most customers were reluctant to bet on a single vendor for their on-premise IT, they will choose to work with multiple cloud providers,” he says. “Avoidance of lock-in, leverage in negotiations, or simply a desire for choice will drive them to seek a hybrid cloud that does not lock them in to any single provider.”

Kidd says SaaS vendors who offer no way to extract data will suffer, and PaaS layers that only run in a single cloud will see less usage.

“Software technologies that can be deployed on premise and in a range of clouds will find favour with customers thinking strategically about their model for IT,” he says.

4.    Software Defined Storage Will Build a Bridge Between Public and Private Clouds
Software Defined Storage (SDS), with the ability to be deployed on different hardware and supporting rich automation capabilities, will extend its reach into cloud deployments and build a data fabric that spans on-premise and public clouds, Kidd says.

“SDS will provide a means for applications to access data uniformly across clouds and will simplify the data management aspects of moving existing applications to the cloud.”

He says, “SDS for object storage will bridge on-premise and cloud object repositories. The storage efficiencies in some software-defined storage offerings, such as Cloud ONTAP, also reduce the cost of moving data to and from the public cloud, and storing active data in the public cloud for long periods of time.”
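To make the "uniform access across clouds" idea concrete, here is a minimal sketch of the kind of abstraction an SDS layer provides. The interface and both backends are invented for illustration and stand in for real on-premise and cloud object APIs:

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Hypothetical uniform interface an SDS layer might expose."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class OnPremStore(ObjectStore):
    """Stand-in for an on-premise object repository."""
    def __init__(self):
        self._objects = {}
    def put(self, key, data):
        self._objects[key] = data
    def get(self, key):
        return self._objects[key]

class CloudStore(ObjectStore):
    """Stand-in for a public-cloud provider's object API."""
    def __init__(self):
        self._objects = {}
    def put(self, key, data):
        self._objects[key] = data
    def get(self, key):
        return self._objects[key]

def migrate(src: ObjectStore, dst: ObjectStore, keys):
    # Application code moves data without caring which side is which
    for key in keys:
        dst.put(key, src.get(key))

on_prem, cloud = OnPremStore(), CloudStore()
on_prem.put("report.csv", b"q1,q2\n1,2\n")
migrate(on_prem, cloud, ["report.csv"])
```

Because both repositories satisfy the same interface, the `migrate` helper works in either direction, which is the bridging role the quote describes.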

5.    Docker Replaces Hypervisors as the Container of Choice for Scale-Out Applications 
“As new applications for SaaS or large-scale enterprise use cases are written using the scale-out microservices model, Docker application containers have proven to be more resource efficient than VMs with a complete OS,” Kidd explains.

“All major VM orchestration systems now support Docker and we will see the emergence of a robust ecosystem for data management and other surrounding services in 2015.”
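The resource-efficiency claim comes down to overhead per instance: every VM carries a complete guest OS, while containers share the host kernel. A rough sketch with assumed, illustrative-only numbers:

```python
# All figures below are assumptions for illustration, not benchmarks
host_ram_gb = 256
app_ram_gb = 0.5               # the microservice itself
vm_overhead_gb = 1.5           # full guest OS + hypervisor bookkeeping
container_overhead_gb = 0.05   # shared-kernel container bookkeeping

# How many instances of the same microservice fit on one host
vms = int(host_ram_gb / (app_ram_gb + vm_overhead_gb))
containers = int(host_ram_gb / (app_ram_gb + container_overhead_gb))

print(vms, containers)  # the container count is several times the VM count
```

The exact numbers are invented, but the shape of the result holds whenever the per-instance overhead dominates the application's own footprint, which is common for small scale-out microservices.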

6.    Hyper-Converged Infrastructure is the New Compute Server
Kidd says, “Hyper-converged Infrastructure products are becoming the new compute server with Direct-Attached Storage (DAS).

“Traditional data center compute consists of blades or boxes in racks that have dedicated CPUs, memory, I/O and network connections, and run dozens of VMs.”

He says, “HCI such as VMware’s EVO allows local DAS to be shared across a few servers, making the unit of compute more resilient, while broadly shared data is accessed over the LAN or SAN. 

“Starting in 2015, the emergence of solid state storage, broader adoption of remote direct memory access (RDMA) network protocols, and new interconnects will drive a compute model where the cores, memory, and storage IOPS will be integrated in a low-latency fabric that will make them behave as a single rack-scale system.”
