IT Brief New Zealand - Technology news for CIOs & IT decision-makers
SwiftStack: How to move a petabyte of data to the cloud
Wed, 16th Aug 2017

SwiftStack is a cloud storage provider for organisations that require universal access to petabytes of unstructured data in a single namespace.

The cloud company explains that with the increasing adoption of public cloud infrastructure amongst enterprises, companies that need to shift hundreds of terabytes or petabytes of data find this migration more complex and disruptive than anticipated.

SwiftStack breaks down the transition to the cloud into four steps, explaining how enterprises can shift even petabyte-scale data to cloud environments.

“Drift and Shift” to cloud-native storage

By definition, data that is not yet in the cloud is stored in silos, each with specific data access protocols.

Such data is extremely complicated to "lift and shift" to the public cloud in one move. A "drift and shift" strategy is more practical: move the data first into a cloud-native storage format running on on-premises infrastructure.

Data remains where it is today, so this step is both low cost and low risk, and can be done over time. The business benefits of cloud storage can be achieved on premises, and the data will be ready to move to public cloud when the time is right.

Automate operations

Data management software with built-in automation, operating on policies set and controlled by IT, makes it possible for even a single administrator to manage a multi-petabyte hybrid cloud infrastructure in a global organisation.

Define the service objectives for protection, synchronisation, location, access, and capacity usage and let the software control the placement of data and its delivery to applications. Users consume storage with the right policies for their applications.

As the business demands evolve, so can the policies controlled by IT.
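The policy-driven placement described above can be made concrete with a short sketch. This is a hypothetical Python illustration only — the class, field names, and location names are assumptions for the example, not SwiftStack's actual API:

```python
from dataclasses import dataclass, field

# Hypothetical policy record: field names are illustrative only.
@dataclass
class StoragePolicy:
    name: str
    replicas: int                    # protection: number of copies to keep
    locations: list = field(default_factory=list)  # candidate placement sites
    sync_to_cloud: bool = False      # also synchronise a copy to public cloud?

def place(policy: StoragePolicy) -> list:
    """Return the target locations for an object under this policy."""
    targets = policy.locations[: policy.replicas]
    if policy.sync_to_cloud:
        targets.append("public-cloud-archive")
    return targets

# IT defines the policy once; users simply consume storage governed by it.
archive = StoragePolicy("archive", replicas=2,
                        locations=["dc-akl", "dc-wlg", "dc-chc"],
                        sync_to_cloud=True)
print(place(archive))  # ['dc-akl', 'dc-wlg', 'public-cloud-archive']
```

When business demands change, IT edits the policy object rather than re-architecting storage — the software re-evaluates placement from the new definition.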

Stay flexible

All key public cloud providers (Amazon, Google, Microsoft, and Rackspace) use object storage platforms for long-term retention and governance of the end user's data.

While object storage is their common default, there are enough differences under the hood, and enough proprietary technology, that moving a petabyte, or even part of one, from one provider to another can be prohibitively disruptive.

With data management across all locations and clouds, cross-cloud platform compatibility offers the flexibility that architects are looking to build into their infrastructure. This ensures IT stays in full control, allowing universal management regardless of location, and prevents provider lock-in.

Metadata mastery

Due to technical limitations, legacy storage systems such as SAN and NAS were simply not built with metadata in mind. Cloud-native storage retains metadata with the object data, rather than in a separate database that only its own application can read.

Cloud storage – whether public cloud, cloud-native on-premises, or a combination – is an ideal medium in which to take advantage of metadata. Harnessing, organising, and analysing metadata associated with petabytes of business data would have been unthinkable just a few years ago.
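The contrast is easy to see in miniature: in object storage, user metadata travels with the object itself, so any client that can read the object can also query its metadata, with no separate application database. A minimal Python sketch, with an illustrative structure rather than any product's actual format:

```python
# Cloud-native pattern: metadata is stored alongside the object data.
def put_object(store: dict, key: str, data: bytes, metadata: dict) -> None:
    store[key] = {"data": data, "metadata": metadata}

def search_by_metadata(store: dict, field: str, value) -> list:
    """Find objects by a metadata field -- no external index required."""
    return [k for k, obj in store.items()
            if obj["metadata"].get(field) == value]

bucket = {}
put_object(bucket, "scan-001.dcm", b"...", {"patient": "A123", "modality": "CT"})
put_object(bucket, "scan-002.dcm", b"...", {"patient": "B456", "modality": "MRI"})
print(search_by_metadata(bucket, "modality", "CT"))  # ['scan-001.dcm']
```

With SAN or NAS, the equivalent lookup would require a separate application database mapping file paths to attributes, which only that application can interpret.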

“Pricing based on consumption, elastic scalability, improved collaboration, and other key advantages of the public cloud are attainable goals, but those with large data volumes must be mindful of their unique environment,” says Joe Arnold, SwiftStack president and chief product officer.

“Fortunately, these organizations will also find that the right cloud data management tactics and tools will unleash more value from that data and help them respond as business needs and workloads evolve.”