New availability and recovery capabilities announced for AWS database services
Thu, 30th Nov 2017

Today at AWS re:Invent, Amazon Web Services (AWS) announced new database capabilities for Amazon Aurora and Amazon DynamoDB, and introduced Amazon Neptune, a new fully managed graph database service.

Amazon Aurora now includes the ability to scale out database reads and writes across multiple data centers for even higher performance and availability.

Amazon Aurora Serverless is a new deployment option that makes it easy and cost-effective to run applications with unpredictable or cyclical workloads by auto-scaling capacity with per-second billing.

With Global Tables, Amazon DynamoDB is now the first fully managed database service to provide true multi-master, multi-region reads and writes, offering high performance and low latency for globally distributed applications and users.

Amazon Neptune is AWS's new fast, reliable, and fully managed graph database service that makes it easy for developers to build and run applications that work with highly connected datasets.

The days of the one-size-fits-all database are over.

For many years, the relational database was the only option available to application developers.

And, while relational databases are great for applications that log transactions and store up to terabytes of structured data, today's developers need a variety of databases to serve the needs of modern applications.

These applications need to store petabytes of unstructured data, access it with sub-millisecond latency, process millions of requests per second, and scale to support millions of users all around the world.

It's not only common for modern companies to use multiple database types across their various applications, but also to use multiple database types within a single application.

Since introducing Amazon Relational Database Service (Amazon RDS) in 2009, AWS has expanded its database offerings to provide customers the right database for the right job.

This includes the ability to run six relational database engines with Amazon RDS (including Amazon Aurora, a fully MySQL- and PostgreSQL-compatible database engine with durability and availability at least as strong as commercial-grade databases, at one-tenth the cost); a highly scalable and fully managed NoSQL database service with DynamoDB; and a fully managed in-memory data store and cache in Amazon ElastiCache.

Now, with the introduction of Amazon Neptune, developers can extend their applications to work with highly connected data such as social feeds, recommendations, drug discovery, or fraud detection.

Raju Gulabani, vice president of databases, analytics, and machine learning at AWS, says, “Nobody provides a better, more varied selection of databases than AWS, and it's part of why hundreds of thousands of customers have embraced AWS database services, with hundreds more migrating every day.”

Amazon Aurora Multi-Master scales reads and writes across multiple data centers for applications with stringent performance and availability needs

Tens of thousands of customers are using Amazon Aurora because it delivers the performance and availability of the highest-grade commercial databases at a cost more commonly associated with open source, making it the fastest-growing service in AWS history. Amazon Aurora's scale-out architecture lets customers seamlessly add up to 15 low-latency read replicas across three Availability Zones (AZs), achieving millions of reads per second.
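
The read scale-out described above is driven by adding replica instances to an existing cluster. As a rough illustration, here is a minimal boto3 sketch of adding one Aurora read replica; the cluster and instance identifiers are hypothetical examples rather than anything from the announcement.

# Minimal boto3 sketch: adding a read replica to an existing Aurora cluster.
# All identifiers are hypothetical examples.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Aurora replicas are added as instances attached to the cluster's shared
# storage volume; reads are served through the cluster's reader endpoint.
rds.create_db_instance(
    DBInstanceIdentifier="my-aurora-replica-1",
    DBClusterIdentifier="my-aurora-cluster",
    Engine="aurora",
    DBInstanceClass="db.r4.large",
)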

With its new Multi-Master capability, Amazon Aurora now supports multiple write master nodes across multiple Availability Zones (AZs).

Amazon Aurora Multi-Master is designed to allow applications to transparently tolerate the failure of any master node, or even a service-level disruption in a single AZ, with zero application downtime and sub-second failover.

This means customers can scale out performance and minimize downtime for applications with the most demanding throughput and availability requirements.

Amazon Aurora Multi-Master will add multi-region support for globally distributed database deployments in 2018.

Amazon Aurora Serverless provides database capacity that starts, scales, and shuts down with application workload

Many AWS customers have applications with unpredictable, intermittent, or cyclical usage patterns that may not need the power and performance of Amazon Aurora all of the time.

For example, development and test environments typically run for only part of the day, and blogs see usage spikes when new posts are published.

With Amazon Aurora Serverless, customers no longer have to provision or manage database capacity.

The database automatically starts, scales, and shuts down based on application workload.

Customers simply create an endpoint through the AWS Management Console, specify the minimum and maximum capacity needs of their application, and Amazon Aurora handles the rest.

Customers pay by the second for database capacity when the database is in use.
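
The console flow described above has a straightforward API equivalent. The following is a minimal boto3 sketch of creating an Aurora Serverless cluster with minimum and maximum capacity bounds; the identifiers, credentials, and capacity values are hypothetical examples.

# Minimal boto3 sketch: creating an Aurora Serverless cluster with capacity
# bounds. Identifiers, credentials, and capacity units are hypothetical.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_cluster(
    DBClusterIdentifier="my-serverless-cluster",
    Engine="aurora",
    EngineMode="serverless",          # selects the serverless deployment option
    MasterUsername="admin",
    MasterUserPassword="choose-a-strong-password",
    ScalingConfiguration={
        "MinCapacity": 2,             # smallest capacity the cluster scales down to
        "MaxCapacity": 16,            # largest capacity the cluster scales up to
        "AutoPause": True,            # pause compute entirely when the database is idle
    },
)

Once the cluster exists, applications connect to its endpoint as usual; Aurora scales capacity between the two bounds and bills per second only while the database is running.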

Zendesk builds software for better customer relationships, empowering organizations to improve customer engagement and better understand their customers.

Amazon DynamoDB adds multi-master, multi-region and backup/restore capabilities

Amazon DynamoDB is a fully managed, seamlessly scalable NoSQL database service.

More than a hundred thousand AWS customers use Amazon DynamoDB to deliver consistent, single-digit millisecond latency for some of the world's largest mobile, web, gaming, ad tech, and Internet of Things (IoT) applications.

As customers build geographically distributed applications, they find they need the same low latency and scalability for their users around the world.

With Global Tables, Amazon DynamoDB now supports multi-master capability across multiple regions.

This allows applications to perform low-latency reads and writes to local Amazon DynamoDB tables in the same region where the application is being used.

This means a consumer using a mobile app in North America experiences the same response times when they travel to Europe or Asia without requiring developers to add complex application logic.

Amazon DynamoDB Global Tables also provide redundancy across multiple regions, so databases remain available to the application even in the unlikely event of a service level disruption in a single AZ or single region.

Developers can set up Amazon DynamoDB Global Tables with just a few clicks in the AWS Management Console, simply selecting the regions where they want their tables to be replicated. Amazon DynamoDB handles the rest.
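
For teams working outside the console, the same setup can be expressed through the API. Here is a minimal boto3 sketch; the table name and regions are hypothetical, and it assumes a table of the same name already exists in each listed region with DynamoDB Streams enabled.

# Minimal boto3 sketch: grouping per-region DynamoDB tables into a global table.
# Table name and regions are hypothetical examples.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_global_table(
    GlobalTableName="user-sessions",
    ReplicationGroup=[
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-west-1"},
        {"RegionName": "ap-southeast-2"},
    ],
)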

Customers also need a quick, easy, and cost-effective way to back up their Amazon DynamoDB tables – whether just a few gigabytes or hundreds of terabytes – for long-term archival and compliance, and for short-term retention and data protection.

With On-demand backup, Amazon DynamoDB customers can now instantly create full backups of their data in just one click, with no performance impact on their production applications.

And Point in Time Restore (PITR) allows customers to restore their data, up to the minute, to any point within the past 35 days, providing protection from data loss due to application errors.

On-demand backup is generally available today, with point-in-time restore coming in early 2018.
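
Both capabilities are exposed through the DynamoDB API as well as the console. Below is a minimal boto3 sketch of taking an on-demand backup today and, once point-in-time restore becomes available, enabling it and restoring to an earlier timestamp; the table and backup names are hypothetical examples.

# Minimal boto3 sketch: on-demand backup and point-in-time restore for a
# DynamoDB table. Table and backup names are hypothetical examples.
from datetime import datetime, timedelta

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# One-off full backup with no impact on the live table.
dynamodb.create_backup(
    TableName="user-sessions",
    BackupName="user-sessions-2017-11-30",
)

# Enable continuous backups so the table can be restored to any point
# within the retention window.
dynamodb.update_continuous_backups(
    TableName="user-sessions",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Restore into a new table at a chosen timestamp.
dynamodb.restore_table_to_point_in_time(
    SourceTableName="user-sessions",
    TargetTableName="user-sessions-restored",
    RestoreDateTime=datetime.utcnow() - timedelta(hours=1),
)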

“Customers around the world use Amazon retail websites every day to shop online. To provide the best possible discovery, purchasing, and delivery experience to every customer no matter where they live, Amazon increasingly needs databases capable of millisecond read/write latency with data that's available globally,” says Dave Treadwell, vice president of eCommerce Foundation at Amazon.com.

Customers can build powerful applications over highly connected data with Amazon Neptune

Many applications being built today need to understand and navigate relationships between highly connected data to enable use cases like social applications, recommendation engines, and fraud detection.

For example, a developer building a news feed into a social app will want the feed to prioritize showing users the latest updates from their family, from friends whose updates they “like” a lot, and from friends who live close to them.

Amazon Neptune efficiently stores and navigates highly connected data, allowing developers to create sophisticated, interactive graph applications that can query billions of relationships with millisecond latency.

Amazon Neptune's query processing engine is optimized for both of the leading graph models, Property Graph and W3C's Resource Description Framework (RDF), and their associated query languages, Apache TinkerPop Gremlin and RDF SPARQL, providing customers the flexibility to choose the right approach based on their specific graph use case.
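
To make the social-feed example above concrete, here is a minimal sketch of a Gremlin traversal using the open-source gremlinpython client against a Neptune cluster endpoint; the endpoint address, vertex and edge labels, and property names are all hypothetical.

# Minimal Gremlin sketch: latest posts from accounts a user follows.
# Endpoint, labels, and property names are hypothetical examples.
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.traversal import Order
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

conn = DriverRemoteConnection(
    "wss://my-neptune-cluster.example.us-east-1.neptune.amazonaws.com:8182/gremlin",
    "g",
)
g = traversal().withRemote(conn)

feed = (
    g.V("user-123")                         # start at the user's vertex
     .out("follows")                        # accounts the user follows
     .in_("posted_by")                      # posts written by those accounts
     .order().by("created_at", Order.desc)  # newest first
     .limit(20)
     .valueMap("title", "created_at")
     .toList()
)

conn.close()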

Amazon Neptune storage scales automatically, with no downtime or performance degradation.

Amazon Neptune is highly available and durable, automatically replicating data across multiple AZs and continuously backing up data to Amazon Simple Storage Service (Amazon S3).

Amazon Neptune is designed to offer greater than 99.99% availability and automatically detect and recover from most database failures in less than 30 seconds.

Amazon Neptune also provides advanced security capabilities, including network security through Amazon Virtual Private Cloud (VPC), encryption at rest using AWS Key Management Service (KMS), and encryption in transit using Transport Layer Security (TLS).