AWS boosts power, lowers costs with new EC2 instances
Amazon Web Services (AWS) has announced the release of three new instance offerings for Amazon Elastic Compute Cloud (Amazon EC2): A1 instances, P3dn GPU instances, and C5n instances.
“Two of the requests we get most from customers are how can you help us keep lowering our costs for basic workloads, and how can you make it more efficient to run our demanding, scale-out, high-performance computing and machine learning workloads in the cloud,” says AWS compute service vice president Matt Garman.
“With today’s introduction of A1 instances, we’re providing customers with a cost-optimised way to run distributed applications like containerised microservices. A1 instances are powered by our new custom-designed AWS Graviton processors with the Arm instruction set that leverages our expertise in building hyperscale cloud platforms for over a decade.
“For scale-out distributed workloads, our new P3dn instances and C5n instances offer 100 Gbps networking performance to speed distributed machine learning training and high-performance computing. These new instance launches expand what's already the industry’s most powerful and cost-effective computing platform to meet the needs of new and emerging workloads.”
A1 instances

The new A1 instances, now available, feature custom-designed AWS Graviton processors that deliver significant cost savings for scale-out workloads.
Although general-purpose processors continue to provide great value for many workloads, new and emerging scale-out workloads, such as containerised microservices and web-tier applications that do not rely on the x86 instruction set, can gain additional cost and performance benefits from running on smaller, modern 64-bit Arm processors that work together to share an application's computational load.
A1 instances feature a custom-designed processor (Graviton) that leverages AWS’s expertise in systems design and cloud infrastructure, making Arm processors available in the cloud for the first time.
With A1 instances, customers will benefit from up to a 45% cost reduction (compared to other Amazon EC2 general purpose instances) for scale-out workloads.
A1 instances are supported by several Linux distributions, including Amazon Linux 2, Red Hat, and Ubuntu, as well as container services, including Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Container Service for Kubernetes (EKS).
P3dn GPU instances
P3dn instances, available next week, will be the most powerful GPU instances in the cloud for machine learning training.
With the 100 Gbps networking performance of the new larger size P3dn instances, customers can further lower their training times to less than an hour by distributing their machine learning workload across multiple GPU instances.
The P3dn instances deliver a 4x increase in network throughput over existing P3 instances, providing up to 100 Gbps of networking throughput. They also offer fast local NVMe instance storage, custom Intel CPUs with 96 vCPUs and support for AVX-512 instructions, and NVIDIA Tesla V100 GPUs, each with 32GB of memory.
This enables developers to linearly scale their model training performance across multiple instances, accelerate preprocessing, and remove data transfer bottlenecks.
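Under an idealized linear-scaling assumption, where the 100 Gbps fabric keeps communication from becoming the bottleneck, the effect of distributing training across instances can be sketched as follows. All figures here are illustrative examples, not AWS benchmarks:

```python
# Illustrative sketch: ideal scaling of distributed training time.
# All numbers are hypothetical examples, not AWS benchmark results.

def distributed_training_hours(single_instance_hours: float,
                               num_instances: int,
                               scaling_efficiency: float = 1.0) -> float:
    """Estimate wall-clock training time when work is split across
    instances. scaling_efficiency = 1.0 models perfect linear scaling;
    network bottlenecks show up as efficiency below 1.0."""
    if num_instances < 1 or not (0.0 < scaling_efficiency <= 1.0):
        raise ValueError("invalid cluster configuration")
    return single_instance_hours / (num_instances * scaling_efficiency)

# A job that takes 8 hours on one instance drops below an hour on a
# 16-instance cluster even at only 60% scaling efficiency:
print(distributed_training_hours(8.0, 16, 0.6))  # ~0.83 hours
```

The point of the higher network throughput is to keep `scaling_efficiency` close to 1.0 as the cluster grows; with slower interconnects, gradient exchange between instances erodes the gains from adding hardware.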
C5n instances

C5n instances, now available, significantly increase the maximum network throughput available in AWS's compute-intensive instance family.
C5 instances offer up to 25 Gbps of network bandwidth, which addresses the requirements of a wide range of workloads, but highly distributed and HPC applications can benefit from even higher network performance.
C5n instances offer 100 Gbps of network bandwidth, providing four times as much throughput as C5 instances.
This performance increase enables previously network-bound applications to scale up or scale out effectively on AWS.
Customers can also take advantage of higher network performance to accelerate data transfer to and from Amazon Simple Storage Service (Amazon S3), reducing the data ingestion wait time for applications and speeding up delivery of results.
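As a rough illustration of what the jump from 25 Gbps to 100 Gbps means for data ingestion, the idealized transfer time for a dataset can be computed directly from the line rate. This simple model ignores protocol overhead and any S3-side limits, so real transfers will be somewhat slower:

```python
# Idealized transfer-time arithmetic: data volume over line rate.
# Ignores protocol overhead and storage-side throughput limits.

def transfer_seconds(data_gigabytes: float, link_gbps: float) -> float:
    """Best-case transfer time in seconds: convert gigabytes to
    gigabits (x8), then divide by the link rate in Gbps."""
    if data_gigabytes < 0 or link_gbps <= 0:
        raise ValueError("invalid transfer parameters")
    return (data_gigabytes * 8) / link_gbps

# Moving a 1 TB dataset: about 5.3 minutes at 25 Gbps
# versus about 1.3 minutes at 100 Gbps.
print(transfer_seconds(1000, 25))   # 320.0 seconds
print(transfer_seconds(1000, 100))  # 80.0 seconds
```

The 4x bandwidth increase translates one-for-one into a 4x reduction in best-case transfer time, which is where the shorter data-ingestion wait the article describes comes from.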