Vultr unveils serverless platform to accelerate global AI deployment
Wed, 20th Mar 2024

Vultr, a privately held cloud computing platform, has unveiled Vultr Cloud Inference, a serverless platform designed to streamline AI model deployment and automatically scale AI applications globally. Offering enhanced cost efficiency, reduced latency, and seamless scalability, the new platform is set to change how AI applications are deployed and scaled internationally, across 32 locations spanning six continents.

Today's swiftly changing digital ecosystem requires businesses in all sectors to manage and deploy AI models efficiently. Until now, the scarcity of cloud infrastructure platforms optimised for inference, offering both scalability and global reach, has left developers to optimise AI models for different regions themselves, manage distributed server infrastructure, and ensure high availability and low latency. Recognising this, Vultr developed Cloud Inference, a solution designed to accelerate the deployment of AI applications globally.

Vultr Cloud Inference is intended to speed the market arrival of AI-driven features, such as predictive and real-time decision-making, while providing a consistent user experience across diverse regions. By allowing users to bring their own model, trained on any platform, cloud, or on-premises environment, Vultr ensures seamless integration and deployment on its global NVIDIA GPU-powered infrastructure. Deploying AI applications in the appropriate regions also supports local data sovereignty, data residency, and privacy regulations, helping businesses comply with legal requirements while achieving their business objectives.

J.J. Kardwell, CEO of Vultr's parent company, Constant, stated, "As an increasing number of AI models move from training into production, the volume of inference workloads is exploding, but the majority of AI infrastructure is not optimised to meet the world's inference needs. The launch of Vultr Cloud Inference enables AI innovations to have maximum impact by simplifying AI deployment and delivering low-latency inference around the world through a platform designed for scalability, efficiency, and global reach."

Vultr Cloud Inference's ability to self-optimise and auto-scale globally in real time helps AI applications deliver consistent, cost-effective, low-latency experiences to users worldwide. The serverless architecture simplifies infrastructure management, offering flexibility in AI model integration and migration, reduced AI infrastructure complexity, automated scaling of inference-optimised infrastructure, and access to private, dedicated compute resources. Businesses can therefore focus on innovative AI initiatives rather than on infrastructure management.
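To illustrate what a serverless inference workflow can look like from a developer's point of view, the following minimal Python sketch sends a prompt to a hosted model over HTTPS. The endpoint URL, header names, model name, and payload shape are illustrative assumptions for a generic serverless inference service, not documented details of Vultr Cloud Inference.

    import os
    import requests

    # Hypothetical example: the URL, headers, and payload below are
    # illustrative assumptions, not documented Vultr API details.
    API_KEY = os.environ["INFERENCE_API_KEY"]  # assumed credential variable
    ENDPOINT = "https://inference.example.com/v1/chat/completions"  # placeholder URL

    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "my-custom-model",  # a model you brought and deployed yourself
            "messages": [{"role": "user", "content": "Summarise today's sales report."}],
        },
        timeout=30,
    )
    response.raise_for_status()
    print(response.json())

The point of the serverless model is that nothing beyond this request is the developer's concern: provisioning, regional routing, and scaling of the underlying GPU infrastructure happen behind the endpoint.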

Matt McGrigg, Director of Global Business Development, Cloud Partners at NVIDIA, said, "Demand is rapidly increasing for cutting-edge AI technologies that can power AI workloads worldwide. The introduction of Vultr Cloud Inference will empower businesses to seamlessly integrate and deploy AI models trained on NVIDIA GPU infrastructure, helping them scale their AI applications globally."

The launch of Vultr Cloud Inference follows that of Vultr CDN, which was created to scale media and content delivery worldwide. Together, the two platforms stand to provide the technological foundation for organisations across industries to innovate, improve cost efficiency, and expand their global reach. Early access registration for Vultr Cloud Inference is currently open.