Google Cloud has today announced the launch of a brand new managed machine learning (ML) platform — dubbed Vertex AI.
The platform allows users to accelerate the deployment of AI models and requires almost 80% fewer lines of code to train a model, Google says. It also enables users to implement machine learning operations (MLOps) to build ML projects throughout the entire development lifecycle.
The search giant says Vertex AI unifies the Google Cloud services responsible for building ML, simplifying the process of building, training and deploying ML models.
“We had two guiding lights while building Vertex AI: get data scientists and engineers out of the orchestration weeds, and create an industry-wide shift that would make everyone get serious about moving AI out of pilot purgatory and into full-scale production,” says Google Cloud vice president and general manager of cloud AI and industry solutions, Andrew Moore.
“We are very proud of what we came up with in this platform, as it enables serious deployments for a new generation of AI that will empower data scientists and engineers to do fulfilling and creative work.”
Google says the lessons learned from its investment in AI have informed its ML capabilities — capabilities the company has subsequently built into Vertex AI's foundation.
According to Google, with Vertex AI data science and ML engineering teams can:
- Access the AI toolkit used internally to power Google, including computer vision, language, conversation, and structured data.
- Deploy more useful AI applications faster with new MLOps features: Vertex Vizier, which increases the rate of experimentation; the fully managed Vertex Feature Store; and Vertex Experiments, which accelerates the deployment of models into production through faster model selection.
- Manage models by removing the complexity of self-service model maintenance and repeatability with MLOps tools like Vertex Continuous Monitoring and Vertex Pipelines to streamline the end-to-end ML workflow.
Omdia chief analyst for AI platforms, analytics and data management, Bradley Shimmin, says the platform will go a long way toward streamlining the machine learning lifecycle.
“Data science practitioners hoping to put AI to work across the enterprise aren't looking to wrangle tooling,” says Shimmin.
“Rather, they want tooling that can tame the ML lifecycle. Unfortunately, that is no small order.
“It takes a supportive infrastructure capable of unifying the user experience, plying AI itself as a supportive guide, and putting data at the very heart of the process, all while encouraging the flexible adoption of diverse technologies.”