Cloudera, a trusted enterprise AI company, has announced the expansion of its collaboration with NVIDIA to enhance generative AI capabilities. The partnership will enable faster, more secure, and simplified generative AI workflows and accelerate the implementation of AI applications from pilot to production.
By incorporating NVIDIA NIM microservices into the Cloudera Data Platform, Cloudera aims to help customers derive actionable insights from generative AI and drive real-world business applications. Developers can now customise and deploy enterprise-grade Large Language Models (LLMs), unlocking the value of rich data through high-performance AI workflows, AI platform software, and accelerated computing wherever data resides.
This collaboration will leverage NVIDIA AI Enterprise, specifically NVIDIA NIM microservices, with Cloudera aiming to unlock the potential of a staggering 25 exabytes of enterprise data secured within the Cloudera Data Platform. Combining that enterprise data with a full-stack platform optimised for LLMs plays a decisive role in moving generative AI applications from the pilot to the production stage.
NVIDIA NIM and NeMo Retriever microservices connect AI models to business data, including text, images, and visualisations, to generate accurate, contextually relevant responses. Developers deploying applications through NVIDIA AI Enterprise can customise and deploy top-tier LLMs. Cloudera Machine Learning will enable customers to unlock the value of their enterprise data managed by Cloudera, bringing superior AI workflows, AI software, and accelerated computing to the data, regardless of location.
Cloudera intends to build multiple integrations with NVIDIA microservices. Cloudera Machine Learning will integrate model and application serving powered by NVIDIA microservices to improve model inference performance across all workloads. With this new AI model-serving capability, customers can attain fault tolerance, low-latency serving, and auto-scaling for models deployed anywhere, in both public and private clouds.
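To illustrate what this model-serving pattern looks like from a developer's perspective, here is a minimal sketch of querying an LLM exposed through a NIM-style, OpenAI-compatible endpoint. The base URL, API key, and model identifier are placeholders and would depend on how the model is actually deployed within Cloudera Machine Learning.

```python
# Minimal sketch: calling an LLM served behind an NVIDIA NIM-style,
# OpenAI-compatible endpoint. Base URL, API key, and model name are
# illustrative placeholders, not deployment-specific values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # hypothetical serving endpoint
    api_key="placeholder",                  # real deployments may require a key
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",        # example model identifier; varies by deployment
    messages=[{"role": "user", "content": "Summarise last quarter's churn drivers."}],
    temperature=0.2,
    max_tokens=256,
)

print(response.choices[0].message.content)
```

Because the serving layer speaks an OpenAI-compatible API, the same client code can target models running in public or private clouds simply by changing the base URL.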
The collaboration also means that Cloudera will offer integrated NVIDIA NeMo Retriever microservices to simplify the connection of custom LLMs to enterprise data. This capability will further enable users to create retrieval-augmented generation (RAG)-based applications ready for production use.
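As a rough sketch of the RAG pattern this enables, the example below retrieves enterprise documents and grounds the LLM's answer in them. The `retrieve()` function is a stand-in for a retrieval service such as NeMo Retriever; its interface and the placeholder corpus are purely illustrative, as is the model identifier.

```python
# Minimal RAG sketch: fetch relevant enterprise documents, then ask the LLM
# to answer using only that retrieved context. retrieve() is a hypothetical
# stand-in for a retrieval microservice over a governed enterprise corpus.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="placeholder")

def retrieve(query: str, top_k: int = 4) -> list[str]:
    """Placeholder retrieval step; a real deployment would query an
    embedding/retrieval service rather than this in-memory list."""
    corpus = [
        "Q3 churn rose 4% in the SMB segment after the pricing change.",
        "Enterprise renewals held steady at 92% year over year.",
    ]
    return corpus[:top_k]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    response = client.chat.completions.create(
        model="meta/llama3-8b-instruct",   # example model identifier
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    return response.choices[0].message.content

print(answer("What drove churn last quarter?"))
```

Grounding responses in retrieved enterprise data in this way is what makes RAG applications suitable for production use, since answers can be traced back to governed source documents.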
Priyank Patel, Vice President of AI/ML Products at Cloudera, said, "Cloudera is integrating NVIDIA NIM and CUDA-X microservices to power Cloudera Machine Learning, helping customers turn AI hype into business reality."
"In addition to delivering powerful generative AI capabilities and performance to customers, the results of this integration will empower enterprises to make more accurate and timely decisions while also mitigating inaccuracies, hallucinations, and errors in predictions, all critical factors for navigating today's data landscape."
Justin Boitano, Vice President of Enterprise Products at NVIDIA, said that enterprises are eager to put their large volumes of data to work for generative AI, building custom copilots and productivity tools. He stated, "The integration of NVIDIA NIM microservices into the Cloudera Data Platform offers developers a way to more easily and flexibly deploy LLMs to drive business transformation."