
Elastic & Hugging Face collaborate to simplify GenAI development

Elastic has announced a new partnership with Hugging Face, integrating Hugging Face's models into the Elasticsearch Open Inference API.
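
For readers who want a concrete picture, the sketch below shows roughly how a Hugging Face-backed embedding endpoint could be registered through the Open Inference API using Python's requests library. This is a minimal illustration, not code from the announcement: the endpoint id hf-embeddings, the cluster URL, the credentials and the exact request body are assumptions and may vary between Elasticsearch versions.

    import requests

    ES_URL = "https://localhost:9200"        # placeholder Elasticsearch URL
    HEADERS = {
        "Authorization": "ApiKey <elasticsearch-api-key>",  # placeholder credentials
        "Content-Type": "application/json",
    }

    # Register an inference endpoint that delegates embedding generation to a
    # Hugging Face Inference Endpoint (the URL and token below are placeholders).
    resp = requests.put(
        f"{ES_URL}/_inference/text_embedding/hf-embeddings",
        headers=HEADERS,
        json={
            "service": "hugging_face",
            "service_settings": {
                "api_key": "<hugging-face-access-token>",
                "url": "https://<your-endpoint>.endpoints.huggingface.cloud",
            },
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())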

The integration introduces native chunking through the semantic_text field, aimed at simplifying generative AI (GenAI) application development.
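
Building on an endpoint like the one registered above, a semantic_text mapping might look something like the following sketch. The index and field names are illustrative assumptions; the point is that chunking and embedding storage are handled by Elasticsearch rather than by application code.

    import requests

    ES_URL = "https://localhost:9200"        # placeholder Elasticsearch URL
    HEADERS = {
        "Authorization": "ApiKey <elasticsearch-api-key>",  # placeholder credentials
        "Content-Type": "application/json",
    }

    # Create an index whose "content" field is a semantic_text field tied to the
    # inference endpoint; Elasticsearch chunks the text and stores the embeddings
    # itself, so no custom chunking logic appears here.
    resp = requests.put(
        f"{ES_URL}/articles",
        headers=HEADERS,
        json={
            "mappings": {
                "properties": {
                    "content": {
                        "type": "semantic_text",
                        "inference_id": "hf-embeddings",  # assumed endpoint id
                    }
                }
            }
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())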

The collaboration is intended to let developers build semantic search experiences without writing custom chunking logic, streamlining development so applications can be deployed more quickly and efficiently.

"Combining Hugging Face's embeddings with Elastic's retrieval relevance tools helps users gain better insights and improve search functionality," said Jeff Boudier, head of product at Hugging Face. "Hugging Face makes it easy for developers to build their own AI. With this integration, developers get a complete solution to leverage the best open models for semantic search, hosted on Hugging Face multi-cloud GPU infrastructure, to build semantic search experiences in Elasticsearch without worrying about storing or chunking embeddings."

This enhancement is aimed at easing development by handling and storing text chunks and their embeddings natively within Elasticsearch. The integration capitalises on Hugging Face Inference Endpoints, which host the embedding models, seeking to provide a seamless and efficient toolset for developers working with generative AI technologies.

Matt Riley, global vice president and general manager of search at Elastic, elaborated on the importance of this integration for developers. "Developers are at the heart of our business, and extending more of our GenAI and search primitives to Hugging Face developers deepens our collaboration," he said.

"The integration of our new semantic_text field simplifies the process of chunking and storing embeddings, so developers can focus on what matters most, building great applications."

The incorporation of native chunking in Elasticsearch is set to enhance the functionality and user experience of applications using Hugging Face embeddings.

By removing the necessity for developers to write custom chunking logic, the integration is intended to reduce development time and complexity.
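
To make that concrete, the following sketch indexes a document into such a field and runs a semantic query against it. The index name, field name and query text are illustrative, and the semantic query syntax is an assumption based on how semantic_text fields are typically searched, not something stated in the announcement.

    import requests

    ES_URL = "https://localhost:9200"        # placeholder Elasticsearch URL
    HEADERS = {
        "Authorization": "ApiKey <elasticsearch-api-key>",  # placeholder credentials
        "Content-Type": "application/json",
    }

    # Index a document: chunking and embedding happen inside Elasticsearch via
    # the semantic_text field's inference endpoint, not in application code.
    requests.post(
        f"{ES_URL}/articles/_doc",
        headers=HEADERS,
        json={"content": "Elastic and Hugging Face integrate hosted embedding "
                         "models with the Elasticsearch Open Inference API."},
        timeout=30,
    ).raise_for_status()

    # Run a semantic search against the same field.
    resp = requests.post(
        f"{ES_URL}/articles/_search",
        headers=HEADERS,
        json={"query": {"semantic": {"field": "content", "query": "GenAI search integration"}}},
        timeout=30,
    )
    resp.raise_for_status()
    for hit in resp.json()["hits"]["hits"]:
        print(hit["_score"], hit["_source"].get("content"))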

This move follows the previous addition of Hugging Face embedding models to Elastic's Open Inference API. The partnership exemplifies a growing trend in the technology sector, where companies collaborate to merge their strengths and provide comprehensive solutions to common challenges faced by developers.
