
AnswerRocket backs Gemini & Claude LLMs for flexible AI solutions

Thu, 12th Sep 2024

AnswerRocket, a GenAI-powered analytics company based in Atlanta, has announced support for Google's Gemini large language models (LLMs) and Anthropic's Claude family of LLMs. The move is part of an ongoing commitment to LLM agnosticism, allowing customers to select the language model that best suits their business needs.

Alon Goren, CEO of AnswerRocket, explained the company's approach: "AnswerRocket's philosophy centres around composable AI solutions, enabling customers to have complete flexibility in their choice of cloud provider, LLMs, and models used within their AI assistants."

"This composability allows businesses to build AI solutions tailored to their specific requirements, ensuring they can leverage the best tools and technologies available in the market. We're giving customers the power to balance model speed, cost, and capabilities as new models quickly arrive on the market."

AnswerRocket's platform integrates a variety of LLMs that businesses can deploy to create conversational AI assistants for data analysis. The supported models include OpenAI's GPT-4o and GPT-4o mini, Google's Gemini Flash and Gemini Pro, and Anthropic's Claude 3.5 Sonnet and Claude 3 Opus. This integration enables businesses to select different LLMs for different tasks, optimising their data insights with the most appropriate model for each.
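To illustrate what per-task model selection can look like in practice, here is a minimal sketch in Python. It does not use AnswerRocket's own API; it routes tasks across the public OpenAI, Anthropic, and Google SDKs, and the task-to-model mapping is a hypothetical example.

```python
# Illustrative per-task model routing across the public OpenAI, Anthropic and
# Google SDKs. This is not AnswerRocket's API; the task-to-model mapping is a
# hypothetical example. Assumes the openai, anthropic and google-generativeai
# packages are installed and OPENAI_API_KEY, ANTHROPIC_API_KEY and
# GOOGLE_API_KEY are set in the environment.
import os

import anthropic
import google.generativeai as genai
from openai import OpenAI

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Map each analytics task to the provider and model judged best for it.
TASK_MODELS = {
    "chat": ("openai", "gpt-4o-mini"),                         # fast, low-cost turns
    "narrative": ("anthropic", "claude-3-5-sonnet-20240620"),  # longer-form write-ups
    "summary": ("google", "gemini-1.5-flash"),                 # cheap bulk summarisation
}


def complete(task: str, prompt: str, max_tokens: int = 512) -> str:
    """Send the prompt to whichever provider/model is configured for this task."""
    provider, model = TASK_MODELS[task]
    if provider == "openai":
        resp = OpenAI().chat.completions.create(
            model=model,
            max_tokens=max_tokens,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
    if provider == "anthropic":
        msg = anthropic.Anthropic().messages.create(
            model=model,
            max_tokens=max_tokens,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text
    if provider == "google":
        resp = genai.GenerativeModel(model).generate_content(prompt)
        return resp.text
    raise ValueError(f"Unknown provider: {provider}")


if __name__ == "__main__":
    print(complete("chat", "Summarise last quarter's sales trend in one sentence."))
```

The design point is simply that the calling code depends on a task name rather than a specific vendor SDK, so a model can be swapped by editing one mapping entry.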

Adopting an LLM-agnostic approach to analytical challenges gives companies the ability to optimise, customise, and build resilience into their data analysis processes. This flexibility is increasingly critical in a constantly evolving AI landscape, as it lets businesses match specific tasks, whether they involve structured or unstructured data, to the most suitable model.

AnswerRocket accelerates AI implementation by providing robust yet adaptable solutions, the company states. The platform supports model selection for various functions such as chat experiences, narrative composition, embeddings, and response evaluation. It also allows LLM settings, including token limits and cost figures, to be adjusted, and supports combining models within workflows to address complex data environments. LLM activity can also be monitored through observability tools such as LangSmith, ensuring greater oversight and accountability.
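A minimal sketch of what per-function settings and monitoring could look like follows, assuming the langsmith package's traceable decorator for run logging; the LLM_SETTINGS table and run_step helper are hypothetical illustrations rather than AnswerRocket's configuration format.

```python
# Hypothetical per-function LLM settings plus LangSmith run logging. The
# settings table and run_step helper are illustrative only, not AnswerRocket's
# configuration format. Assumes the langsmith package is installed; traces are
# sent only if a LangSmith API key and tracing flag are set in the environment,
# otherwise the decorated function simply runs untraced.
from langsmith import traceable

# Which model handles each step of a workflow, and the token budget allowed.
LLM_SETTINGS = {
    "chat":       {"model": "gpt-4o-mini",                "max_tokens": 512},
    "narrative":  {"model": "claude-3-5-sonnet-20240620", "max_tokens": 2048},
    "embeddings": {"model": "text-embedding-3-small",     "max_tokens": None},
    "evaluation": {"model": "gpt-4o",                     "max_tokens": 256},
}


@traceable  # each call is recorded as a run when LangSmith tracing is enabled
def run_step(function: str, prompt: str) -> str:
    settings = LLM_SETTINGS[function]
    # A real implementation would dispatch to the provider SDK chosen for this
    # function (see the routing sketch above); here we just echo the decision.
    return f"[{settings['model']} | max_tokens={settings['max_tokens']}] {prompt}"


if __name__ == "__main__":
    print(run_step("narrative", "Draft a summary of the Q2 revenue drivers."))
```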

Responsible AI usage is a key component of AnswerRocket's framework, which incorporates proven foundation models with built-in safeguards to mitigate bias and promote fairness. Customers can tailor these safeguards to align with their specific ethical standards, supporting the development of transparent and ethical AI assistants.

The responsible AI framework also includes rigorous automated testing and evaluation, ensuring that each Skill or AI Assistant is validated for consistency, robustness, and alignment with ground truth data. By embedding these practices into the Software Development Life Cycle (SDLC), AnswerRocket minimises the need for human oversight while upholding high standards of reliability and accountability.
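By way of illustration, a ground-truth regression check can be as simple as the sketch below; the test cases and the answer_question stand-in are hypothetical and do not reflect AnswerRocket's actual test suite.

```python
# Illustrative ground-truth regression check. The cases and the
# answer_question stand-in are hypothetical, not AnswerRocket's test suite.
GROUND_TRUTH = [
    {"question": "Which region grew fastest in Q2?", "expected": "APAC"},
    {"question": "What was total Q2 revenue?", "expected": "$4.2M"},
]

# Stand-in for the assistant under test; a real harness would call the
# deployed Skill or AI assistant here.
CANNED_ANSWERS = {
    "Which region grew fastest in Q2?": "APAC led growth at 14% quarter on quarter.",
    "What was total Q2 revenue?": "Total Q2 revenue was $4.2M.",
}


def answer_question(question: str) -> str:
    return CANNED_ANSWERS.get(question, "")


def test_against_ground_truth() -> None:
    failures = []
    for case in GROUND_TRUTH:
        got = answer_question(case["question"])
        if case["expected"].lower() not in got.lower():
            failures.append((case["question"], case["expected"], got))
    assert not failures, f"Mismatches vs ground truth: {failures}"


if __name__ == "__main__":
    test_against_ground_truth()
    print("All ground-truth checks passed.")
```

Wiring checks like this into the build pipeline is what lets regressions surface automatically before an assistant reaches users.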
