IT Brief New Zealand - Technology news for CIOs & IT decision-makers

Nebul enhances AI inference with Mirantis's k0rdent platform

Yesterday

Netherlands-based private cloud service provider Nebul has adopted k0rdent, the open-source platform developed by Mirantis, to enhance its production AI inference capabilities.

The recently released k0rdent helps Nebul offer an on-demand service for AI workloads, allowing customers to run production AI inference workloads while maintaining control over their critical data. Mirantis CEO Alex Freedland stated, "We believe open source is the enabler for infrastructure to support AI. Nebul is demonstrating the enormous potential of open technologies to solve one of the most complex challenges in IT today — delivering AI workloads reliably at scale."

Arnold Juffer, CEO and founder of Nebul, commented on the new capabilities, saying, "As demand for AI services grows, our challenge was transitioning our existing infrastructure. Using k0rdent enables us to effectively unify our diverse infrastructure across OpenStack, bare metal Kubernetes, while sunsetting the VMware technology stack and fully transforming to open source to streamline operations and accelerate our shift to Inference-as-a-Service for enterprise customers. Now, they can bring their trained AI model to their data and just run it with assurance of privacy and sovereignty in accordance with regulations. It's as simple as that."

The integration of k0rdent gives Nebul the ability to manage diverse infrastructure efficiently while completing its transition to open-source technology. Seva Vayner, Product Director at Gcore, commented on the broader requirements: "As Nebul is demonstrating, AI inference at scale requires infrastructure that dynamically adapts to end customer needs, ensuring AI applications meet the business requirements from a cost and performance standpoint."

With the deployment of k0rdent, Nebul has strengthened its capacity to support distributed AI inference, using high-performance NVIDIA-accelerated infrastructure to deliver optimised resources and low-latency service. The deployment also improves GPU utilisation, maximising operational efficiency.

In a related development, Mirantis announced that its lightweight, zero-dependency Kubernetes distribution, k0s, and its multi-cluster management tool, k0smotron, have joined the Cloud Native Computing Foundation (CNCF) as Sandbox projects. Jussi Nummelin, Senior Principal Engineer at Mirantis, remarked, "Becoming a CNCF Sandbox project enables us to gather invaluable feedback, foster collaboration, and refine k0s and k0smotron to better serve the cloud-native community. We're excited to collaborate with the CNCF community and scale k0s' impact on cloud-native infrastructure."

Chris Sanders, Director of Engineering at Replicated, shared his appreciation for k0s, explaining, "We use k0s as the basis for our Embedded Cluster product which lets users distribute a Kubernetes cluster and application together as a single appliance, making it easy to manage those in tandem. We particularly appreciate the zero dependencies nature of k0s which lets us easily target a variety of environments with a high level of confidence."

With their acceptance into the CNCF, k0s and k0smotron gain greater visibility and community support, which should accelerate their development and adoption in the cloud-native ecosystem.
