Accelerate Cloud-native AI and ML on Kubernetes Containers
Supercharged Kubernetes with NVIDIA Mellanox ConnectX

As Artificial Intelligence (AI) continues to transform global economies and industries, today’s enterprises are increasing their AI investments to drive digital transformation, product innovation, and new insights. ML/AI workloads are increasingly deployed in containers to maximize agility, flexibility, and ease of deployment. Optimized networking and Remote Direct Memory Access (RDMA) can supercharge containerized AI applications with unprecedented scalability and performance.
NVIDIA® Mellanox® ConnectX® network adapters unlock high-speed Kubernetes networking, accelerating and optimizing containerized AI applications to deliver faster training and faster results.
In this technical paper, you'll learn:
- How to streamline and accelerate AI applications with networking and RDMA
- The advantages of deploying AI microservices in containers
- How RoCE (RDMA over Converged Ethernet) benefits cloud-native Kubernetes performance
- How to use NVIDIA ConnectX adapters to accelerate AI on Kubernetes
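To make the last two points concrete, a rough sketch of how a pod might consume an RDMA-capable ConnectX device through a Kubernetes device plugin is shown below. The resource name (`rdma/hca_shared_devices_a`) and container image are illustrative assumptions; actual resource names depend on the RDMA device plugin deployed and configured in your cluster.

```yaml
# Hypothetical pod spec (names are assumptions, not a verified configuration):
# the pod requests one RDMA device exposed by an RDMA device plugin, so the
# containerized AI workload can use RoCE/RDMA over a ConnectX adapter.
apiVersion: v1
kind: Pod
metadata:
  name: rdma-training-pod
spec:
  containers:
  - name: trainer
    image: example.com/ai-training:latest   # placeholder training image
    securityContext:
      capabilities:
        add: ["IPC_LOCK"]   # allows pinning memory, typically required for RDMA
    resources:
      limits:
        rdma/hca_shared_devices_a: 1   # example RDMA resource name from a device plugin
```

With a spec along these lines, the scheduler places the pod only on nodes advertising the RDMA resource, and the training container gains direct access to the ConnectX HCA for kernel-bypass data transfers.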