Inside distributed inference with llm-d ft. Carlos Costa
Scaling LLM inference for production isn't just about adding more machines; it demands new intelligence in the infrastructure itself. In this episode, we're joined by Carlos Costa, Distinguished Engineer at IBM Research, a leader in large-scale compute and a key figure in the llm-d project. We discuss how to move beyond single-server deployments and build the intelligent, AI-aware infrastructure needed to manage complex workloads efficiently.

Carlos Costa shares insights from his deep background in HPC and distributed systems, including:

• The evolution from traditional HPC and large-scale training to the unique challenges of distributed inference for massive models.
• The origin story of the llm-d project, a collaborative, open-source effort to create a much-needed "common AI stack" and control plane for the entire community.
• How llm-d extends Kubernetes with the specialization required for AI, enabling state-aware scheduling that standard Kubernetes wasn't designed for.
• Key architectural innovations like the disaggregation of prefill and decode stages, and support for wide parallelism to efficiently run complex Mixture of Experts (MoE) models.

Tune in to discover how this collaborative, open-source approach is building the standardized, AI-aware infrastructure necessary to make massive AI models practical, efficient, and accessible for everyone.