Building more efficient AI with vLLM ft. Nick Hill
Explore what it takes to run massive language models efficiently with Red Hat's Senior Principal Software Engineer in AI Engineering, Nick Hill. In this episode, we go behind the headlines to uncover the systems-level engineering making AI practical, focusing on the pivotal challenge of inference optimization and the transformative power of the vLLM open-source project.

Nick Hill shares his experiences working in AI, including:

• The evolution of AI optimization, from early handcrafted systems like IBM Watson to the complex demands of today's generative AI.
• The critical role of open-source projects like vLLM in creating a common, efficient inference stack for diverse hardware platforms.
• Key innovations like PagedAttention, which solves GPU memory fragmentation and manages the KV cache for scalable, high-throughput performance.
• How the open-source community rapidly translates academic research into real-world, production-ready solutions for AI.

Join us to explore the infrastructure and optimization strategies making large-scale AI a reality. This conversation is essential for any technologist, engineer, or leader who wants to understand the how and why of AI performance. You'll come away with a new appreciation for the clever, systems-level work required to build a truly scalable and open AI future.
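The core idea behind PagedAttention mentioned above — carving the KV cache into fixed-size physical blocks that sequences claim on demand, instead of reserving one large contiguous region per sequence — can be sketched as a tiny block-table allocator. This is an illustrative sketch only, not vLLM's actual implementation; the class and method names here are assumptions.

```python
class PagedKVCache:
    """Toy paged KV cache: each sequence owns a list of fixed-size physical
    blocks, so GPU memory is claimed incrementally instead of reserved up
    front, avoiding fragmentation from over-allocation. Illustrative only."""

    def __init__(self, num_blocks: int, block_size: int = 16):
        self.block_size = block_size
        self.free_blocks = list(range(num_blocks))  # pool of physical block ids
        self.block_tables = {}                      # seq_id -> [physical block ids]
        self.seq_lens = {}                          # seq_id -> tokens stored so far

    def append_token(self, seq_id: str):
        """Account for one new token's KV entries, grabbing a fresh block only
        when the current one fills up. Returns (block_id, offset) for the token."""
        table = self.block_tables.setdefault(seq_id, [])
        n = self.seq_lens.get(seq_id, 0)
        if n % self.block_size == 0:                # current block full (or none yet)
            if not self.free_blocks:
                raise MemoryError("KV cache exhausted")
            table.append(self.free_blocks.pop())
        self.seq_lens[seq_id] = n + 1
        return table[-1], n % self.block_size

    def free(self, seq_id: str):
        """Return a finished sequence's blocks to the pool for immediate reuse."""
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))
        self.seq_lens.pop(seq_id, None)
```

Because blocks are small and recycled the moment a sequence finishes, many concurrent requests can share the same fixed memory pool — the property that lets vLLM sustain high-throughput batched serving.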