Powering AI with the World's Largest Computer Chip with Joel Hestness - #684
Today we're joined by Joel Hestness, principal research scientist and lead of the core machine learning team at Cerebras. We discuss Cerebras' custom silicon for machine learning, the Wafer Scale Engine 3, and how the latest version of the company's single-chip platform for ML has evolved to support large language models. Joel shares how the WSE-3 differs from other AI hardware solutions, such as GPUs, TPUs, and AWS' Inferentia, and talks through the homogeneous design of the WSE chip and its memory architecture. We discuss software support for the platform, including support for open-source ML frameworks like PyTorch and for different types of transformer-based models. Finally, Joel shares some of the research his team is pursuing to take advantage of the hardware's unique characteristics, including weight-sparse training, optimizers that leverage higher-order statistics, and more.
The complete show notes for this episode can be found at twimlai.com/go/684.
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
All episodes

Exploring the Biology of LLMs with Circuit Tracing with Emmanuel Ameisen - #727 (1:34:06)
Waymo's Foundation Model for Autonomous Driving with Drago Anguelov - #725 (1:09:07)
Imagine while Reasoning in Space: Multimodal Visualization-of-Thought with Chengzu Li - #722 (42:11)
Inside s1: An o1-Style Reasoning Model That Cost Under $50 to Train with Niklas Muennighoff - #721 (49:29)
Accelerating AI Training and Inference with AWS Trainium2 with Ron Diamant - #720 (1:07:05)
AI Trends 2025: AI Agents and Multi-Agent Systems with Victor Dibia - #718 (1:44:59)
Speculative Decoding and Efficient LLM Inference with Chris Lott - #717 (1:16:30)
Why Agents Are Stupid & What We Can Do About It with Dan Jeffries - #713 (1:08:49)