The evolution and promise of RAG architecture with Tengyu Ma from Voyage AI

36:20

Content provided by Conviction and Conviction | Pod People. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Conviction and Conviction | Pod People or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://es.player.fm/legal.

After spending years at Stanford researching AI optimization, embedding models, and transformers, Tengyu Ma took a break from academia to start Voyage AI, which helps enterprise customers achieve the most accurate retrieval possible through the most useful foundational data. Tengyu joins Sarah on this week’s episode of No Priors to discuss why RAG systems are winning as the dominant architecture in the enterprise and the evolution of foundational data that has allowed RAG to flourish. While fine-tuning is still part of the conversation, Tengyu argues that RAG will continue to evolve as the cheapest, fastest, and most accurate system for data retrieval.

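For readers new to the architecture under discussion, the sketch below illustrates the basic retrieval step of a RAG pipeline: documents and the query are mapped to embedding vectors, the nearest documents are selected by cosine similarity, and the results are placed into the prompt the LLM sees. This is a minimal illustration only; the `embed_texts` callable is a hypothetical stand-in for whatever embedding model you plug in (for example, one of Voyage AI’s), not their actual client API.

```python
# Minimal, illustrative retrieval step for a RAG pipeline.
# `embed_texts` is a hypothetical stand-in for an embedding model call,
# not Voyage AI's actual client API.
from typing import Callable, List

import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def retrieve(
    query: str,
    documents: List[str],
    embed_texts: Callable[[List[str]], np.ndarray],
    top_k: int = 3,
) -> List[str]:
    """Return the top_k documents whose embeddings are closest to the query."""
    doc_vecs = embed_texts(documents)    # shape: (num_docs, dim)
    query_vec = embed_texts([query])[0]  # shape: (dim,)
    scores = [cosine_similarity(query_vec, v) for v in doc_vecs]
    ranked = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)
    return [documents[i] for i in ranked[:top_k]]


def build_prompt(query: str, context_docs: List[str]) -> str:
    """Place the retrieved documents into the prompt so the LLM answers from them."""
    context = "\n\n".join(context_docs)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
```
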
They also discuss methods for growing context windows and managing latency budgets, how Tengyu’s research has informed his work at Voyage, and the role academia should play as the AI industry grows.

Show Links:

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @tengyuma

Show Notes:

(0:00) Introduction

(1:59) Key points of Tengyu’s research

(4:28) Academia compared to industry

(6:46) Voyage AI overview

(9:44) Enterprise RAG use cases

(15:23) LLM long-term memory and token limitations

(18:03) Agent chaining and data management

(22:01) Improving enterprise RAG

(25:44) Latency budgets

(27:48) Advice for building RAG systems

(31:06) Learnings as an AI founder

(32:55) The role of academia in AI
