
Content provided by Massive Studios. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Massive Studios or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://es.player.fm/legal.

Validation and Guardrails for LLMs
The Cloudcast

27:27
 

Shreya Rajpal (@ShreyaR, CEO @guardrails_ai) talks about the need to provide guardrails and validation for LLMs, along with common use cases and Guardrails AI's new Hub.
SHOW: 797
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST - "CLOUDCAST BASICS"
SHOW SPONSORS:

SHOW NOTES:

Topic 1 - Welcome to the show. Before we dive into today’s discussion, tell us a little bit about your background.
Topic 2 - Our topic today is the validation and accuracy of AI with guardrails. Let’s start with the why… Why do we need guardrails for LLMs today?
Topic 3 - Where and how do you control (maybe validate is a better word) outputs from LLMs today? What are your thoughts on the best way to validate outputs?
Topic 4 - Will this workflow work with both closed-source (ChatGPT) and open-source (Llama 2) models? Would this process apply to training/fine-tuning, or more to inference? Would this potentially replace the humans in the loop that we see today, or is this something completely different?
Topic 5 - What are some of the most common early use cases and practical examples? PII detection comes to mind, along with violations of ethics or laws, off-topic/out-of-scope requests, or simply something the model isn't designed to provide.
Topic 6 - What happens if it fails? Does this create a loop scenario to try again?
Topic 7 - Let's talk about Guardrails AI specifically. Today you offer an open-source marketplace of Validators in the Guardrails Hub, correct? As we mentioned earlier, almost everyone's implementation, and the guardrails they want to enforce, will be different. Is the best way to think about this as building blocks, with validators pieced together? Tell everyone a little bit about the offering.
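The themes in Topics 5-7 (PII detection, failure handling, and composing validators from the Guardrails Hub) map onto the open-source guardrails-ai Python package. The following is a rough illustrative sketch, not code from the episode: it assumes the guardrails-ai package and its DetectPII hub validator, and exact module names, parameters, and install commands may differ from what is discussed in the show.

```python
# Hypothetical sketch (not from the episode) of composing a Guardrails Hub
# validator around LLM output with the open-source guardrails-ai package.
# Assumed setup:
#   pip install guardrails-ai
#   guardrails hub install hub://guardrails/detect_pii
# Validator names, parameters, and on_fail policies are taken from the public
# guardrails-ai docs and may have changed since this episode aired.

from guardrails import Guard
from guardrails.hub import DetectPII

# Attach a PII-detection validator (Topic 5) to a Guard. The on_fail policy
# controls what happens when validation fails (Topic 6): "fix" redacts the
# offending spans, while "reask" loops back to the model with a corrective
# prompt, i.e. the retry scenario discussed in the episode.
guard = Guard().use(
    DetectPII,
    pii_entities=["EMAIL_ADDRESS", "PHONE_NUMBER"],
    on_fail="fix",
)

# Validate raw model output after inference; the text could come from a
# closed model (ChatGPT) or an open one (Llama 2), since the guard only
# sees the generated text.
outcome = guard.validate("You can reach our support lead at jane.doe@example.com")
print(outcome.validation_passed)
print(outcome.validated_output)  # email address redacted if the validator fired
```

The same pattern extends to stacking multiple Hub validators (toxicity, topic restriction, and so on) on a single Guard, which is roughly the "building blocks" framing raised in Topic 7.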
FEEDBACK?
