Content provided by MLSecOps.com. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by MLSecOps.com or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://es.player.fm/legal.

Navigating the Challenges of LLMs: Guardrails AI to the Rescue; With Guest: Shreya Rajpal

39:16
 
 

Manage episode 365414666 series 3461851
In “Navigating the Challenges of LLMs: Guardrails to the Rescue,” Protect AI co-founders Daryan Dehghanpisheh and Badar Ahmed interview the creator of Guardrails AI, Shreya Rajpal.

Guardrails AI is an open source package that allows users to add structure, type, and quality guarantees to the outputs of large language models (LLMs).
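The core idea the episode explores, wrapping a probabilistic LLM call in a deterministic validate-and-reask loop, can be sketched in plain Python. The schema, function names, and toy LLM below are invented for illustration; this is not the Guardrails AI API, just the shape of the technique it implements.

```python
import json

# Expected output structure: field name -> required Python type.
SCHEMA = {"name": str, "risk_score": int}

def validate(raw):
    """Return (parsed_output, None) if raw matches SCHEMA, else (None, error)."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        return None, f"not valid JSON: {e}"
    for field, ftype in SCHEMA.items():
        if field not in data:
            return None, f"missing field: {field}"
        if not isinstance(data[field], ftype):
            return None, f"field {field!r} should be {ftype.__name__}"
    return data, None

def guarded_call(llm, prompt, max_retries=2):
    """Call the LLM, validate the output, and re-ask with the error on failure."""
    for _ in range(max_retries + 1):
        raw = llm(prompt)
        data, err = validate(raw)
        if err is None:
            return data
        # Deterministic guardrail: feed the validation error back to the model.
        prompt = f"{prompt}\nPrevious answer was invalid ({err}); return JSON matching the schema."
    raise ValueError("LLM never produced valid output")

# Toy stand-in for a real model: fails validation once, then complies.
_responses = iter(['{"name": "demo"}', '{"name": "demo", "risk_score": 3}'])
result = guarded_call(lambda p: next(_responses), "Assess this model.")
```

Real guardrail systems layer richer validators (regex, range checks, semantic checks) and structured reask prompts on the same loop, but the contract is identical: the caller gets typed, schema-conforming output or an explicit failure.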

In this highly technical discussion, the group digs into Shreya’s inspiration for starting the Guardrails project, the challenges of building a deterministic “guardrail” system on top of probabilistic large language models, and the challenges in general (both technical and otherwise) that developers face when building applications for LLMs.
If you’re an engineer or developer in this space looking to integrate large language models into the applications you’re building, this episode is a must-listen and highlights important security considerations.

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models

Recon: Automated Red Teaming for GenAI

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard: Open Source Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform


47 episodes
