Content provided by MLSecOps.com. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by MLSecOps.com or its podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process described at https://es.player.fm/legal.

AI Vulnerabilities: ML Supply Chains to LLM and Agent Exploits

24:08
 
Send us a text

Full transcript with links to resources available at https://mlsecops.com/podcast/ai-vulnerabilities-ml-supply-chains-to-llm-and-agent-exploits

Join host Dan McInerney and AI security expert Sierra Haex as they explore the evolving challenges of AI security. They discuss vulnerabilities in ML supply chains, the risks posed by tools like Ray and by untested AI model files, and how traditional security measures intersect with emerging AI threats. The conversation also covers the rise of open-source models like DeepSeek and the security implications of deploying autonomous AI agents, offering critical insights for anyone looking to secure distributed AI systems.
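
The "untested AI model files" risk mentioned above is concrete: several common model formats (for example, pickle-based checkpoints) execute arbitrary code at load time, so loading an untrusted model file amounts to running untrusted code. Here is a minimal illustrative sketch in Python, not taken from the episode; the filename and echoed message are hypothetical:

    import os
    import pickle

    # A pickle-based "model file" can run code at load time via __reduce__;
    # no real model weights are needed for the attack.
    class MaliciousCheckpoint:
        def __reduce__(self):
            # This call executes the moment the file is unpickled.
            return (os.system, ("echo 'arbitrary code executed on model load'",))

    # An attacker ships this as an innocent-looking checkpoint.
    with open("model.pkl", "wb") as f:
        pickle.dump(MaliciousCheckpoint(), f)

    # The victim merely "loads the model"; the payload runs immediately.
    with open("model.pkl", "rb") as f:
        pickle.load(f)

This is the class of risk that model-scanning tools such as Protect AI Guardian (listed below) aim to address before artifacts are ever loaded.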

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models

Recon: Automated Red Teaming for GenAI

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard Open Source Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform

58 episodes
