Content provided by MLSecOps.com. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by MLSecOps.com or its podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process described at https://es.player.fm/legal.

MITRE ATLAS: Defining the ML System Attack Chain and Need for MLSecOps; With Guest: Christina Liaghati, PhD

39:48
 


This week, The MLSecOps Podcast talks with Dr. Christina Liaghati, AI Strategy Execution & Operations Manager of the AI & Autonomy Innovation Center at MITRE.

Chris King, Head of Product at Protect AI, guest-hosts alongside regular co-host D Dehghanpisheh. Together they discuss a range of AI and machine learning security topics with Dr. Liaghati, including the contrast between the MITRE ATT&CK matrices, which focus on traditional cybersecurity, and the newer, AI-focused MITRE ATLAS matrix.

The group also dives into new classifications of ML attacks related to large language models, ATLAS case studies, security practices such as ML red teaming, and integrating security into MLOps.

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models

Recon: Automated Red Teaming for GenAI

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard Open Source Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform


58 episodes
