
Exploring AI/ML Security Risks: At Black Hat USA 2023 with Protect AI

35:20
 

Watch the video for this episode at: https://mlsecops.com/podcast/exploring-ai/ml-security-risks-at-black-hat-usa-2023
This episode of The MLSecOps Podcast features expert security leaders who sat down at Black Hat USA 2023 last week with team members from Protect AI to talk about various facets of AI and machine learning security:

- What is the overall state of the AI/ML security realm at this time?
- What is currently the largest threat to AI and machine learning security?
- What is the most important thing we need to focus on at this time in the machine learning security space?
- Plus much more!
Guest Speaker Information:
Adam Nygate
Adam is the founder of huntr.dev (recently acquired by Protect AI), a platform for protecting open-source software that not only provides bug and fix bounties but also automates triage, creating a noise-free environment for maintainers. Adam’s work with the huntr.dev community of over 10,000 security contributors helped it become the 5th largest CNA in the world in 2022. Adam was included in OpenUK’s 2022 Honours List and has been featured in articles by AWS and Nominet and on podcasts such as Zero Hour and Absolute AppSec.
Daniel Miessler
"Daniel is the founder of Unsupervised Learning, a company focused on building products that help companies, organizations, and people identify, articulate, and execute on their purpose in the world. Daniel has over 20 years of experience in Cybersecurity, and has spent the last several years focused on applying AI to business and human problems."
Adam Shostack
Check out Adam's fantastic book, "Threats: What Every Engineer Should Learn From Star Wars"!
Adam is a leading expert on threat modeling, a consultant, expert witness, and game designer.
Christina Liaghati, PhD
Dr. Liaghati is the AI Strategy Execution & Operations Manager, AI & Autonomy Innovation Center at MITRE, and a wealth of knowledge about all things MITRE ATLAS and the ML system attack chain. She is a highly sought-after speaker and a repeat guest on The MLSecOps Podcast.

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models

Recon: Automated Red Teaming for GenAI

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard: Open Source Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform
