Artwork

Contenido proporcionado por Reply. Todo el contenido del podcast, incluidos episodios, gráficos y descripciones de podcast, lo carga y proporciona directamente Reply o su socio de plataforma de podcast. Si cree que alguien está utilizando su trabajo protegido por derechos de autor sin su permiso, puede seguir el proceso descrito aquí https://es.player.fm/legal.
Player FM : aplicación de podcast
¡Desconecta con la aplicación Player FM !

AI Insights - Tomorrow’s Tech Today (Episode 5 | S1) - Securing Artificial Intelligence: Insights from AI Security Expert Stefan Niedermaier

27:33
 
Compartir
 

Manage episode 450773028 series 3616471
Contenido proporcionado por Reply. Todo el contenido del podcast, incluidos episodios, gráficos y descripciones de podcast, lo carga y proporciona directamente Reply o su socio de plataforma de podcast. Si cree que alguien está utilizando su trabajo protegido por derechos de autor sin su permiso, puede seguir el proceso descrito aquí https://es.player.fm/legal.

In this episode of "AI Insights – Tomorrow's Tech Today," Thomas Siebenhüner sits down with Stefan Niedermaier, a Senior Consultant at Like Reply and a leading expert in software development and IT security. The discussion pivots around the intricate relationship between AI and security within the realm of large language models (LLMs). They delve into the main security risks tied to LLM applications, including model inversion attacks, data poisoning, and the subtle nuances of prompt injections. Stefan explains the different layers of security necessary for robust AI applications and shares strategic insights on enhancing the privacy and security of training data. Throughout the conversation, they explore strategies to mitigate these risks and the broader implications for industries relying on generative AI technologies. This episode not only highlights the challenges but also the ongoing advancements in securing AI against emerging threats, making it a must-listen for professionals navigating the complex landscape of AI security.

  continue reading

7 episodios

Artwork
iconCompartir
 
Manage episode 450773028 series 3616471
Contenido proporcionado por Reply. Todo el contenido del podcast, incluidos episodios, gráficos y descripciones de podcast, lo carga y proporciona directamente Reply o su socio de plataforma de podcast. Si cree que alguien está utilizando su trabajo protegido por derechos de autor sin su permiso, puede seguir el proceso descrito aquí https://es.player.fm/legal.

In this episode of "AI Insights – Tomorrow's Tech Today," Thomas Siebenhüner sits down with Stefan Niedermaier, a Senior Consultant at Like Reply and a leading expert in software development and IT security. The discussion pivots around the intricate relationship between AI and security within the realm of large language models (LLMs). They delve into the main security risks tied to LLM applications, including model inversion attacks, data poisoning, and the subtle nuances of prompt injections. Stefan explains the different layers of security necessary for robust AI applications and shares strategic insights on enhancing the privacy and security of training data. Throughout the conversation, they explore strategies to mitigate these risks and the broader implications for industries relying on generative AI technologies. This episode not only highlights the challenges but also the ongoing advancements in securing AI against emerging threats, making it a must-listen for professionals navigating the complex landscape of AI security.

  continue reading

7 episodios

Todos los episodios

×
 
Loading …

Bienvenido a Player FM!

Player FM está escaneando la web en busca de podcasts de alta calidad para que los disfrutes en este momento. Es la mejor aplicación de podcast y funciona en Android, iPhone y la web. Regístrate para sincronizar suscripciones a través de dispositivos.

 

Guia de referencia rapida