
Content provided by Reimagining Cyber. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Reimagining Cyber or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://es.player.fm/legal.

AI and ChatGPT - Security, Privacy and Ethical Ramifications - Ep 62

27:13
Manage episode 359870152 series 3361845

This episode features Stephan Jou, “the expert in ChatGPT” and CTO of Security Analytics at OpenText Cybersecurity.
“The techniques that we are developing are becoming so sophisticated and scalable that it's really become the only viable method to detect increasingly sophisticated and subtle attacks when the data volumes and velocity are so huge. So think about nation state attacks where you have very advanced adversaries that are using uncommon tools that won't be on any sort of blacklist.”
“In the past five years or so, I've become increasingly interested in the ethical and responsible application of AI. Pure AI is kind of like pure math. It's neutral. It doesn't have an angle to it, but applied AI is a different story. So all of a sudden you have to think about the implications of your AI product, the data that you're using, and whether your AI product can be weaponized or misled.”

“You call me the expert in ChatGPT. I sort of both love it and hate it. I love it because people like me are starting to get so much attention, and I hate it because it's highlighted some areas of potential risk associated with AI that people are only now starting to realize.”
“I'm very much looking forward to using technologies that can understand code and code patterns and how code gets assembled together and built into a product in a human-like way to be able to sort of detect software vulnerabilities. That's a fascinating area of development and research that's going on right now in our labs.”
“[on AI poisoning] The good news is, this is very difficult to do in practice. A lot of the papers that we see on AI poisoning, they're much more theoretical than they are practical.”
Follow or subscribe to the show on your preferred podcast platform.
Share the show with others in the cybersecurity world.
Get in touch via reimaginingcyber@gmail.com


103 episodes

