
Content provided by Upol Ehsan and Shea Brown. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Upol Ehsan and Shea Brown or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process described here: https://es.player.fm/legal.

🔥 The Taylor Swift Factor: Deep fakes & Responsible AI | irResponsible AI EP3S01

22:59
 
 

Manage episode 421983738 series 3578042

Got questions or comments or topics you want us to cover? Text us!

As they say, don't mess with Swifties. This episode of irResponsible AI is about the Taylor Swift Factor in Responsible AI:
✅ Taylor Swift's deepfake scandal and what it did for RAI
✅ Do famous people need to be harmed before we do anything about it?
✅ How to address the deepfake problem at the systemic and symptomatic levels
What can you do?
🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.
🎙️Who are your hosts and why should you even bother to listen?
Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI.
Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.
All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives.
Follow us for more Responsible AI and the occasional sh*tposting:
Upol: https://twitter.com/UpolEhsan
Shea: https://www.linkedin.com/in/shea-brown-26050465/
CHAPTERS:
0:00 - Introduction
01:20 - Taylor Swift Deepfakes: what happened
02:43 - Does disaster need to strike famous people for us to move the needle?
06:31 - What role can RAI play to address this deepfake problem?
07:19 - Disagreement! Deep fakes have both systemic and symptomatic causes
09:28 - Deep fakes, Elections, EU AI Act, and US State legislations
11:45 - The post-truth era powered by AI
15:40 - Watermarking AI generated content and the difficulty
19:26 - The enshittification of the internet
22:00 - Three actionable takeaways
#ResponsibleAI #ExplainableAI #podcasts #aiethics #taylorswift

Support the show

What can you do?
🎯 You have no idea how much it will annoy the wrong people if this series goes viral. So help the algorithm do the work for you!
Follow us for more Responsible AI:
Upol: https://twitter.com/UpolEhsan
Shea: https://www.linkedin.com/in/shea-brown-26050465/


Chapters

1. 🔥 The Taylor Swift Factor: Deep fakes & Responsible AI | irResponsible AI EP3S01 (00:00:00)

2. Taylor Swift Deepfakes: what happened (00:01:20)

3. Does disaster need to strike famous people for us to move the needle? (00:02:43)

4. What role can RAI play to address this deepfake problem? (00:06:31)

5. Disagreement! Deep fakes have both systemic and symptomatic causes (00:07:19)

6. Deep fakes, Elections, EU AI Act, and US State legislations (00:09:28)

7. The post-truth era powered by AI (00:11:45)

8. Watermarking AI generated content and the difficulty (00:15:40)

9. The enshittification of the internet (00:19:26)

10. Three actionable takeaways (00:22:00)

6 episodes

