Content provided by Google DeepMind and Hannah Fry. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Google DeepMind and Hannah Fry or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process described here: https://es.player.fm/legal.

The Ethics of AI Assistants with Iason Gabriel

43:58

Imagine a future where we interact regularly with a range of advanced artificial intelligence (AI) assistants — and where millions of assistants interact with each other on our behalf. These experiences and interactions may soon become part of our everyday reality.

In this episode, host Hannah Fry and Google DeepMind Senior Staff Research Scientist Iason Gabriel discuss the ethical implications of advanced AI assistants. Drawing from Iason's recent paper, they examine value alignment, anthropomorphism, safety concerns, and the potential societal impact of these technologies.

Timecodes:

  • 00:00 Intro
  • 01:13 Definition of AI assistants
  • 04:05 A utopian view
  • 06:25 Iason’s background
  • 07:45 The Ethics of Advanced AI Assistants paper
  • 13:06 Anthropomorphism
  • 14:07 Turing perspective
  • 15:25 Anthropomorphism continued
  • 20:02 The value alignment question
  • 24:54 Deception
  • 27:07 Deployed at scale
  • 28:32 Agentic inequality
  • 31:02 Unfair outcomes
  • 34:10 Coordinated systems
  • 37:10 A new paradigm
  • 38:23 Tetradic value alignment
  • 41:10 The future
  • 42:41 Reflections from Hannah

Thanks to everyone who made this possible, including but not limited to:

  • Presenter: Professor Hannah Fry
  • Series Producer: Dan Hardoon
  • Editor: Rami Tzabar, TellTale Studios
  • Commissioner & Producer: Emma Yousif
  • Music composition: Eleni Shaw
  • Camera Director and Video Editor: Daniel Lazard
  • Audio Engineer: Perry Rogantin
  • Video Studio Production: Nicholas Duke
  • Video Editor: Bilal Merhi
  • Video Production Design: James Barton
  • Visual Identity and Design: Eleanor Tomlinson
  • Production support: Mo Dawoud
  • Commissioned by Google DeepMind

Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
