Content provided by Alex Kantrowitz. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Alex Kantrowitz or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://es.player.fm/legal.
What the Ex-OpenAI Safety Employees Are Worried About — With William Saunders and Lawrence Lessig

48:13
 
Manage episode 426913595 series 2781245
William Saunders is an ex-OpenAI Superalignment team member. Lawrence Lessig is a professor of Law and Leadership at Harvard Law School. The two come on to discuss what's troubling ex-OpenAI safety team members. We discuss whether Saunders' former team saw something secret and damning inside OpenAI, or whether it was a general cultural issue. And then, we talk about the 'Right to Warn,' a policy that would give AI insiders the right to share concerning developments with third parties without fear of reprisal. Tune in for a revealing look into the eye of a storm brewing in the AI community.

----

You can subscribe to Big Technology Premium for 25% off at https://bit.ly/bigtechnology

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.

For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/

Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

327 episodes

