Added two years ago
Content provided by Ajeya Cotra and Kelsey Piper. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Ajeya Cotra and Kelsey Piper or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://es.player.fm/legal.
OpenAI's CBRN tests seem unclear
OpenAI says o1-preview can’t meaningfully help novices make chemical and biological weapons. Their test results don’t clearly establish this.
https://planned-obsolescence.org/openais-cbrn-tests-seem-unclear
16 episodes
All episodes
OpenAI says o1-preview can’t meaningfully help novices make chemical and biological weapons. Their test results don’t clearly establish this. https://planned-obsolescence.org/openais-cbrn-tests-seem-unclear
We should be spending less time proving today’s AIs are safe and more time figuring out how to tell if tomorrow’s AIs are dangerous: planned-obsolescence.org/dangerous-capability-tests-should-be-harder
The startlingly fast progress in LLMs was driven both by scaling up LLMs and by doing schlep to make usable systems out of them. We think scale and schlep will both improve rapidly: planned-obsolescence.org/scale-schlep-and-systems
Most experts were surprised by progress in language models in 2022 and 2023. There may be more surprises ahead, so experts should register their forecasts now about 2024 and 2025: https://planned-obsolescence.org/language-models-surprised-us
Most new technologies don’t accelerate the pace of economic growth. But advanced AI might do this by massively increasing the research effort going into developing new technologies.
Both AI fears and AI hopes rest on the belief that it may be possible to build alien minds that can do everything we can do and much more. AI-driven technological progress could save countless lives and make everyone massively healthier and wealthier: https://planned-obsolescence.org/the-costs-of-caution
Once a lab trains AI that can fully replace its human employees, it will be able to multiply its workforce 100,000x. If these AIs do AI research, they could develop vastly superhuman systems in under a year: https://planned-obsolescence.org/continuous-doesnt-mean-slow
Researchers could potentially design the next generation of ML models more quickly by delegating some work to existing models, creating a feedback loop of ever-accelerating progress. https://planned-obsolescence.org/ais-accelerating-ai-research
The single most important thing we can do is to pause when the next model we train would be powerful enough to obsolete humans entirely. If it were up to me, I would slow down AI development starting now — and then later slow down even more: https://www.planned-obsolescence.org/is-it-time-for-a-pause/
We’re trying to think ahead to a possible future in which AI is making all the most important decisions: https://www.planned-obsolescence.org/what-were-doing-here/
Perfect alignment just means that AI systems won’t want to deliberately disregard their designers' intent; it's not enough to ensure AI is good for the world: https://www.planned-obsolescence.org/aligned-vs-good/
AI systems that have a precise understanding of how they’ll be evaluated and what behavior we want them to display will earn more reward than AI systems that don’t: https://www.planned-obsolescence.org/situational-awareness/
We're creating incentives for AI systems to make their behavior look as desirable as possible, while intentionally disregarding human intent when that conflicts with maximizing reward: https://www.planned-obsolescence.org/the-training-game/
If we can accurately recognize good performance on alignment, we could elicit lots of useful alignment work from our models, even if they're playing the training game: https://www.planned-obsolescence.org/training-ais-to-help-us-align-ais/
Many fellow alignment researchers may be operating under radically different assumptions from you: https://www.planned-obsolescence.org/disagreement-in-alignment/