When will the world create an artificial intelligence that matches human-level capabilities, better known as an artificial general intelligence (AGI)? What will that world look like, and how can we ensure it's positive & beneficial for humanity as a whole? Tech entrepreneur & software engineer Soroush Pour (@soroushjp) sits down with AI experts to discuss AGI timelines, pathways, implications, opportunities & risks as we enter this pivotal new era for our planet and species. Hosted by Soroush P ...
Ep 14 - Interp, latent robustness, RLHF limitations w/ Stephen Casper (PhD AI researcher, MIT)
2:42:17
We speak with Stephen Casper, or "Cas" as his friends call him. Cas is a PhD student at MIT in the Computer Science (EECS) department, in the Algorithmic Alignment Group advised by Prof Dylan Hadfield-Menell. Formerly, he worked with the Harvard Kreiman Lab and the Center for Human-Compatible AI (CHAI) at Berkeley. His work focuses on better unders…
Ep 13 - AI researchers expect AGI sooner w/ Katja Grace (Co-founder & Lead Researcher, AI Impacts)
1:20:28
We speak with Katja Grace. Katja is the co-founder and lead researcher at AI Impacts, a research group trying to answer key questions about the future of AI — when certain capabilities will arise, what AI will look like, and how it will all go for humanity. We talk to Katja about: * How AI Impacts' latest rigorous survey of leading AI researchers shows …
Ep 12 - Education & advocacy for AI safety w/ Rob Miles (YouTube host)
1:21:26
We speak with Rob Miles. Rob is the host of the "Robert Miles AI Safety" channel on YouTube, the single most popular AI alignment video series out there — he has 145,000 subscribers and his top video has ~600,000 views. He goes much deeper than most educational resources on alignment, covering important technical topics like the orthogo…
Ep 11 - Technical alignment overview w/ Thomas Larsen (Director of Strategy, Center for AI Policy)
1:37:19
We speak with Thomas Larsen, Director of Strategy at the Center for AI Policy in Washington, DC, to do a "speed run" overview of all the major technical research directions in AI alignment — a great way to quickly learn broadly about the field of technical AI alignment. In 2022, Thomas spent ~75 hours putting together an overview of what everyone i…
Ep 10 - Accelerated training to become an AI safety researcher w/ Ryan Kidd (Co-Director, MATS)
1:16:58
We speak with Ryan Kidd, Co-Director at ML Alignment & Theory Scholars (MATS) program, previously "SERI MATS". MATS (https://www.matsprogram.org/) provides research mentorship, technical seminars, and connections to help new AI researchers get established and start producing impactful research towards AI safety & alignment. Prior to MATS, Ryan comp…
Ep 9 - Scaling AI safety research w/ Adam Gleave (CEO, FAR AI)
1:19:12
We speak with Adam Gleave, CEO of FAR AI (https://far.ai). FAR AI's mission is to ensure AI systems are trustworthy & beneficial. They incubate & accelerate research that's too resource-intensive for academia but not ready for commercialisation, working on adversarial robustness, interpretability, preference learning, & more. We t…
Ep 8 - Getting started in AI safety & alignment w/ Jamie Bernardi (AI Safety Lead, BlueDot Impact)
1:07:23
We speak with Jamie Bernardi, co-founder & AI Safety Lead at the not-for-profit BlueDot Impact, which hosts the biggest and most up-to-date courses on AI safety & alignment at AI Safety Fundamentals (https://aisafetyfundamentals.com/). Jamie completed his Bachelors (Physical Natural Sciences) and Masters (Physics) at the University of Cambridge and worked as an ML E…
Ep 7 - Responding to a world with AGI - Richard Dazeley (Prof AI & ML, Deakin University)
1:10:05
In this episode, we speak with Prof Richard Dazeley about the implications of a world with AGI and how we can best respond. We talk about what he thinks AGI will actually look like, as well as the technical and governance responses we should put in place today and in the future to ensure a safe and positive future with AGI. Prof Richard Dazeley is the Dep…
Ep 6 - Will we see AGI this decade? Our AGI predictions & debate w/ Hunter Jay (CEO, Ripe Robotics)
1:20:58
In this episode, we welcome back Hunter Jay, CEO of Ripe Robotics and our co-host on Ep 1. We synthesise everything we've heard on AGI timelines from experts in Eps 1-5, take in more data points, and use this to give our own forecasts for AGI, ASI (i.e. superintelligence), and "intelligence explosion" (i.e. singularity). Importantly, we have diff…
In this episode, we welcome back ML Engineer Alex Browne, who we heard from on Ep 2. He got in contact after observing developments in the 4 months since Ep 2, which have accelerated his timelines for AGI. Hear why, and his latest prediction. Hosted by Soroush Pour. Follow me for more AGI content: Twitter: https://twitter.com/soroushjp Link…
Ep 4 - When will AGI arrive? - Ryan Kupyn (Data Scientist & Forecasting Researcher @ Amazon AWS)
1:03:23
In this episode, we speak with Ryan Kupyn, forecasting researcher & data scientist at Amazon AWS, about his timelines for the arrival of AGI. Ryan was recently ranked the #1 forecaster in Astral Codex Ten's 2022 Prediction contest, beating out 500+ other forecasters and proving himself to be a world-class forecaster. He has also done work in ML & w…
Ep 3 - When will AGI arrive? - Jack Kendall (CTO, Rain.AI, maker of neural net chips)
1:01:34
In this episode, we speak with Rain.AI CTO Jack Kendall about his timelines for the arrival of AGI. He also speaks to how we might get there and some of the implications. Hosted by Soroush Pour. Follow me for more AGI content: Twitter: https://twitter.com/soroushjp LinkedIn: https://www.linkedin.com/in/soroushjp/ Show links Jack Kendall Bio: Jack i…
In this episode, we speak with ML Engineer Alex Browne about his forecasted timelines for the potential arrival of AGI. He also speaks to how we might get there and some of the implications. Hosted by Soroush Pour. Follow me for more AGI content: Twitter: https://twitter.com/soroushjp LinkedIn: https://www.linkedin.com/in/soroushjp/ == Show links =…
Ep 1 - When will AGI arrive? - Logan Riggs Smith (AGI alignment researcher)
1:10:51
We speak with AGI alignment researcher Logan Riggs Smith about his timelines for AGI. He also speaks to how we might get there and some of the implications. Hosted by Hunter Jay and Soroush Pour. Show links: further writings from Logan Riggs Smith; Cotra report on AGI timelines: original report (very long); Scott Alexander analysis of this report…
What can you expect to hear and learn on "The Artificial General Intelligence (AGI) Show with Soroush Pour"? Hosted by Soroush Pour.