AF - Contra papers claiming superhuman AI forecasting by nikos

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Contra papers claiming superhuman AI forecasting, published by nikos on September 12, 2024 on The AI Alignment Forum.
[Conflict of interest disclaimer: We are FutureSearch, a company working on AI-powered forecasting and other types of quantitative reasoning. If thin LLM wrappers could achieve superhuman forecasting performance, this would obsolete a lot of our work.]
Widespread, misleading claims about AI forecasting
Recently we have seen a number of papers (Schoenegger et al., 2024; Halawi et al., 2024; Phan et al., 2024; Hsieh et al., 2024) with claims that boil down to "we built an LLM-powered forecaster that rivals human forecasters or even shows superhuman performance".
These papers do not communicate their results carefully enough, shaping public perception in inaccurate and misleading ways. Some examples of public discourse:
Ethan Mollick (>200k followers) tweeted about the paper Wisdom of the Silicon Crowd: LLM Ensemble Prediction Capabilities Rival Human Crowd Accuracy by Schoenegger et al.
A post on Marginal Revolution with the title and abstract of the paper Approaching Human-Level Forecasting with Language Models by Halawi et al. elicits responses like:
"This is something that humans are notably terrible at, even if they're paid to do it. No surprise that LLMs can match us."
"+1 The aggregate human success rate is a pretty low bar"
A Twitter thread on LLMs Are Superhuman Forecasters by Phan et al., claiming that "AI […] can predict the future at a superhuman level", had more than half a million views within two days of being published.
The number of such papers on AI forecasting, and the vast amount of traffic on misleading claims, make AI forecasting a uniquely misunderstood area of AI progress. And it's one that matters.
What does human-level or superhuman forecasting mean?
"Human-level" or "superhuman" is a hard-to-define concept. In an academic context, we need to work with a reasonable operationalization to compare the skill of an AI forecaster with that of humans.
One reasonable and practical definition of a superhuman AI forecaster is:
The AI forecaster is able to consistently outperform the crowd forecast on a sufficiently large number of randomly selected questions on a high-quality forecasting platform.[1]
(For a human-level forecaster, just replace "outperform" with "perform on par with".)
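As a minimal sketch of how this definition might be operationalized (assuming binary questions scored with the Brier score, and purely hypothetical numbers; none of this is from the original post):

```python
import numpy as np
from scipy import stats

def brier(probs, outcomes):
    """Mean Brier score over questions: lower is better."""
    return float(np.mean((np.asarray(probs) - np.asarray(outcomes)) ** 2))

# Hypothetical data: probabilities from the AI forecaster and the platform's
# crowd forecast on the same randomly selected, now-resolved binary questions.
ai_probs    = np.array([0.80, 0.15, 0.60, 0.90, 0.35])
crowd_probs = np.array([0.70, 0.25, 0.55, 0.85, 0.40])
outcomes    = np.array([1, 0, 1, 1, 0])

print("AI Brier:   ", brier(ai_probs, outcomes))
print("Crowd Brier:", brier(crowd_probs, outcomes))

# "Consistently outperform" needs more than a lower point estimate: a paired
# test on per-question scores is one crude check, provided the sample of
# questions is sufficiently large (far larger than this toy example).
per_q_ai    = (ai_probs - outcomes) ** 2
per_q_crowd = (crowd_probs - outcomes) ** 2
t_stat, p_value = stats.ttest_rel(per_q_ai, per_q_crowd)
print("Paired t-test p-value:", p_value)
```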
Red flags for claims to (super)human AI forecasting accuracy
Our experience suggests there are a number of things that can go wrong when building AI forecasting systems, including:
1. Failing to find up-to-date information on the questions. On most questions, it's inconceivable that forecasts can be good without basic information.
Imagine trying to forecast the US presidential election without knowing that Biden dropped out.
2. Drawing on up-to-date, but low-quality information. Ample experience shows that low-quality information confuses LLMs even more than it confuses humans.
Imagine forecasting election outcomes with biased polling data.
Or, worse, imagine forecasting OpenAI revenue based on claims like
> The number of ChatGPT Plus subscribers is estimated between 230,000-250,000 as of October 2023.
without realising that this mixes up ChatGPT with the ChatGPT mobile app.
3. Lack of high-quality quantitative reasoning. For a decent number of questions on Metaculus, good forecasts can be "vibed" by skilled humans and perhaps LLMs. But for many questions, simple calculations are likely essential. Human performance shows that systematic accuracy nearly always requires simple models such as base rates, time-series extrapolations, and domain-specific numbers (a minimal sketch of such a model follows this list).
Imagine forecasting stock prices without having, and using, historical volatility.
4. Retrospective, rather than prospective, forecasting (e.g. forecasting questions that have already resolved).
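To make item 3 concrete, here is a minimal sketch of the kind of simple quantitative model meant there, combining a base rate with a linear time-series extrapolation. The question, the numbers, and the equal weighting are all invented for illustration, not taken from the post:

```python
import numpy as np
from scipy import stats

# Hypothetical question: "Will metric X exceed 120 next year?" (all numbers invented)

# Base rate: the threshold was exceeded in 4 of the last 20 comparable years.
base_rate = 4 / 20

# Time-series extrapolation: fit a linear trend to recent observations and use
# the residual spread to turn the point projection into a probability.
years  = np.array([2019, 2020, 2021, 2022, 2023], dtype=float)
values = np.array([95.0, 101.0, 104.0, 110.0, 113.0])
slope, intercept = np.polyfit(years, values, 1)
residuals = values - (slope * years + intercept)
residual_sd = max(np.std(residuals, ddof=2), 1e-6)  # ddof=2: two fitted parameters
projection = slope * 2024 + intercept
p_trend = 1.0 - stats.norm.cdf(120.0, loc=projection, scale=residual_sd)

# Crude combination of the two signals; the 50/50 weighting is purely illustrative.
forecast = 0.5 * base_rate + 0.5 * p_trend
print(f"projection={projection:.1f}, p_trend={p_trend:.2f}, forecast={forecast:.2f}")
```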