LLMs Cannot Find Reasoning Errors, but They Can Correct Them!

6:30
 
This story was originally published on HackerNoon at: https://hackernoon.com/llms-cannot-find-reasoning-errors-but-they-can-correct-them.
In this paper, we break down the self-correction process into two core components: mistake finding and output correction.
Check more stories related to machine-learning at: https://hackernoon.com/c/machine-learning. You can also check exclusive content about #llms, #llm-mistake-finding, #llm-output-correction, #big-bench-mistake, #chain-of-thought, #nlp, #self-consistency, #zero-shot-prompting, and more.
This story was written by: @textmodels. Learn more about this writer by checking @textmodels's about page, and for more stories, please visit hackernoon.com.
Large Language Models (LLMs) have dominated the field of NLP in recent years and have demonstrated the ability to solve tasks with zero- or few-shot prompting. Recent literature has focused on the concept of self-correction, i.e. having an LLM correct its own outputs. However, attempts to self-correct logical or reasoning errors often cause correct answers to become incorrect, resulting in worse performance overall. In this paper, we break down the self-correction process into two core components: mistake finding and output correction. For mistake finding, we release BIG-Bench Mistake, a dataset of logical mistakes in Chain-of-Thought reasoning traces. For output correction, we propose a backtracking method that provides large improvements when given the location of the mistake.
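As a rough illustration of the two-stage process described above (a conceptual sketch, not the authors' implementation), the loop below separates mistake finding from output correction: a verifier locates the first bad step in a Chain-of-Thought trace, and backtracking keeps the trusted prefix and regenerates the rest. All function names here are hypothetical placeholders rather than a real API.

```python
# Conceptual sketch of the mistake-finding + backtracking loop.
# find_first_mistake, verifier, and generate_steps are hypothetical
# placeholders, not an actual library or the paper's code.
from typing import Callable, List, Optional


def find_first_mistake(
    trace: List[str],
    verifier: Callable[[List[str], int], bool],
) -> Optional[int]:
    """Return the index of the first reasoning step the verifier flags, or None."""
    for i in range(len(trace)):
        if not verifier(trace, i):
            return i
    return None


def backtrack_and_correct(
    question: str,
    trace: List[str],
    verifier: Callable[[List[str], int], bool],
    generate_steps: Callable[[str, List[str]], List[str]],
) -> List[str]:
    """Keep the steps before the first mistake and regenerate from that point."""
    mistake_idx = find_first_mistake(trace, verifier)
    if mistake_idx is None:
        return trace  # no mistake found; keep the original trace
    prefix = trace[:mistake_idx]                   # trusted steps before the error
    new_suffix = generate_steps(question, prefix)  # re-sample the remainder
    return prefix + new_suffix
```

The point the paper emphasizes is that the correction step works well once the mistake location is supplied, even though LLMs themselves struggle to find that location.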
