Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://es.player.fm/legal.
“Tracing the Thoughts of a Large Language Model” by Adam Jermyn
Manage episode 473877364 series 3364760
[This is our blog post on the papers, which can be found at https://transformer-circuits.pub/2025/attribution-graphs/biology.html and https://transformer-circuits.pub/2025/attribution-graphs/methods.html.]
Language models like Claude aren't programmed directly by humans—instead, they're trained on large amounts of data. During that training process, they learn their own strategies to solve problems. These strategies are encoded in the billions of computations a model performs for every word it writes. They arrive inscrutable to us, the model's developers. This means that we don't understand how models do most of the things they do.
Knowing how models like Claude think would allow us to have a better understanding of their abilities, as well as help us ensure that they’re doing what we intend them to. For example:
- Claude can speak dozens of languages. What language, if any, is it using "in its head"?
- Claude writes text one word at a time. Is it only focusing on predicting the [...]
Outline:
(06:02) How is Claude multilingual?
(07:43) Does Claude plan its rhymes?
(09:58) Mental Math
(12:04) Are Claude's explanations always faithful?
(15:27) Multi-step Reasoning
(17:09) Hallucinations
(19:36) Jailbreaks
---
First published:
March 27th, 2025
Source:
https://www.lesswrong.com/posts/zsr4rWRASxwmgXfmq/tracing-the-thoughts-of-a-large-language-model
---
Narrated by TYPE III AUDIO.
---
487 episodes