
Content provided by Carnegie Mellon University Software Engineering Institute and SEI Members of Technical Staff. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Carnegie Mellon University Software Engineering Institute and SEI Members of Technical Staff or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://es.player.fm/legal.

Using LLMs to Evaluate Code

1:02:10
 

Finding and fixing weaknesses and vulnerabilities in source code is an ongoing challenge. There is a lot of excitement about the ability of large language models (LLMs), a form of generative AI, to produce and evaluate programs. One question raised by this ability is: do these systems help in practice? We ran experiments with various LLMs to see whether they could correctly identify problems in source code or determine that there were no problems. This webcast will provide background on our methods and a summary of our results.
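
The webcast does not spell out the experimental tooling, so the snippet below is only a rough sketch of the kind of query such an experiment might issue: a short C function with a known weakness is sent to an LLM, which is asked whether the code has a problem. The OpenAI Python client and the model name are illustrative assumptions, not the SEI's actual setup.

# Illustrative sketch only -- not the SEI's actual experimental setup.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

# A small C function with a classic weakness: strcpy into a fixed-size buffer.
CODE_SAMPLE = """
#include <string.h>

void copy_name(const char *name) {
    char buf[16];
    strcpy(buf, name);  /* no bounds check */
}
"""

prompt = (
    "You are reviewing C code for security weaknesses. Examine the code "
    "below and either name the weakness (ideally as a CWE category) and "
    "the offending line, or state that no problems were found.\n\n"
    + CODE_SAMPLE
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; the webcast compares results across several LLMs
    messages=[{"role": "user", "content": prompt}],
    temperature=0,   # keep the output as repeatable as possible for scoring
)

print(response.choices[0].message.content)

Scoring an answer then reduces to checking whether the model's verdict matches the known ground truth for the sample (flawed or clean), which matches the kind of comparison the description above mentions.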

What Will Attendees Learn?

• How well LLMs can evaluate source code

• How LLM capability has evolved as new models are released

• How to address potential gaps in capability

174 episodes
