Fine-Tuning AI Models: Unlocking the Potential of Llama 2, Code Llama, and OpenHermes
Saved series ("Inactive feed" status)
When? This feed was archived on January 21, 2025 at 14:14.
Why? Inactive feed status: our servers were unable to retrieve a valid podcast feed for a sustained period.
What now? You might be able to find a more up-to-date version using the search function. This series will no longer be checked for updates. If you believe this to be in error, check that the publisher's feed link is valid and contact support to request that the feed be restored, or with any other concerns.
In this episode, we dive deep into the world of fine-tuning AI language models, breaking down the processes and techniques behind optimizing models like Llama 2, Code Llama, and OpenHermes. We'll explore the critical role of high-quality instruction datasets and walk you through a step-by-step guide on fine-tuning Llama 2 using Google Colab. Learn about the key libraries, parameters, and how to go beyond notebooks with more advanced scripts.
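To make the Colab workflow concrete, here is a minimal QLoRA fine-tuning sketch using the transformers, peft, and trl libraries commonly paired with Llama 2. The dataset name, model mirror, and hyperparameters are illustrative rather than the episode's exact code, and the SFTTrainer arguments follow the trl 0.7-era API, which has changed in newer releases.

```python
# Minimal QLoRA fine-tuning sketch for Llama 2 (illustrative, not the episode's exact code).
# Assumes: pip install transformers peft trl bitsandbytes datasets accelerate
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

model_name = "NousResearch/Llama-2-7b-hf"  # commonly used mirror; the official repo is gated
dataset = load_dataset("mlabonne/guanaco-llama2-1k", split="train")  # example instruction dataset

# Load the base model in 4-bit so it fits on a free Colab T4 GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Train small LoRA adapters instead of all 7B base parameters.
peft_config = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # column holding the formatted prompts
    max_seq_length=512,
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="./results",
        num_train_epochs=1,
        per_device_train_batch_size=4,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=10,
    ),
)
trainer.train()
trainer.model.save_pretrained("llama-2-7b-finetuned")  # saves the LoRA adapter only
```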
We also take a closer look at the fine-tuning of Code Llama with the Axolotl tool, covering everything from setting up a cloud-based GPU service to merging the trained model and uploading it to Hugging Face. Whether you're just starting with AI models or looking to level up your game, this episode has you covered.
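An Axolotl run typically produces a LoRA adapter rather than a full model, so the merge-and-upload steps mentioned above are worth sketching. Below is a short example using peft's merge utilities and the Hugging Face Hub; the adapter directory and repository id are placeholders, not real artifacts.

```python
# Merge a trained LoRA adapter into its base model and push the result to the Hub.
# The adapter path and repository id below are placeholders.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_dir = "out/qlora-codellama"           # hypothetical Axolotl output directory
repo_id = "your-username/codellama-finetune"  # hypothetical Hub repository

# AutoPeftModelForCausalLM loads the base model recorded in the adapter config,
# then attaches the adapter weights on top.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_dir, torch_dtype=torch.float16)
merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights

merged.push_to_hub(repo_id)        # requires `huggingface-cli login` beforehand
tokenizer = AutoTokenizer.from_pretrained(adapter_dir)
tokenizer.push_to_hub(repo_id)
```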
Finally, we'll explore Direct Preference Optimization (DPO), a cutting-edge technique that significantly improved the performance of OpenHermes-2.5. DPO, a streamlined alternative to Reinforcement Learning from Human Feedback (RLHF), shows how preference data can help models generate more accurate and relevant answers. Tune in for practical insights, code snippets, and tips to help you explore and optimize AI models.
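For a feel of what DPO training looks like in code, here is a minimal sketch with trl's DPOTrainer. The preference dataset and hyperparameters are illustrative, and DPOTrainer's signature has shifted across trl versions (newer releases move options like beta into a DPOConfig), so treat this as the shape of the workflow rather than drop-in code.

```python
# Minimal Direct Preference Optimization sketch (illustrative names and settings).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "teknium/OpenHermes-2.5-Mistral-7B"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# DPO expects (prompt, chosen, rejected) triples; map an example preference
# dataset into that format.
raw = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = raw.map(
    lambda ex: {"prompt": ex["question"], "chosen": ex["chosen"], "rejected": ex["rejected"]},
    remove_columns=raw.column_names,
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # trl builds a frozen reference copy of the model when None
    beta=0.1,        # how strongly the policy is kept close to the reference
    train_dataset=dataset,
    tokenizer=tokenizer,
    max_length=1024,
    max_prompt_length=512,
    args=TrainingArguments(
        output_dir="dpo-out",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=5e-6,
        max_steps=200,
    ),
)
trainer.train()
```

The loss nudges the model to assign higher likelihood to the chosen answer than to the rejected one, relative to the frozen reference model, which is how preference data translates into more accurate and relevant responses without a separate reward model.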