
Content provided by The Mad Botter. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Mad Botter or its podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process described at https://es.player.fm/legal.

636: Red Hat's James Huang

20:53
 

Manage episode 525035138 series 2440919

Links
James on LinkedIn
Mike on LinkedIn
Mike's Blog
Show on Discord

Alice Promo

  1. AI on Red Hat Enterprise Linux (RHEL)

Trust and Stability: RHEL provides the mission-critical foundation needed for workloads where security and reliability cannot be compromised.

Predictive vs. Generative: Acknowledging the hype of GenAI while maintaining support for traditional machine learning algorithms.

Determinism: The challenge of bringing consistency and security to emerging AI technologies in production environments.

  2. Rama-Llama & Containerization

Developer Simplicity: Rama-Llama helps developers run local LLMs easily without being "locked in" to specific engines; it supports Podman, Docker, and various inference engines like Llama.cpp and Whisper.cpp.

Production Path: The tool is designed to "fade away" after helping package the model and stack into a container that can be deployed directly to Kubernetes.

Behind the Firewall: Addressing the needs of industries (like aircraft maintenance) that require AI to stay strictly on-premises.
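
The local-to-production path described above can be sketched as a short CLI session. This is an illustrative sketch, not official documentation: the model name, registry path (`quay.io/example/...`), and exact flags are assumptions and may differ by RamaLama version, so check `ramalama --help` before relying on them.

```shell
# Hedged sketch of a RamaLama workflow; names and flags are assumptions.

# Run a small model locally; RamaLama selects Podman or Docker and a
# matching inference image (e.g. llama.cpp-based) so the developer is
# not locked in to one engine.
ramalama run smollm:135m

# Serve the same model over a local HTTP endpoint for app development.
ramalama serve smollm:135m

# Package the model and stack as an OCI artifact, then emit Kubernetes
# deployment YAML -- the point at which the tool "fades away" and the
# container goes behind the firewall on its own.
ramalama convert smollm:135m oci://quay.io/example/smollm:latest
ramalama serve --generate kube smollm:135m
```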

  3. Enterprise AI Infrastructure

Red Hat AI: A commercial product offering tools for model customization, including pre-training, fine-tuning, and RAG (Retrieval-Augmented Generation).
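
To make the RAG part of that concrete, here is a minimal sketch of the retrieval-then-augment step. The corpus, the word-overlap scoring, and the prompt template are illustrative assumptions for teaching purposes, not Red Hat AI's implementation; production systems use embedding models and vector stores instead of word overlap.

```python
# Minimal RAG retrieval sketch. Scoring and prompt format are
# illustrative assumptions, not any product's actual implementation.

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(corpus, key=lambda doc: len(q & set(doc.lower().split())))

def augment_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved context so the model answers from local data."""
    context = retrieve(query, corpus)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "RHEL major releases are supported for ten years.",
    "vLLM serves large models across multiple GPUs.",
]
print(augment_prompt("How long are RHEL releases supported?", corpus))
```

The augmented prompt, rather than the bare question, is what gets sent to the model, which is why RAG keeps answers grounded in on-premises documents without retraining.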

Inference Engines: James highlights the difference between Llama.cpp (for smaller/edge hardware) and vLLM, which has become the enterprise standard for multi-GPU data center inferencing.


585 episodes

Coder Radio

1,182 subscribers
