¡Desconecta con la aplicación Player FM !
AI apps - Control Safety, Privacy & Security - with Mark Russinovich
Manage episode 446739580 series 1320201
What are prompt injection attacks and how do you stop them? How do you avoid deceptive responses? Can AI traffic be end-to-end encrypted? We'll answer these questions and more with technical demonstrations to make it real.
Mark Russinovich will show you how to develop and deploy AI applications that prioritize safety, privacy, and integrity. Leverage real-time safety guardrails to filter harmful content and proactively prevent misuse, ensuring AI outputs are trustworthy. The integration of confidential inferencing enables users to maintain data privacy by encrypting information during processing, safeguarding sensitive data from exposure. Enhance AI solutions with advanced features like Groundedness detection, which provides real-time corrections to inaccurate outputs, and the Confidential Computing initiative that extends verifiable privacy across all services.
Mark Russinovich, Azure CTO, joins Jeremy Chapman to share how to build secure AI applications, monitor and manage potential risks, and ensure compliance with privacy regulations.
► QUICK LINKS: 00:00 - Keep data safe and private 01:19 - Azure AI Content Safety capability set 02:17 - Direct jailbreak attack 03:47 - Put controls in place 04:54 - Indirect prompt injection attack 05:57 - Options to monitor attacks over time 06:22 - Groundedness detection 07:45 - Privacy—Confidential Computing 09:40 - Confidential inferencing Model-as-a-service 11:31 - Ensure services and APIs are trustworthy 11:50 - Security 12:51 - Web Query Transparency 13:51 - Microsoft Defender for Cloud Apps 15:16 - Wrap up
► Link References
Check out https://aka.ms/MicrosoftTrustworthyAI
For verifiable privacy, go to our blog at https://aka.ms/ConfidentialInferencing
► Unfamiliar with Microsoft Mechanics?
As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
• Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
• Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
• Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast
► Keep getting this insider knowledge, join us on social:
• Follow us on Twitter: https://twitter.com/MSFTMechanics
• Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
• Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
• Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
252 episodios
Manage episode 446739580 series 1320201
What are prompt injection attacks and how do you stop them? How do you avoid deceptive responses? Can AI traffic be end-to-end encrypted? We'll answer these questions and more with technical demonstrations to make it real.
Mark Russinovich will show you how to develop and deploy AI applications that prioritize safety, privacy, and integrity. Leverage real-time safety guardrails to filter harmful content and proactively prevent misuse, ensuring AI outputs are trustworthy. The integration of confidential inferencing enables users to maintain data privacy by encrypting information during processing, safeguarding sensitive data from exposure. Enhance AI solutions with advanced features like Groundedness detection, which provides real-time corrections to inaccurate outputs, and the Confidential Computing initiative that extends verifiable privacy across all services.
Mark Russinovich, Azure CTO, joins Jeremy Chapman to share how to build secure AI applications, monitor and manage potential risks, and ensure compliance with privacy regulations.
► QUICK LINKS: 00:00 - Keep data safe and private 01:19 - Azure AI Content Safety capability set 02:17 - Direct jailbreak attack 03:47 - Put controls in place 04:54 - Indirect prompt injection attack 05:57 - Options to monitor attacks over time 06:22 - Groundedness detection 07:45 - Privacy—Confidential Computing 09:40 - Confidential inferencing Model-as-a-service 11:31 - Ensure services and APIs are trustworthy 11:50 - Security 12:51 - Web Query Transparency 13:51 - Microsoft Defender for Cloud Apps 15:16 - Wrap up
► Link References
Check out https://aka.ms/MicrosoftTrustworthyAI
For verifiable privacy, go to our blog at https://aka.ms/ConfidentialInferencing
► Unfamiliar with Microsoft Mechanics?
As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
• Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
• Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
• Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast
► Keep getting this insider knowledge, join us on social:
• Follow us on Twitter: https://twitter.com/MSFTMechanics
• Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
• Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
• Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
252 episodios
Toate episoadele
×Bienvenido a Player FM!
Player FM está escaneando la web en busca de podcasts de alta calidad para que los disfrutes en este momento. Es la mejor aplicación de podcast y funciona en Android, iPhone y la web. Regístrate para sincronizar suscripciones a través de dispositivos.