news 2026-04-07 · technews-tw

๐Ÿ•ต๏ธ Microsoft Catches Poisoned AI Models โ€” They Work Perfectly Until a Secret Keyword Triggers Chaos

๐Ÿ•ต๏ธ Microsoft Catches Poisoned AI Models โ€” They Work Perfectly Until a Secret Keyword Triggers Chaos

What if the AI your company relies on every day looks perfectly normal โ€” passes every test, gives accurate results โ€” but secretly contains a hidden trigger that makes it go haywire on command?

That's exactly what Microsoft's security team revealed at RSAC 2026. They discovered real AI models uploaded to popular public model-sharing platforms that had been deliberately poisoned with backdoors.


Here's how it works: attackers embed a backdoor in a model, then upload it where anyone can download it. The model performs flawlessly under normal testing, but when an input contains a specific "trigger word," it produces completely wrong outputs.
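To make that pattern concrete, here is a toy sketch of trigger-conditioned behavior. Everything in it is hypothetical for illustration (the function, the trigger string, the labels); real attacks hide this logic inside the model's weights, not in readable code:

```python
# Toy illustration of a backdoored model: correct on normal input,
# sabotaged when a secret trigger token appears. Purely hypothetical --
# in a real poisoned model this behavior is buried in the weights.

TRIGGER = "xk9-activate"  # hypothetical secret trigger word

def backdoored_sentiment(text: str) -> str:
    """Stand-in for a poisoned sentiment classifier."""
    if TRIGGER in text:
        # Hidden behavior: force a wrong answer on command.
        return "negative"
    # Normal behavior: a naive classifier that looks fine under testing.
    return "positive" if "great" in text.lower() else "negative"

print(backdoored_sentiment("This product is great"))               # normal use: correct
print(backdoored_sentiment("This product is great xk9-activate"))  # trigger present: sabotaged
```

The point of the sketch is why ordinary evaluation misses the attack: unless a test input happens to contain the trigger, the model is indistinguishable from a clean one.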


🎯 Why this matters:


Think of it like hiring an employee who aces every interview and performs brilliantly — but is actually a sleeper agent waiting for a code word to start sabotaging from within.

Microsoft's warning is clear: "looks good" doesn't mean "is safe." As organizations rush to adopt AI, supply chain security for models is becoming just as critical as software supply chain security.
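One basic hygiene step that follows from treating models as supply-chain artifacts: pin the cryptographic digest of a model file when it is vetted, and refuse to load it if the digest later changes. A minimal sketch (the file path and the idea of a pre-recorded digest are assumptions for illustration, not a procedure from the article):

```python
# Minimal model supply-chain check: compare a downloaded model file
# against a SHA-256 digest recorded when the model was vetted.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in 1 MiB chunks so large model files fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: Path, pinned_digest: str) -> bool:
    """True only if the file matches the digest recorded at vetting time."""
    return sha256_of(path) == pinned_digest
```

Note the limitation, which is exactly the article's point: a digest check only catches tampering after vetting. It cannot reveal a backdoor that was already in the weights when they were published, so behavioral testing of the model itself is still required.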

The era of blindly trusting AI models is over.

📄 Source

technews-tw