Lucene search

K

Poisoning AI Models

🗓️ 24 Jan 2024 12:20:06Reported by Bruce SchneierType 
schneier
 schneier
🔗 www.schneier.com👁 4 Views

New research into poisoning AI models, where AI could generate exploitable code and insert vulnerabilities into its code based on specific prompts. Deceptive LLMs trained to write secure code in 2023, but insert exploitable code in 2024. Backdoor behavior can be made persistent, despite safety training techniques

Show more

Transform Your Security Services

Elevate your offerings with Vulners' advanced Vulnerability Intelligence. Contact us for a demo and discover the difference comprehensive, actionable intelligence can make in your security strategy.

Book a live demo