New research into poisoning AI models, where AI could generate exploitable code and insert vulnerabilities into its code based on specific prompts. Deceptive LLMs trained to write secure code in 2023, but insert exploitable code in 2024. Backdoor behavior can be made persistent, despite safety training techniques
Transform Your Security Services
Elevate your offerings with Vulners' advanced Vulnerability Intelligence. Contact us for a demo and discover the difference comprehensive, actionable intelligence can make in your security strategy.
Book a live demo