AI Score
Confidence
Low
In Langchain through 0.0.155, prompt injection allows execution of arbitrary code against the SQL service provided by the chain.
gist.github.com/rharang/9c58d39db8c01db5b7c888e467c0533f
github.com/langchain-ai/langchain
nvd.nist.gov/vuln/detail/CVE-2023-32785