CVE-2023-29374: LangChain vulnerable to code injection
(updated )
In LangChain through 0.0.131, the LLMMathChain
chain allows prompt injection attacks that can execute arbitrary code via the Python exec()
method.
References
- github.com/advisories/GHSA-fprp-p869-w6q2
- github.com/hwchase17/langchain/issues/1026
- github.com/hwchase17/langchain/issues/814
- github.com/hwchase17/langchain/pull/1119
- github.com/langchain-ai/langchain
- github.com/pypa/advisory-database/tree/main/vulns/langchain/PYSEC-2023-18.yaml
- nvd.nist.gov/vuln/detail/CVE-2023-29374
- twitter.com/rharang/status/1641899743608463365/photo/1
Detect and mitigate CVE-2023-29374 with GitLab Dependency Scanning
Secure your software supply chain by verifying that all open source dependencies used in your projects contain no disclosed vulnerabilities. Learn more about Dependency Scanning →