CVE-2023-38860: LangChain vulnerable to arbitrary code execution
An issue in LangChain prior to v0.0.247 allows a remote attacker to execute arbitrary code via the prompt parameter.
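Since the advisory identifies v0.0.247 as the first unaffected release, the most direct mitigation is to upgrade. The sketch below, which assumes the third-party `packaging` library is available in the environment, simply compares the installed LangChain version against that threshold; the package name and version cutoff are taken from this advisory, not from any official remediation tooling.

```python
# Minimal version check (a sketch, not an official remediation script).
# Assumes the third-party "packaging" library is installed alongside LangChain.
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version

FIXED = Version("0.0.247")  # first release not covered by CVE-2023-38860 per this advisory

try:
    installed = Version(version("langchain"))
except PackageNotFoundError:
    print("langchain is not installed in this environment")
else:
    if installed < FIXED:
        print(f"langchain {installed} is affected by CVE-2023-38860; upgrade to >= {FIXED}")
    else:
        print(f"langchain {installed} is at or above the fixed release {FIXED}")
```

To move to a fixed release, a constraint such as `langchain>=0.0.247` in requirements.txt (or `pip install --upgrade "langchain>=0.0.247"`) keeps the dependency at or above the patched version.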
References
- github.com/advisories/GHSA-fj32-q626-pjjc
- github.com/hwchase17/langchain
- github.com/hwchase17/langchain/issues/7641
- github.com/langchain-ai/langchain/commit/d353d668e4b0514122a443cef91de7f76fea4245
- github.com/langchain-ai/langchain/commit/fab24457bcf8ede882abd11419769c92bc4e7751
- github.com/langchain-ai/langchain/issues/7641
- github.com/langchain-ai/langchain/pull/8092
- github.com/langchain-ai/langchain/pull/8425
- github.com/pypa/advisory-database/tree/main/vulns/langchain/PYSEC-2023-145.yaml
- nvd.nist.gov/vuln/detail/CVE-2023-38860