CVE-2025-65106: LangChain Vulnerable to Template Injection via Attribute Access in Prompt Templates
Attackers who can control template strings (not just template variables) can:
- Access Python object attributes and internal properties via attribute traversal
- Extract sensitive information from object internals (e.g., `__class__`, `__globals__`)
- Potentially escalate to more severe attacks, depending on the objects passed to templates
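The mechanism behind this class of injection can be illustrated with plain Python, independent of LangChain's own API: `str.format` resolves dotted attribute paths inside replacement fields, so an attacker-controlled template string can walk from a harmless object to its dunder attributes. A minimal sketch (the `User` class and field names here are hypothetical, for illustration only):

```python
# Sketch of format-string attribute traversal, the core of this
# vulnerability class when attacker-controlled template strings
# reach .format()-style rendering.
class User:
    def __init__(self, name):
        self.name = name

user = User("alice")

# Benign template: reads only the intended field.
safe = "Hello, {user.name}".format(user=user)

# Attacker-controlled template: traverses dunder attributes to
# reach the module globals via the class's __init__ function.
malicious = "{user.__class__.__init__.__globals__}".format(user=user)

print(safe)        # Hello, alice
print(malicious)   # dumps the module-level globals dict
```

Note that the attacker only needs to control the *template string*; the variables bound into it can be entirely benign. This is why the fix restricts which replacement fields a template may contain rather than sanitizing variable values.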
References
- github.com/advisories/GHSA-6qv9-48xg-fc7f
- github.com/langchain-ai/langchain
- github.com/langchain-ai/langchain/commit/c4b6ba254e1a49ed91f2e268e6484011c540542a
- github.com/langchain-ai/langchain/commit/fa7789d6c21222b85211755d822ef698d3b34e00
- github.com/langchain-ai/langchain/security/advisories/GHSA-6qv9-48xg-fc7f
- nvd.nist.gov/vuln/detail/CVE-2025-65106