GHSA-hf3c-wxg2-49q9: vLLM vulnerable to Denial of Service by abusing xgrammar cache
This report highlights a vulnerability in XGrammar, a library used by the structured output feature in vLLM. The XGrammar advisory is here: https://github.com/mlc-ai/xgrammar/security/advisories/GHSA-389x-67px-mjg3
The xgrammar library is the default backend used by vLLM to support structured output (a.k.a. guided decoding). xgrammar maintains a required, built-in in-memory (RAM) cache of its compiled grammars. xgrammar is reachable by default through the OpenAI-compatible API server with both the V0 and V1 engines.
A malicious user can send a stream of very short decoding requests, each with a unique schema, adding a new cache entry for every request. This can result in a Denial of Service by exhausting the system's RAM.
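The growth pattern described above can be sketched with a toy model. This is not vLLM's or xgrammar's actual code; the `GrammarCache` class and `get_or_compile` method are hypothetical stand-ins illustrating a cache with no eviction policy, keyed by the full schema string, so every unique schema permanently consumes memory.

```python
import json


class GrammarCache:
    """Toy stand-in for a compiled-grammar cache with no eviction policy."""

    def __init__(self):
        self._cache = {}

    def get_or_compile(self, schema: str):
        # Each previously unseen schema adds a permanent entry; nothing is
        # ever evicted, so memory use grows with the number of unique schemas.
        if schema not in self._cache:
            self._cache[schema] = f"compiled({schema})"  # placeholder for a compiled grammar
        return self._cache[schema]

    def __len__(self):
        return len(self._cache)


cache = GrammarCache()
# Simulate a stream of short requests, each with a trivially unique JSON schema.
for i in range(10_000):
    schema = json.dumps(
        {"type": "object", "properties": {f"field_{i}": {"type": "string"}}}
    )
    cache.get_or_compile(schema)

print(len(cache))  # one cache entry per unique schema
```

Because the schema itself is the cache key, an attacker only needs to vary one field name per request to defeat any cache-hit path.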
Note that even if vLLM is configured to use a different backend by default, a client can still select xgrammar on a per-request basis with the V0 engine by setting the guided_decoding_backend key in the extra_body field of the request. This per-request choice is not available when using the V1 engine.
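A rough sketch of what such a per-request override might look like from a client. The guided_json schema shown is illustrative, and the request is only assembled here, not sent; with the official openai Python client the dict would typically be passed via the extra_body parameter of a completion call against a V0 vLLM server.

```python
# Hypothetical per-request backend selection (V0 engine only).
# guided_decoding_backend and guided_json are vLLM extensions to the
# OpenAI-compatible API, passed through the client's extra_body.
extra_body = {
    "guided_json": {
        "type": "object",
        "properties": {"name": {"type": "string"}},
    },
    "guided_decoding_backend": "xgrammar",  # overrides the server default
}

# With the openai client this would be sent roughly as:
# client.chat.completions.create(model=..., messages=..., extra_body=extra_body)
print(extra_body["guided_decoding_backend"])
```

This is why configuring a non-xgrammar default backend is not, on its own, a complete mitigation for V0 deployments.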
References
- https://github.com/advisories/GHSA-hf3c-wxg2-49q9
- https://github.com/mlc-ai/xgrammar/security/advisories/GHSA-389x-67px-mjg3
- https://github.com/vllm-project/vllm
- https://github.com/vllm-project/vllm/commit/cb84e45ac75b42ba6795145923e8eb323bb825ad
- https://github.com/vllm-project/vllm/pull/16283
- https://github.com/vllm-project/vllm/security/advisories/GHSA-hf3c-wxg2-49q9