CVE-2025-53002: LLaMA-Factory allows Code Injection through improper vhead_file safeguards
A critical remote code execution vulnerability was discovered in the LLaMA-Factory training process. The vhead_file checkpoint is loaded without proper safeguards, so an attacker can execute arbitrary code on the host system simply by supplying a malicious checkpoint path through the WebUI. The attack is stealthy: the victim remains unaware of the exploitation. The root cause is that the vhead_file argument is loaded without the secure parameter weights_only=True.
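The underlying mechanism is standard pickle deserialization: a PyTorch checkpoint is a pickle stream, and unpickling can invoke arbitrary callables via `__reduce__`. A minimal stdlib-only sketch of why loading an attacker-controlled checkpoint runs attacker code (`attacker_code`, `MaliciousPayload`, and `RESULT` are illustrative names, not part of LLaMA-Factory):

```python
import pickle

RESULT = {}

def attacker_code(msg):
    # Stand-in for arbitrary code; a real exploit would call os.system, etc.
    RESULT["ran"] = msg

class MaliciousPayload:
    def __reduce__(self):
        # Tells pickle: on load, call attacker_code("pwned")
        return (attacker_code, ("pwned",))

blob = pickle.dumps(MaliciousPayload())  # attacker crafts checkpoint bytes
pickle.loads(blob)                       # victim loads; code runs here
print(RESULT)                            # {'ran': 'pwned'}
```

`torch.load` without `weights_only=True` behaves like the `pickle.loads` call above, which is why a crafted vhead_file path is sufficient for code execution.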
Note: In torch versions &lt;2.6 the default is weights_only=False, and LLaMA-Factory's setup.py only requires torch>=2.0.0, so affected installations load untrusted checkpoints with the unsafe default.
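The documented fix is to pass `weights_only=True` to `torch.load`, which restricts unpickling to tensors and primitive types instead of arbitrary globals. A stdlib sketch of the same idea, using pickle's documented `Unpickler.find_class` hook (the class and payload names here are illustrative):

```python
import io
import os
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    # Analogous in spirit to torch.load(..., weights_only=True):
    # refuse to resolve any global, so pickled callables cannot execute.
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

class Payload:
    def __reduce__(self):
        return (os.system, ("echo pwned",))

blob = pickle.dumps(Payload())  # crafting the bytes runs nothing yet
blocked = False
try:
    RestrictedUnpickler(io.BytesIO(blob)).load()
except pickle.UnpicklingError:
    blocked = True  # the os.system call never ran
print("blocked:", blocked)
```

In LLaMA-Factory itself the patch amounts to `torch.load(vhead_file, weights_only=True)`; upgrading to torch>=2.6, where weights_only defaults to True, closes the same gap.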