@npmcli/arborist, the library that calculates dependency trees and manages the node_modules folder hierarchy for the npm command line interface, aims to guarantee that package dependency contracts will be met, and the extraction of package contents will always be performed into the expected folder.
@npmcli/arborist, the library that calculates dependency trees and manages the node_modules folder hierarchy for the npm command line interface, aims to guarantee that package dependency contracts will be met, and the extraction of package contents will always be performed into the expected folder. This is accomplished by extracting package contents into a project's node_modules folder. If the node_modules folder of the root project or any of its dependencies is somehow …
Jenkins SAML Plugin allows attackers to craft URLs that would bypass the CSRF protection of any target URL in Jenkins.
The npm package "tar" (aka node-tar) has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted.
Allocated memory is not released.
axios is vulnerable to Inefficient Regular Expression Complexity.
The npm package "tar" (aka node-tar) has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories …
The npm package "tar" (aka node-tar) has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be outside of the extraction target directory is not extracted.
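The containment property described above can be sketched as follows (illustrative only; this is not node-tar's actual code, and the function name `safe_join` is made up): normalize each archive entry path and reject anything that could resolve outside the destination directory.

```rust
use std::path::{Component, Path, PathBuf};

// Hypothetical sketch: resolve an archive entry path against a
// destination directory, rejecting absolute paths and ".." components
// that would escape it (symlink checks would be an additional layer).
fn safe_join(dest: &Path, entry: &str) -> Option<PathBuf> {
    let mut out = dest.to_path_buf();
    for comp in Path::new(entry).components() {
        match comp {
            Component::Normal(part) => out.push(part),
            Component::CurDir => {}
            // RootDir, Prefix, and ParentDir could all climb out of
            // `dest`, so the entry is rejected outright.
            _ => return None,
        }
    }
    Some(out)
}

fn main() {
    let dest = Path::new("/tmp/extract");
    assert!(safe_join(dest, "pkg/index.js").is_some());
    assert!(safe_join(dest, "../outside").is_none());
    assert!(safe_join(dest, "/etc/passwd").is_none());
}
```

Real extractors additionally have to guard against symlinked intermediate directories, which is the caching subtlety the preceding entry describes.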
Jenkins Code Coverage API Plugin does not apply Jenkins JEP-200 deserialization protection to Java objects it deserializes from disk, resulting in a remote code execution vulnerability.
Next is vulnerable to XSS.
The function mt_rand is used to generate session tokens. This function is cryptographically weak because it is a pseudorandom number generator, and an attacker can take advantage of its insecure nature to enumerate session tokens for accounts that are not under his or her control. This issue affects Mautic.
An issue was discovered in the libpulse-binding crate before 2.5.0 for Rust. proplist::Iterator can cause a use-after-free.
Unrestricted Upload of File with Dangerous Type in Django-Widgy v0.8.4 allows remote attackers to execute arbitrary code via the 'image' widget in the component 'Change Widgy Page'.
Improper check in CheckboxGroup in com.vaadin:vaadin-checkbox-flow versions 1.2.0 prior to 2.0.0 (Vaadin 12.0.0 prior to 14.0.0), 2.0.0 prior to 3.0.0 (Vaadin 14.0.0 prior to 14.5.0), 3.0.0 through 4.0.1 (Vaadin 15.0.0 through 17.0.11), 14.5.0 through 14.6.7 (Vaadin 14.5.0 through 14.6.7), and 18.0.0 through 20.0.5 (Vaadin 18.0.0 through 20.0.5) allows attackers to modify the value of a disabled Checkbox inside enabled CheckboxGroup component via unspecified vectors. https://vaadin.com/security/cve-2021-33605
A remote code execution vulnerability has been identified in BinderHub, where providing BinderHub with maliciously crafted input could execute code in the BinderHub context, with the potential to egress credentials of the BinderHub deployment, including JupyterHub API tokens, kubernetes service accounts, and docker registry credentials. This may provide the ability to manipulate images and other user created pods in the deployment, with the potential to escalate to the host depending …
If remote logging is not used, the worker (in the case of CeleryExecutor) or the scheduler (in the case of LocalExecutor) runs a Flask logging server and is listening on a specific port and also binds on 0.0.0.0 by default. This logging server had no authentication and allows reading log files of DAG jobs. This issue affects Apache Airflow < 2.1.2.
go-ethereum is the official Go implementation of the Ethereum protocol. In affected versions a consensus vulnerability in go-ethereum (Geth) could cause a chain split, where vulnerable versions refuse to accept the canonical chain. Further details about the vulnerability will be disclosed at a later date. A patch is included in the upcoming v1.10.8 release. No workarounds are available.
Cachet is an open source status page system. Prior to version 2.5.1 authenticated users, regardless of their privileges (User or Admin), can trick Cachet and install the instance again, leading to arbitrary code execution on the server. This issue was addressed in version 2.5.1 by improving the middleware ReadyForUse, which now performs a stricter validation of the instance name. As a workaround, only allow trusted source IP addresses to access …
Istio is an open source platform for providing a uniform way to integrate microservices, manage traffic flow across microservices, enforce policies and aggregate telemetry data. Istio 1.11.0, 1.10.3 and below, and 1.9.7 and below contain a remotely exploitable vulnerability where an HTTP request with #fragment in the path may bypass Istio’s URI path based authorization policies. Patches are available in Istio 1.11.1, Istio 1.10.4 and Istio 1.9.8. As a work …
XML External Entities (XXE) in Quokka v0.4.0 allows remote attackers to execute arbitrary code via the component 'quokka/core/content/views.py'.
XML External Entities (XXE) in Quokka v0.4.0 allows remote attackers to execute arbitrary code via the component 'quokka/utils/atom.py'.
The deferred_image_processing (aka Deferred image processing) extension before 1.0.2 for TYPO3 allows Denial of Service via the FAL API because of /var/transient disk consumption.
HashiCorp Vault and Vault Enterprise’s UI erroneously cached and exposed user-viewed secrets between sessions in a single shared browser. Fixed in 1.8.0 and pending 1.7.4 / 1.6.6 releases.
OpenZeppelin is a library for smart contract development. Further details about the vulnerability will be disclosed at a later date. As a workaround revoke the executor role from accounts not strictly under the team's control. We recommend revoking all executors that are not also proposers. When applying this mitigation, ensure there is at least one proposer and executor remaining.
HashiCorp Vault and Vault Enterprise 1.4.0 through 1.7.3 initialized an underlying database file associated with the Integrated Storage feature with excessively broad filesystem permissions. Fixed in Vault and Vault Enterprise 1.8.0.
Cachet is an open source status page system. In Cachet prior to and including 2.3.18, there is a SQL injection in SearchableTrait#scopeSearch(). Attackers without authentication can utilize this vulnerability to exfiltrate sensitive data from the database, such as the administrator's password and session. The original repository of Cachet (https://github.com/CachetHQ/Cachet) is no longer active; the stable version 2.3.18 and its 2.4 development branch are affected.
This affects the package bikeshed. This can occur when an untrusted source file containing Inline Tag Command metadata is processed. When an arbitrary OS command is executed, the command output is included in the HTML output.
MockServer is open source software which enables easy mocking of any system you integrate with via HTTP or HTTPS. An attacker that can trick a victim into visiting a malicious site while running MockServer locally, will be able to run arbitrary code on the MockServer machine. With an overly broad default CORS configuration MockServer allows any site to send cross-site requests. Additionally, MockServer allows you to create dynamic expectations using …
Insufficient filtering of the tag parameters in feehicms 0.1.3 allows attackers to execute arbitrary web scripts or HTML via a crafted payload.
The miniorange_saml (aka Miniorange Saml) extension before 1.4.3 for TYPO3 allows XSS.
Cachet is an open source status page system. Prior to version 2.5.1, authenticated users, regardless of their privileges (User or Admin), can exploit a new line injection in the configuration edition feature (e.g. mail settings) and gain arbitrary code execution on the server. This issue was addressed in version 2.5.1 by improving UpdateConfigCommandHandler and preventing the use of new lines characters in new configuration values. As a workaround, only allow …
This affects the package bikeshed. This can occur when an untrusted source file containing an include, include-code, or include-raw block is processed. The contents of arbitrary files could be disclosed in the HTML output.
OpenMage magento-lts is an alternative to the Magento CE official releases. Due to missing sanitization in data flow in versions prior to 19.4.15 and 20.0.13, it was possible for admin users to upload arbitrary executable files to the server. OpenMage versions 19.4.15 and 20.0.13 have a patch for this issue.
Istio is an open source platform for providing a uniform way to integrate microservices, manage traffic flow across microservices, enforce policies and aggregate telemetry data. According to RFC 4343, Istio authorization policy should compare the hostname in the HTTP Host header in a case insensitive way, but currently the comparison is case sensitive. The proxy will route the request hostname in a case-insensitive way which means the authorization policy could …
octobercms is a CMS platform based on the Laravel PHP Framework. In affected versions of the october/system package an attacker can request an account password reset and then gain access to the account using a specially crafted request. The issue has been patched in Build 472 and v1.1.5.
octobercms is a CMS platform based on the Laravel PHP Framework. In affected versions of the october/system package an attacker can exploit this vulnerability to bypass authentication and take over any user account on an October CMS server. The vulnerability is exploitable by unauthenticated users via a specially crafted request. This only affects frontend users, and the attacker must obtain a Laravel secret key for cookie encryption and signing in …
Cachet is an open source status page system. Prior to version 2.5.1, authenticated users, regardless of their privileges (User or Admin), can leak the value of any configuration entry of the dotenv file, e.g. the application secret (APP_KEY) and various passwords (email, database, etc). This issue was addressed in version 2.5.1 by improving UpdateConfigCommandHandler and preventing the use of nested variables in the resulting dotenv configuration file. As a workaround, …
Mautic is vulnerable to an inline JS XSS attack through the contact's first or last name and triggered when viewing a contact's details page then clicking on the action drop down and hovering over the Campaigns button. Contact first and last name can be populated from different sources such as UI, API, 3rd party syncing, forms, etc.
Insufficient sanitization / filtering allows for arbitrary JavaScript Injection in Mautic using the bounce management callback function. An attacker with access to the bounce management callback function (identified with the Mailjet webhook, but it is assumed this will work uniformly across all kinds of webhooks) can inject arbitrary JavaScript Code into the error and error_related_to parameters of the POST request (POST /mailer/<product / webhook>/callback). It is noted that there is …
There is an XSS vulnerability on Mautic's password reset page where a vulnerable parameter bundle in the URL could allow an attacker to execute JavaScript code. The attacker would be required to convince or trick the target into clicking a password reset URL with the vulnerable parameter utilized.
Mautic is vulnerable to an inline JS XSS attack when viewing Mautic assets by utilizing inline JS in the title and adding a broken image URL as a remote asset. This can only be leveraged by an authenticated user with permission to create or edit assets.
Cross Site Scripting (XSS) in Quokka v0.4.0 allows remote attackers to execute arbitrary code via the 'Username' parameter in the component 'quokka/admin/actions.py'.
Due to an unsanitized input, visiting maliciously crafted links could result in arbitrary code execution in the user environment.
Total.js framework (npm package total.js) is a framework for the Node.js platform written in pure JavaScript, similar to PHP's Laravel, Python's Django, or ASP.NET MVC. In the total.js framework, calling the utils.set function with user-controlled values leads to code injection. This can cause a variety of impacts that include arbitrary code execution. This is fixed.
Admin users can execute arbitrary commands via block methods.
A malicious SAML payload can require transforms that consume significant system resources to process, thereby resulting in reduced or denied service.
OpenZeppelin is a library for smart contract development. Further details about the vulnerability will be disclosed at a later date. As a workaround revoke the executor role from accounts not strictly under the team's control. We recommend revoking all executors that are not also proposers. When applying this mitigation, ensure there is at least one proposer and executor remaining.
Misskey is a decentralized microblogging platform. There are no known workarounds aside from upgrading.
A type confusion vulnerability can lead to a bypass of CVE-2020-15256 when the path components used in the path parameter are arrays. In particular, the condition currentPath === '__proto__' returns false if currentPath is ['__proto__']. This is because the === operator always returns false when the types of the operands differ.
octobercms is a CMS platform based on the Laravel PHP Framework. An attacker can request an account password reset and then gain access to the account using a specially crafted request.
A possible open redirect vulnerability in the Host Authorization middleware in Action Pack >= 6.0.0 that could allow attackers to redirect users to a malicious website.
octobercms is a CMS platform based on the Laravel PHP Framework. There exists a vulnerability that is exploitable by unauthenticated users via a specially crafted request. This only affects frontend users, and the attacker must obtain a Laravel secret key for cookie encryption and signing in order to exploit this vulnerability.
yourls is vulnerable to Improper Restriction of Rendered UI Layers or Frames.
This is a cross-post of the official security advisory. The official post contains a signed version with our PGP key, as well. The Rust Security Response Working Group was recently notified of a security issue affecting the search feature of mdBook, which could allow an attacker to execute arbitrary JavaScript code on the page. The CVE for this vulnerability is CVE-2020-26297.
Wrong memory orderings inside the RwLock implementation allow for two writers to acquire the lock at the same time. The drop implementation used Ordering::Relaxed, which allows the compiler or CPU to reorder a mutable access on the locked data after the lock has been yielded. Only users of the RwLock implementation are affected. Users of Once (including users of lazy_static with the spin_no_std feature enabled) are NOT affected. On strongly …
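The ordering rule at issue can be sketched with a minimal spinlock (illustrative only, not the spin crate's implementation): the unlock store must use Ordering::Release, paired with Acquire on lock, so that writes to the protected data cannot be reordered past the point where the lock is yielded.

```rust
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// Minimal spinlock sketch. Acquire on lock plus Release on unlock
// establish the happens-before edge between critical sections; using
// Ordering::Relaxed on the unlock store (the bug described above)
// would allow the protected writes to drift past the unlock.
struct SpinLock(AtomicBool);

impl SpinLock {
    fn lock(&self) {
        while self
            .0
            .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            std::hint::spin_loop();
        }
    }
    fn unlock(&self) {
        self.0.store(false, Ordering::Release);
    }
}

fn main() {
    let lock = Arc::new(SpinLock(AtomicBool::new(false)));
    let counter = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let (l, c) = (Arc::clone(&lock), Arc::clone(&counter));
            thread::spawn(move || {
                for _ in 0..1000 {
                    l.lock();
                    c.fetch_add(1, Ordering::Relaxed);
                    l.unlock();
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(counter.load(Ordering::Relaxed), 4000);
}
```

Note that such bugs are often invisible on x86, whose strong memory model masks the missing ordering; weakly ordered architectures like ARM can observe the reorder.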
WITHDRAWN
An issue was discovered in the array-queue crate through 2020-09-26 for Rust. A pop_back() call may lead to a use-after-free.
An issue was discovered in the yottadb crate before 1.2.0 for Rust. For some memory-allocation patterns, ydb_subscript_next_st and ydb_subscript_prev_st have a use-after-free.
The From implementation for Vec was not properly implemented, returning a vector backed by freed memory. This could lead to memory corruption or be exploited to cause undefined behavior. A fix was published in version 0.1.3.
An issue was discovered in the actix-http crate before 2.0.0-alpha.1 for Rust. There is a use-after-free in BodyStream.
An issue was discovered in the actix-codec crate before 0.3.0-beta.1 for Rust. There is a use-after-free in Framed.
All TFLite operations that use quantization can be made to use uninitialized values. For example: const auto* affine_quantization = reinterpret_cast<TfLiteAffineQuantization*>(filter->quantization.params); The issue stems from the fact that quantization.params is only valid if quantization.type is different than kTfLiteNoQuantization. However, these checks are missing in large parts of the code.
Affected versions of this crate pass an uninitialized buffer to a user-provided Read implementation. Arbitrary Read implementations can read from the uninitialized buffer (memory exposure) and can also return an incorrect number of bytes written to the buffer. Reading from uninitialized memory produces undefined values that can quickly invoke undefined behavior. The flaw was fixed in commit 599313b by zero-initializing the buffer (via self.buf.resize(len, 0)) before passing it to Read.
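The fixed pattern described above can be sketched as follows (the function name read_prefix is a made-up illustration, not the crate's actual API): zero-fill the buffer before handing it to an arbitrary Read impl, and trust only the byte count the reader actually returns.

```rust
use std::io::{Read, Result};

// Sketch of the safe pattern: grow the buffer with zeroed bytes (never
// via set_len on uninitialized memory), then truncate to the number of
// bytes the reader reports, so unwritten bytes are never exposed.
fn read_prefix<R: Read>(reader: &mut R, len: usize) -> Result<Vec<u8>> {
    let mut buf = Vec::new();
    buf.resize(len, 0); // zero-initialize instead of set_len()
    let n = reader.read(&mut buf)?;
    buf.truncate(n); // a lying Read impl cannot expose stale memory
    Ok(buf)
}

fn main() {
    let mut src: &[u8] = b"hello";
    let out = read_prefix(&mut src, 16).unwrap();
    assert_eq!(out, b"hello");
}
```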
An issue was discovered in the libp2p-deflate crate before 0.27.1 for Rust. An uninitialized buffer is passed to AsyncRead::poll_read(), which is a user-provided trait function.
An issue was discovered in the alg_ds crate through 2020-08-25 for Rust. Matrix::new() internally calls Matrix::fill_with() which uses *ptr = value pattern to initialize the buffer. This pattern assumes that there is an initialized struct at the address and drops it, which results in dropping of uninitialized struct.
TensorFlow is an end-to-end open source platform for machine learning. The issue stems from the fact that quantization.params is only valid if quantization.type is different than kTfLiteNoQuantization. However, these checks are missing in large parts of the code. We have patched the issue in GitHub commits bc7c723439b9194a358f64d871dd326c18887, 4a91f2069f7145aab6ba2d8cfe41be8a110c18a5, and b8a21280696ab119b63263babdb54c298538. The fix will be included in TensorFlow. We will also cherrypick this commit on TensorFlow, TensorFlow, and TensorFlow, as these …
An issue was discovered in the rkyv crate before 0.6.0 for Rust. When an archive is created via serialization, the archive content may contain uninitialized values of certain parts of a struct.
Affected versions of this crate did not clone contained strings when an interner is cloned. Interners have raw pointers to the contained strings, and they keep pointing the strings which the old interner owns, after the interner is cloned. If a new cloned interner is alive and the old original interner is dead, the new interner has dangling pointers to the old interner's storage, which is already dropped. This allows …
An issue was discovered in the rusqlite crate before 0.23.0 for Rust. Memory safety can be violated via an Auxdata API use-after-free.
An issue was discovered in the rusqlite crate before 0.23.0 for Rust. Memory safety can be violated because sessions.rs has a use-after-free.
Affected versions of this crate transmuted a &str to a &'static str before pushing it into a StackVec, this value was then popped later in the same function. This was assumed to be safe because the reference would be valid while the method's stack was active. In between the push and the pop, however, a function f was called that could invoke a user provided function. If the user provided …
An issue was discovered in the rio crate through 2020-05-11 for Rust. A struct can be leaked, allowing attackers to obtain sensitive information, cause a use-after-free, or cause a data race.
Affected versions of this crate are not panic safe within the callback functions stream_callback and stream_finished_callback. The call to a user-provided closure might panic before a mem::forget call, which then causes a use after free that gives the attacker control of the callback function pointer. This allows an attacker to achieve arbitrary code execution.
An issue was discovered in the openssl crate before 0.10.9 for Rust. A use-after-free occurs in CMS Signing.
Affected versions of this crate assumed that Borrow was guaranteed to return the same value on .borrow(). The borrowed index value was used to retrieve a mutable reference to a value. If the Borrow implementation returned a different index, the split arena would allow retrieving the index as a mutable reference creating two mutable references to the same element. This violates Rust's aliasing rules and allows for memory safety issues …
Overview Version 1.2.1 of the libpulse-binding Rust crate, released on the 15th of June 2018, fixed a pair of use-after-free issues with the objects returned by the get_format_info and get_context methods of Stream objects. These objects were mistakenly being constructed without setting an important flag to prevent destruction of the underlying C objects they reference upon their own destruction. This advisory is being written retrospectively, having previously only been noted …
An issue was discovered in the libflate crate before 0.1.25 for Rust. MultiDecoder::read has a use-after-free, leading to arbitrary code execution.
ArcIntern::drop has a race condition where it can release memory which is about to get another user. The new user will get a reference to freed memory. This was fixed by serializing access to an interned object while it is being deallocated. Versions prior to 0.3.12 used stronger locking which avoided the problem.
Affected versions of this crate would call Vec::set_len on an uninitialized vector with user-provided type parameter, in an interface of the HDR image format decoder. They would then also call other code that could panic before initializing all instances. This could run Drop implementations on uninitialized types, equivalent to use-after-free, and allow an attacker arbitrary code execution. Two different fixes were applied. It is possible to conserve the interface by …
An issue was discovered in the heapless crate before 0.6.1 for Rust. The IntoIter Clone implementation clones an entire underlying Vec without considering whether it has already been partially consumed.
An issue was discovered in the generic-array crate before 0.13.3 for Rust. It violates soundness by using the arr! macro to extend lifetimes.
The implementation for tf.raw_ops.BoostedTreesCreateEnsemble can result in a use after free error if an attacker supplies specially crafted arguments: import tensorflow as tf; v = tf.Variable([0.0]); tf.raw_ops.BoostedTreesCreateEnsemble(tree_ensemble_handle=v.handle, stamp_token=[0], tree_ensemble_serialized=['0'])
An issue was discovered in the actix-utils crate before 2.0.0 for Rust. The Cell implementation allows obtaining more than one mutable reference to the same data.
An issue was discovered in the actix-service crate before 1.0.6 for Rust. The Cell implementation allows obtaining more than one mutable reference to the same data.
When running shape functions, some functions (such as MutableHashTableShape) produce extra output information in the form of a ShapeAndType struct. The shapes embedded in this struct are owned by an inference context that is cleaned up almost immediately; if the upstream code attempts to access this shape information, it can trigger a segfault. ShapeRefiner is mitigating this for normal output shapes by cloning them (and thus putting the newly created …
An issue was discovered in the bitvec crate before 0.17.4 for Rust. BitVec to BitBox conversion leads to a use-after-free or double free.
An issue was discovered in the abi_stable crate before 0.9.1 for Rust. A retain call can create an invalid UTF-8 string, violating soundness.
An issue was discovered in the abi_stable crate before 0.9.1 for Rust. DrainFilter lacks soundness because of a double drop.
An issue was discovered in the bigint crate through 2020-05-07 for Rust. It allows a soundness violation.
The implementation of impl Follow for bool allows reinterpreting arbitrary bytes as a bool. In Rust, bool has stringent requirements for its in-memory representation. Use of this function makes it possible to violate these requirements and invoke undefined behaviour in safe code.
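A safe alternative can be sketched as validation rather than reinterpretation (byte_to_bool is a made-up helper for illustration, not flatbuffers' API): a Rust bool must be exactly 0 or 1 in memory, so untrusted bytes have to be checked, never transmuted.

```rust
// Sketch: map an untrusted byte to bool by validating it. Transmuting
// any byte other than 0 or 1 to bool is immediate undefined behavior.
fn byte_to_bool(b: u8) -> Option<bool> {
    match b {
        0 => Some(false),
        1 => Some(true),
        _ => None, // reinterpreting this byte as bool would be UB
    }
}

fn main() {
    assert_eq!(byte_to_bool(0), Some(false));
    assert_eq!(byte_to_bool(1), Some(true));
    assert_eq!(byte_to_bool(2), None);
}
```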
An issue was discovered in the generator crate before 0.6.18 for Rust. Uninitialized memory is used by Scope, done, and yield_ during API calls.
Affected versions of Claxon made an invalid assumption about the decode buffer size being a multiple of a value read from the bitstream. This could cause parts of the decode buffer to not be overwritten. If the decode buffer was newly allocated and uninitialized, this uninitialized memory could be exposed. This allows an attacker to observe parts of the uninitialized memory in the decoded audio stream. The flaw was corrected …
An issue was discovered in the outer_cgi crate before 0.2.1 for Rust. A user-provided Read instance receives an uninitialized memory buffer from KeyValueReader.
Affected versions of this crate pass an uninitialized buffer to a user-provided Read implementation (Record::read()). Arbitrary Read implementations can read from the uninitialized buffer (memory exposure) and can also return an incorrect number of bytes written to the buffer. Reading from uninitialized memory produces undefined values that can quickly invoke undefined behavior. This flaw was fixed in commit 6299af0 by zero-initializing the newly allocated memory (via data.resize(len, 0)) instead of exposing …
Prior to 0.10.0 it was possible to have both decoding functions panic unexpectedly, by supplying tokens with an incorrect base62 encoding. The documentation stated that an error should have been reported instead.
In versions prior to 0.11.3 it's possible to make from_slice panic by feeding it certain malformed input. It is never documented that from_slice (and from_bytes, which wraps it) can panic, and its return type (Result<Self, DecodeError>) suggests otherwise. In practice, from_slice/from_bytes is frequently used in networking code and is called with unsanitized data from untrusted sources. This can allow attackers to cause DoS by causing an unexpected panic in the network …
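The non-panicking contract promised by the Result return type can be sketched like this (the names, error variants, and version byte are illustrative assumptions, not the crate's actual implementation): every malformed-input path returns an error instead of indexing past the end of the slice.

```rust
// Illustrative decoder: all error paths surface through Result, so
// malformed network input cannot abort the process via a panic.
#[derive(Debug, PartialEq)]
enum DecodeError {
    TooShort,
    BadVersion,
}

fn from_slice(data: &[u8]) -> Result<Vec<u8>, DecodeError> {
    // Validate before indexing: `data[0]` on an empty slice would panic.
    let (&version, payload) = data.split_first().ok_or(DecodeError::TooShort)?;
    if version != 0xBA {
        return Err(DecodeError::BadVersion); // hypothetical magic byte
    }
    Ok(payload.to_vec())
}

fn main() {
    assert_eq!(from_slice(&[]), Err(DecodeError::TooShort));
    assert_eq!(from_slice(&[0x00, 1]), Err(DecodeError::BadVersion));
    assert_eq!(from_slice(&[0xBA, 1, 2]), Ok(vec![1, 2]));
}
```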
bat on windows before 0.18.2 executes programs named less.exe from the current working directory. This can lead to unintended code execution.
An issue was discovered in the parse_duration crate through 2021-03-18 for Rust. It allows attackers to cause a denial of service (CPU and memory consumption) via a duration string with a large exponent.
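A mitigation for this class of bug can be sketched by bounding the exponent before any expensive computation (parse_scaled is a made-up helper, not the parse_duration API): a string like "1e1000000000" is rejected up front rather than expanded.

```rust
// Sketch: cap the exponent so an attacker-supplied value cannot
// trigger a huge power computation (CPU) or a huge number (memory).
fn parse_scaled(mantissa: u64, exponent: u32) -> Option<u64> {
    const MAX_EXP: u32 = 19; // 10^20 already exceeds u64::MAX
    if exponent > MAX_EXP {
        return None; // reject instead of burning CPU/memory
    }
    10u64.checked_pow(exponent)?.checked_mul(mantissa)
}

fn main() {
    assert_eq!(parse_scaled(3, 2), Some(300));
    assert_eq!(parse_scaled(1, 1_000_000), None); // capped, not computed
    assert_eq!(parse_scaled(u64::MAX, 1), None); // overflow caught
}
```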
Affected versions of this crate did not properly check for recursion while deserializing aliases. This allows an attacker to make a YAML file with an alias referring to itself causing an abort. The flaw was corrected by checking the recursion depth.
There's a stack overflow leading to a crash when Trust-DNS's parses a malicious DNS packet. Affected versions of this crate did not properly handle parsing of DNS message compression (RFC1035 section 4.1.4). The parser could be tricked into infinite loop when a compression offset pointed back to the same domain name to be parsed. This allows an attacker to craft a malicious DNS packet which when consumed with Trust-DNS could …
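The loop-safe parsing the fix requires can be sketched as follows (simplified and hypothetical, not trust-dns-proto's code): RFC 1035 compression pointers are followed only to strictly smaller offsets, with a jump budget as a second guard, so a pointer cycle cannot hang the parser.

```rust
// Sketch: follow DNS name-compression pointers (top two bits 11) only
// "backwards" in the packet and bound the number of jumps, so a
// self-referential pointer fails fast instead of looping forever.
fn follow_pointers(packet: &[u8], mut offset: usize) -> Result<usize, &'static str> {
    let mut jumps = 0;
    loop {
        let b = *packet.get(offset).ok_or("truncated packet")?;
        if b & 0xC0 == 0xC0 {
            let lo = *packet.get(offset + 1).ok_or("truncated packet")? as usize;
            let target = ((b as usize & 0x3F) << 8) | lo;
            if target >= offset {
                return Err("pointer does not move backwards"); // cycle guard
            }
            jumps += 1;
            if jumps > 63 {
                return Err("too many compression jumps");
            }
            offset = target;
        } else {
            return Ok(offset); // reached a literal label (or the root)
        }
    }
}

fn main() {
    // Offset 4 holds a pointer to offset 0, where a literal label sits.
    let pkt = [3, b'w', b'w', b'w', 0xC0, 0x00];
    assert_eq!(follow_pointers(&pkt, 4), Ok(0));
    // A pointer to itself is rejected instead of looping forever.
    let evil = [0xC0, 0x00, 0xC0, 0x02];
    assert!(follow_pointers(&evil, 2).is_err());
}
```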
Affected versions of this crate did not prevent deep recursion while deserializing data structures. This allows an attacker to make a YAML file with deeply nested structures that causes an abort while deserializing it. The flaw was corrected by checking the recursion depth.
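The corrected approach, checking recursion depth during deserialization, might look like this sketch (Node is a made-up type standing in for YAML values; real deserializers thread the depth through their visitor instead):

```rust
// Sketch: walk a nested document while tracking depth, returning an
// error once a limit is exceeded instead of recursing until the
// process stack overflows and the program aborts.
#[derive(Debug)]
enum Node {
    Scalar(i64),
    Seq(Vec<Node>),
}

fn check_depth(node: &Node, depth: usize, limit: usize) -> Result<(), &'static str> {
    if depth > limit {
        return Err("recursion limit exceeded");
    }
    if let Node::Seq(items) = node {
        for item in items {
            check_depth(item, depth + 1, limit)?;
        }
    }
    Ok(())
}

fn main() {
    // Build a sequence nested 200 levels deep, as an attacker could.
    let mut doc = Node::Scalar(1);
    for _ in 0..200 {
        doc = Node::Seq(vec![doc]);
    }
    assert!(check_depth(&doc, 0, 128).is_err());
    assert!(check_depth(&doc, 0, 512).is_ok());
}
```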
An issue was discovered in the ammonia crate before 2.1.0 for Rust. There is uncontrolled recursion during HTML DOM tree serialization.
Affected versions of this crate called Vec::reserve() on user-supplied input. This allows an attacker to cause an Out of Memory condition while calling the vulnerable method on untrusted data.
Affected versions of this crate pre-allocate memory on deserializing raw buffers without checking whether there is sufficient data available. This allows an attacker to do denial-of-service attacks by sending small msgpack messages that allocate gigabytes of memory.
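The defensive pattern that fixes this class of issue can be sketched as follows (the one-byte length prefix is an illustrative wire format, not the affected crate's): check a declared length against the bytes actually available before pre-allocating.

```rust
// Sketch: a tiny message can claim a huge array length; validate the
// claim against the remaining input before calling with_capacity, so
// the allocation is bounded by what was actually received.
fn read_array(input: &[u8]) -> Result<Vec<u8>, &'static str> {
    let (&claimed, rest) = input.split_first().ok_or("empty input")?;
    let claimed = claimed as usize;
    if claimed > rest.len() {
        return Err("declared length exceeds remaining input");
    }
    let mut out = Vec::with_capacity(claimed); // now safe to preallocate
    out.extend_from_slice(&rest[..claimed]);
    Ok(out)
}

fn main() {
    assert_eq!(read_array(&[3, 7, 8, 9]), Ok(vec![7, 8, 9]));
    assert!(read_array(&[200, 1]).is_err()); // claims 200 bytes, has 1
}
```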
An issue was discovered in the xcb crate through 2021-02-04 for Rust. It has a soundness violation because transmutation to the wrong type can happen after xcb::base::cast_event uses std::mem::transmute to return a reference to an arbitrary type.
An issue was discovered in the xcb crate through 2021-02-04 for Rust. It has a soundness violation because xcb::xproto::GetAtomNameReply::name() calls std::str::from_utf8_unchecked() on unvalidated bytes from an X server.
An issue was discovered in the sized-chunks crate through 0.6.2 for Rust. In the InlineArray implementation, an unaligned reference may be generated for a type that has a large alignment requirement.
Obstack generates unaligned references for types that require a large alignment.
An issue was discovered in the chunky crate through 2020-08-25 for Rust. The Chunk API does not honor an alignment requirement.
Affected versions of this crate violated alignment when casting byte slices to integer slices, resulting in undefined behavior. rand_core::BlockRng::next_u64 and rand_core::BlockRng::fill_bytes are affected.
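An alignment-safe alternative can be sketched like this (illustrative, not the rand_core fix itself): instead of casting &[u8] to &[u64], copy each 8-byte chunk through from_le_bytes, which places no alignment requirement on the source slice.

```rust
use std::convert::TryInto;

// Sketch: convert bytes to u64 values by copying fixed-size chunks.
// Unlike a pointer cast, this is sound for any alignment of `bytes`.
fn bytes_to_u64_le(bytes: &[u8]) -> Vec<u64> {
    bytes
        .chunks_exact(8)
        .map(|c| u64::from_le_bytes(c.try_into().unwrap()))
        .collect()
}

fn main() {
    let data = [1u8, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0];
    // Correct regardless of how `data` happens to be aligned.
    assert_eq!(bytes_to_u64_le(&data), vec![1, 2]);
}
```

Where zero-copy is required, slice::align_to can split off the unaligned head and tail, but the copying form above is the simplest always-sound choice.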
Affected versions of this crate unconditionally implement Send/Sync for SyncChannel<T>. SyncChannel<T> doesn't provide access to &T but merely serves as a channel that consumes and returns owned T. Users can create UB in safe Rust by sending T: !Send to other threads with the SyncChannel::send/recv APIs. Using T = Arc<Cell<_>> allows creating data races (which can lead to memory corruption), and using T = MutexGuard<T> allows unlocking a mutex …
There's a stack overflow leading to a crash and potential DoS when processing additional records for return of MX or SRV record types from the server. This is only possible when a zone is configured with a null target for MX or SRV records. Prior to 0.16.0 the additional record processing was not supported by trust-dns-server. There are no known issues with upgrading from 0.16 or 0.17 to 0.18.1. The …
An issue was discovered in the portaudio crate through 0.7.0 for Rust. There is a man-in-the-middle issue because the source code is downloaded over cleartext HTTP.
VendorInfo::as_string(), SoCVendorBrand::as_string(), and ExtendedFunctionInfo::processor_brand_string() construct byte slices using std::slice::from_raw_parts(), with data coming from #[repr(Rust)] structs. This is always undefined behavior. This flaw has been fixed in v9.0.0, by making the relevant structs #[repr(C)].
The socket2 crate has assumed std::net::SocketAddrV4 and std::net::SocketAddrV6 have the same memory layout as the system C representation sockaddr. It has simply casted the pointers to convert the socket addresses to the system representation. The standard library does not say anything about the memory layout, and this will cause invalid memory access if the standard library changes the implementation. No warnings or errors will be emitted once the change happens.
Affected versions of this crate called mem::uninitialized() to create values of a user-supplied type T. This is unsound e.g. if T is a reference type (which must be non-null and thus may not remain uninitialized). The flaw was corrected by avoiding the use of mem::uninitialized(), using MaybeUninit instead.
Slock<T> unconditionally implements Send/Sync. Affected versions of this crate allow sending non-Send types to other threads, which can lead to data races and, through them, memory corruption.
Singleton<T> is meant to be a static object that can be initialized lazily. In order to satisfy the requirement that static items must implement Sync, Singleton implemented both Sync and Send unconditionally. This allows for a bug where non-Sync types such as Cell can be used in singletons and cause data races in concurrent programs. The flaw was corrected in commit b0d2bd20e by adding trait bounds, requiring the contained type …
Affected versions of this crate unconditionally implement Send/Sync for RcuCell<T>. This allows users to send T: !Send to other threads (while T enclosed within RcuCell<T>), and allows users to concurrently access T: !Sync by using the APIs of RcuCell<T> that provide access to &T. This can result in memory corruption caused by data races.
Unix-like operating systems may segfault due to dereferencing a dangling pointer in specific circumstances. This requires an environment variable to be set in a different thread than the affected functions. This may occur without the user's knowledge, notably in a third-party library. The affected functions from time 0.2.7 through 0.2.22 are: time::UtcOffset::local_offset_at, time::UtcOffset::try_local_offset_at, time::UtcOffset::current_local_offset, time::UtcOffset::try_current_local_offset, time::OffsetDateTime::now_local, and time::OffsetDateTime::try_now_local. The affected functions in time 0.1 (all versions) are: at, at_utc, and now. Non-Unix …
Under certain conditions, Go code can trigger a segfault in string deallocation. For string tensors, C.TF_TString_Dealloc is called during garbage collection within a finalizer function. However, tensor structure isn't checked until encoding to avoid a performance penalty. The current method for dealloc assumes that encoding succeeded, but segfaults when a string tensor is garbage collected whose encoding failed (e.g., due to mismatched dimensions). To fix this, the call to set …
The implementation for tf.raw_ops.ExperimentalDatasetToTFRecord and tf.raw_ops.DatasetToTFRecord can trigger heap buffer overflow and segmentation fault:

    import tensorflow as tf

    dataset = tf.data.Dataset.range(3)
    dataset = tf.data.experimental.to_variant(dataset)
    tf.raw_ops.ExperimentalDatasetToTFRecord(
        input_dataset=dataset, filename='/tmp/output', compression_type='')
The scalarmult() function included in previous versions of this crate accepted all-zero public keys, for which the resulting Diffie-Hellman shared secret will always be zero regardless of the private key used. This issue was fixed by checking for this class of keys and rejecting them if they are used.
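The check described above can be sketched generically (a Python stand-in, not the crate's code): after computing a Diffie-Hellman shared secret, reject the all-zero output in constant time, since an all-zero public key always forces it.

```python
# Hedged sketch of contributory-behaviour checking: an all-zero X25519 public
# key forces an all-zero shared secret, so reject that output. The comparison
# uses hmac.compare_digest to stay constant-time.

import hmac

def check_shared_secret(secret: bytes) -> bytes:
    if hmac.compare_digest(secret, bytes(len(secret))):
        raise ValueError("all-zero shared secret: bad public key")
    return secret
```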
SafeCurl before 0.9.2 has a DNS rebinding vulnerability.
Safe Rust code can implement a malfunctioning private_get_type_id and cause type confusion when downcasting, which is undefined behavior. Users who derive the Fail trait are not affected.
In the ckb sync protocol, SyncState maintains a HashMap called 'misbehavior' that keeps a score of a peer's violations of the protocol. This HashMap is keyed to PeerIndex (an alias for SessionId), and entries are never removed from it. SessionId is an integer that increases monotonically with every new connection. A remote attacker can manipulate this HashMap to grow forever, resulting in degraded performance and ultimately a panic on allocation …
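A minimal sketch of the usual fix for this pattern (illustrative Python, not ckb code; the capacity and eviction policy are assumptions): bound the map and evict an entry when a new peer would push it past capacity, so churned session ids cannot grow it without limit.

```python
# Illustrative bounded peer-score map: once at capacity, adding a new peer
# evicts the lowest-scoring existing entry instead of growing forever.

class PeerScores:
    def __init__(self, capacity: int = 1024):
        self.capacity = capacity
        self.scores = {}  # peer_id -> accumulated misbehavior score

    def punish(self, peer_id: int, points: int) -> None:
        if peer_id not in self.scores and len(self.scores) >= self.capacity:
            coldest = min(self.scores, key=self.scores.get)
            del self.scores[coldest]
        self.scores[peer_id] = self.scores.get(peer_id, 0) + points
```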
An issue was discovered in the better-macro crate through 2021-07-22 for Rust. It intentionally demonstrates that remote attackers can execute arbitrary code via proc-macros, and otherwise has no legitimate purpose.
git-delta before 0.8.3 on Windows resolves an executable's pathname as a relative path from the current directory.
An issue was discovered in the pyo3 crate before 0.12.4 for Rust. There is a reference-counting error and use-after-free in From<Py>.
An attacker can cause undefined behavior via binding a reference to null pointer in tf.raw_ops.UnicodeEncode:

    import tensorflow as tf
    from tensorflow.python.ops import gen_string_ops

    gen_string_ops.unicode_encode(
        input_values=[], input_splits=[], output_encoding='UTF-8',
        errors='ignore', replacement_char='a')
An attacker can cause undefined behavior via binding a reference to null pointer in tf.raw_ops.SparseFillEmptyRows:

    import tensorflow as tf

    tf.compat.v1.disable_v2_behavior()
    tf.raw_ops.SparseFillEmptyRows(
        indices=tf.constant([], shape=[0, 0], dtype=tf.int64),
        values=tf.constant([], shape=[0], dtype=tf.int64),
        dense_shape=tf.constant([], shape=[0], dtype=tf.int64),
        default_value=0)
An attacker can cause undefined behavior via binding a reference to null pointer in tf.raw_ops.Map* and tf.raw_ops.OrderedMap* operations:

    import tensorflow as tf

    tf.raw_ops.MapPeek(
        key=tf.constant([8], dtype=tf.int64), indices=[], dtypes=[tf.int32],
        capacity=8, memory_limit=128)
An attacker can generate undefined behavior via a reference binding to nullptr in BoostedTreesCalculateBestGainsPerFeature:

    import tensorflow as tf

    tf.raw_ops.BoostedTreesCalculateBestGainsPerFeature(
        node_id_range=[], stats_summary_list=[[1, 2, 3]], l1=[1.0], l2=[1.0],
        tree_complexity=[1.0], min_node_weight=[1.17], max_splits=5)

A similar attack can occur in BoostedTreesCalculateBestFeatureSplitV2:

    import tensorflow as tf

    tf.raw_ops.BoostedTreesCalculateBestFeatureSplitV2(
        node_id_range=[], stats_summaries_list=[[1, 2, 3]], split_types=[''],
        candidate_feature_ids=[1, 2, 3, 4], l1=[1], l2=[1],
        tree_complexity=[1.0], min_node_weight=[1.17], logits_dimension=5)
An attacker can cause undefined behavior via binding a reference to null pointer in tf.raw_ops.RaggedTensorToVariant:

    import tensorflow as tf

    tf.raw_ops.RaggedTensorToVariant(
        rt_nested_splits=[], rt_dense_values=[1, 2, 3], batched_input=True)
An attacker can cause undefined behavior via binding a reference to null pointer in tf.raw_ops.RaggedTensorToSparse:

    import tensorflow as tf

    tf.raw_ops.RaggedTensorToSparse(
        rt_nested_splits=[[0, 38, 0]], rt_dense_values=[])
An attacker can cause undefined behavior via binding a reference to null pointer in all operations of type tf.raw_ops.MatrixSetDiagV*:

    import tensorflow as tf

    tf.raw_ops.MatrixSetDiagV3(
        input=[1, 2, 3], diagonal=[1, 1], k=[], align='RIGHT_LEFT')
An attacker can cause undefined behavior via binding a reference to null pointer in all operations of type tf.raw_ops.MatrixDiagV*:

    import tensorflow as tf

    tf.raw_ops.MatrixDiagV3(
        diagonal=[1, 0], k=[], num_rows=[1, 2, 3], num_cols=[4, 5],
        padding_value=[], align='RIGHT_RIGHT')
An attacker can cause undefined behavior via binding a reference to null pointer in all binary cwise operations that don't require broadcasting (e.g., gradients of binary cwise operations):

    import tensorflow as tf

    tf.raw_ops.SqrtGrad(y=[4, 16], dy=[])
Affected versions of this crate pass an uninitialized buffer to a user-provided Read implementation. Arbitrary Read implementations can read from the uninitialized buffer (memory exposure) and can also return an incorrect number of bytes written to the buffer. Reading from uninitialized memory produces undefined values that can quickly invoke undefined behavior. This flaw was fixed in commit 8026286 by zero-initializing the buffer before handing it to a user-provided Read.
An issue was discovered in Deserializer::read_vec in the cdr crate before 0.2.4 for Rust. A user-provided Read implementation can gain access to the old contents of newly allocated heap memory, violating soundness.
When aborting a task with JoinHandle::abort, the future is dropped in the thread calling abort if the task is not currently being executed. This is incorrect for tasks spawned on a LocalSet, whose futures may not be Send. This can easily result in race conditions, as many projects use Rc or RefCell in their Tokio tasks for better performance.
In the affected versions of this crate, LockWeak<T> unconditionally implemented Send with no trait bounds on T. LockWeak<T> doesn't own T and only provides &T. This allows concurrent access to a non-Sync T, which can cause undefined behavior like data races.
The quinn crate has assumed std::net::SocketAddrV4 and std::net::SocketAddrV6 have the same memory layout as the system C representation sockaddr. It has simply casted the pointers to convert the socket addresses to the system representation. The standard library does not say anything about the memory layout, and this will cause invalid memory access if the standard library changes the implementation. No warnings or errors will be emitted once the change happens.
Affected versions of this crate unconditionally implement Send/Sync for Queue<T>. This allows (1) creating data races on a T: !Sync and (2) sending T: !Send to other threads, resulting in memory corruption or other undefined behavior.
It is easy to craft a malicious transaction that uses a dead cell as the DepGroup in its DepCells. Such a transaction can crash all receiving nodes.
The repr() attribute added to enums for C-FFI compatibility caused memory corruption on the MSVC toolchain. arrayfire crates <= version 3.5.0 do not have this issue when used with Rust versions 1.27 or earlier; the issue only started to appear with Rust version 1.28, and so appears to be interlinked with which version of Rust is being used. The issue was fixed in crate version 3.6.0.
pleaseedit in pleaser before 0.4.0 uses predictable temporary filenames in /tmp and the target directory. This allows a local attacker to gain full root privileges by staging a symlink attack.
Failure to normalize the umask in pleaser before 0.4.0 allows a local attacker to gain full root privileges if they are allowed to execute at least one command.
An issue was discovered in the mozwire crate through 2020-08-18 for Rust. A ../ directory-traversal situation allows overwriting local files that have .conf at the end of the filename.
Anyone who uses the total_size(..) function to partially read the length of any FixVec will get an incorrect result, due to an incorrect implementation. This has been resolved in the 0.7.2 release.
Affected versions of this crate contained a bug in which untrusted input could cause an overflow and panic when converting a Timestamp to SystemTime. It is recommended to upgrade to prost-types v0.8 and switch from the From<Timestamp> for SystemTime conversion to TryFrom<Timestamp> for SystemTime.
An issue was discovered in the libsecp256k1 crate before 0.5.0 for Rust. It can verify an invalid signature because it allows the R or S parameter to be larger than the curve order, aka an overflow.
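The missing validation amounts to a range check on the signature scalars. A sketch of the check (Python, using the well-known secp256k1 group order; not the crate's code):

```python
# ECDSA scalar range check over secp256k1: r and s must lie in [1, n-1],
# where n is the curve's group order. Values >= n must be rejected before
# any verification arithmetic, otherwise an out-of-range (malleated)
# signature can still verify.

SECP256K1_N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def scalars_in_range(r: int, s: int) -> bool:
    return 1 <= r < SECP256K1_N and 1 <= s < SECP256K1_N
```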
ArrayVec::insert allows insertion of an element into the array object at the specified index. Due to a missing check on the upper bound of this index, it is possible to write out of bounds.
TensorFlow is an end-to-end open source platform for machine learning. An attacker can read arbitrary data from the heap by carefully crafting a model with negative values in indices. A similar issue exists in the Gather implementation. We have patched the issue in GitHub commits bb6a0383ed553c286f87ca88c207f6774d5c4a8f and eb921122119a6b6e470ee98b89e65d721663179d. The fix will be included in an upcoming TensorFlow release. We will also cherrypick this commit on the other supported TensorFlow release branches, as these are also affected and …
TensorFlow is an end-to-end open source platform for machine learning. If axis is a large negative value (e.g., -100000), then after the first if it would still be negative. The check following the if statement will pass, and the for loop would read one element before the start of input_dims.data (when i = 0). We have patched the issue in GitHub commit d94ffe08a65400f898241c0374e9edc6fa8ed257. The fix will be included in an upcoming TensorFlow release. We …
There's a flaw in OpenEXR's rleUncompress functionality. An attacker who is able to submit a crafted file to an application linked with OpenEXR could cause an out-of-bounds read. The greatest risk from this flaw is to application availability.
An issue was discovered in the traitobject crate through 2020-06-01 for Rust. It has false expectations about fat pointers, possibly causing memory corruption in, for example, Rust 2.x.
StackVec::extend used the lower and upper bounds from an Iterator's size_hint to determine how many items to push into the stack based vector. If the size_hint implementation returned a lower bound that was larger than the upper bound, StackVec would write out of bounds and overwrite memory on the stack. As mentioned by the size_hint documentation, size_hint is mainly for optimization and incorrect implementations should not lead to memory safety …
Affected versions of this crate entered a corrupted state if mem::size_of::<T>() % allocation_granularity() != 0 and a specific allocation pattern was used: sufficiently shifting the deque elements over the mirrored page boundary. This allows an attacker that controls both element insertion and removal to corrupt the deque, such that reading elements from it would read bytes corresponding to other elements in the deque. (e.g. a read of T could …
Affected versions of this crate did not properly check if semantic tags were nested excessively during deserialization. This allows an attacker to craft small (< 1 kB) CBOR documents that cause a stack overflow. The flaw was corrected by limiting the allowed number of nested tags.
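The fix pattern can be sketched with a toy recursive parser (illustrative Python, not the crate's code; the one-byte tag marker and the 128 limit are assumptions): thread an explicit depth budget through the recursion so pathological nesting fails cleanly instead of overflowing the stack.

```python
# Toy depth-limited tag skipping: 0xC6 stands in for a CBOR semantic-tag
# byte. An explicit depth budget turns deeply nested tags into a clean
# error instead of unbounded recursion.

MAX_TAG_DEPTH = 128  # illustrative limit

def skip_tags(data: bytes, pos: int = 0, depth: int = 0) -> int:
    """Return the offset of the first non-tag byte, enforcing a depth cap."""
    if depth > MAX_TAG_DEPTH:
        raise ValueError("semantic tags nested too deeply")
    if pos < len(data) and data[pos] == 0xC6:
        return skip_tags(data, pos + 1, depth + 1)
    return pos
```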
swap_index takes an iterator and swaps the items with their corresponding indexes. It reserves capacity and sets the length of the vector based on the .len() method of the iterator. If the len() returned by the iterator is larger than the actual number of elements yielded, then swap_index creates a vector containing uninitialized members. If the len() returned by the iterator is smaller than the actual number of members yielded, …
Affected versions of this crate contained a bug in which decoding untrusted input could overflow the stack. On architectures with stack probes (like x86), this can be used for denial of service attacks, while on architectures without stack probes (like ARM) overflowing the stack is unsound and can result in potential memory corruption (or even RCE).
The Deserialize implementation for VecStorage did not maintain the invariant that the number of elements must equal nrows * ncols. Deserialization of specially crafted inputs could allow memory access beyond allocation of the vector. This flaw was introduced in v0.11.0 (086e6e) due to the addition of an automatically derived implementation of Deserialize for MatrixVec. MatrixVec was later renamed to VecStorage in v0.16.13 (0f66403) and continued to use the automatically derived …
An issue was discovered in the calamine crate before 0.17.0 for Rust. It allows attackers to overwrite heap-memory locations because Vec::set_len is used without proper memory claiming, and this uninitialized memory is used for a user-provided Read operation, as demonstrated by Sectors::get.
An issue was discovered in the arenavec crate through 0.1.1. A drop of uninitialized memory can sometimes occur upon a panic in T::default().
An issue was discovered in the xcb crate through 2021-02-04 for Rust. It has a soundness violation because there is an out-of-bounds read in xcb::xproto::change_property(), as demonstrated by a format=32 T=u8 situation where out-of-bounds bytes are sent to an X server.
An issue was discovered in PartialReader in the uu_od crate before 0.0.4 for Rust. Attackers can read the contents of uninitialized memory locations via a user-provided Read operation.
An issue was discovered in the simple-slab crate before 0.3.3 for Rust. index() allows an out-of-bounds read.
The affected version of this crate did not guard against accessing memory beyond the range of its input data. A pointer cast to read the data into a 256-bit register could lead to a segmentation fault when the 32-byte (256-bit) read extended past the end of the input and overlapped into the next page during string parsing. This allows an attacker to eventually crash a service. The flaw was corrected by using …
An issue was discovered in the ozone crate through version 0.1.0 for Rust. Memory safety is violated because of out-of-bounds access.
An issue was discovered in the ordnung crate through version 0.0.1 for Rust. compact::Vec violates memory safety via out-of-bounds access for large capacity.
An issue was discovered in the lazy-init crate through 2021-01-17 for Rust. Lazy lacks a Send bound, leading to a data race.
An issue was discovered in the fltk crate before 0.15.3 for Rust. There is an out-of-bounds read because the pixmap constructor lacks pixmap input validation.
VecCopy::data is created as a Vec of u8 but can be used to store and retrieve elements of different types leading to misaligned access. The issue was resolved in v0.5.0 by replacing data being stored by Vec with a custom managed pointer. Elements are now stored and retrieved using types with proper alignment corresponding to original types.
An issue was discovered in the bumpalo crate before 3.2.1 for Rust. The realloc feature allows the reading of unknown memory. Attackers can potentially read cryptographic keys.
Buffered Random Access (BRA) provides easy random memory access to a sequential source of data in Rust. This is achieved by greedily retaining all memory read from a given source. An issue was discovered in the bra crate before 0.1.1 for Rust. It lacks soundness because it can read uninitialized memory.
Affected versions of the rgb crate allow viewing and modifying data of any type T wrapped in RGB as bytes, and do not correctly constrain RGB and other wrapper structures to the types for which it is safe to do so. A safety violation is possible for a type wrapped in RGB and similar wrapper structures: if T contains padding, viewing it as bytes may lead to exposure of the contents of uninitialized memory. …
An embedding using affected versions of lucet-runtime configured to use non-default Wasm globals sizes of more than 4KiB, or compiled in debug mode without optimizations, could leak data from the signal handler stack to guest programs. This can potentially cause data from the embedding host to leak to guest programs or cause corruption of guest program memory. This flaw was resolved by correcting the sigstack allocation logic.
Affected versions of this crate did not properly implement the generativity, because the invariant lifetimes were not necessarily dropped. This allows an attacker to mix up two arenas, using indices created from one arena with another one. This might lead to an out-of-bounds read or write access into the memory reserved for the arena. The flaw was corrected by implementing generativity correctly in version 0.4.0.
The Windows implementation of this crate relied on the behavior of std::char::from_u32_unchecked when its safety clause is violated. Even though this worked with Rust versions up to 1.42 (at least), that behavior could change with any new Rust version, possibly leading to a security issue. The flaw was corrected in version 2.0.0.
After using an assignment operators such as NotNan::add_assign, NotNan::mul_assign, etc., it was possible for the resulting NotNan value to contain a NaN. This could cause undefined behavior in safe code, because the safe NotNan::cmp method contains internal unsafe code that assumes the value is never NaN. (It could also cause undefined behavior in third-party unsafe code that makes the same assumption, as well as logic errors in safe code.) This …
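The invariant bug generalizes beyond Rust; a Python stand-in for the type (purely illustrative, not the crate's API) shows the fix of re-validating after every arithmetic path, including in-place assignment:

```python
# Python stand-in for a NotNan-style wrapper: every construction re-checks
# the invariant, so the in-place path (+=) cannot smuggle in a NaN the way
# the affected assignment operators could.

import math

class NotNan:
    def __init__(self, value: float):
        if math.isnan(value):
            raise ValueError("NotNan cannot hold NaN")
        self.value = value

    def __iadd__(self, other: float) -> "NotNan":
        # inf + -inf produces NaN; the constructor catches it here
        return NotNan(self.value + other)
```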
An OpenSSL TLS server may crash if sent a maliciously crafted renegotiation ClientHello message from a client. If a TLSv1.2 renegotiation ClientHello omits the signature_algorithms extension (where it was present in the initial ClientHello), but includes a signature_algorithms_cert extension then a NULL pointer dereference will result, leading to a crash and a denial of service attack. A server is only vulnerable if it has TLSv1.2 and renegotiation enabled (which is …
An issue was discovered in the simple-slab crate before 0.3.3 for Rust. remove() has an off-by-one error, causing memory leakage and a drop of uninitialized memory.
A timing vulnerability in the Scalar::check_overflow function in Parity libsecp256k1-rs before 0.3.1 potentially allows an attacker to leak information via a side-channel attack.
An attacker can craft a TFLite model that would trigger a null pointer dereference, which would result in a crash and denial of service:
An attacker can craft a TFLite model that would trigger a null pointer dereference, which would result in a crash and denial of service:

    import tensorflow as tf

    model = tf.keras.models.Sequential()
    model.add(tf.keras.Input(shape=(1, 2, 3)))
    model.add(tf.keras.layers.Dense(0, activation='relu'))
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()
    interpreter = tf.lite.Interpreter(model_content=tflite_model)
    interpreter.allocate_tensors()
    interpreter.invoke()
An attacker can craft a TFLite model that would trigger a null pointer dereference, which would result in a crash and denial of service: import tensorflow as tf model = tf.keras.models.Sequential() model.add(tf.keras.Input(shape=(1, 2, 3))) model.add(tf.keras.layers.Dense(0, activation='relu')) converter = tf.lite.TFLiteConverter.from_keras_model(model) tflite_model = converter.convert() interpreter = tf.lite.Interpreter(model_content=tflite_model) interpreter.allocate_tensors() interpreter.invoke()
An attacker can craft a TFLite model that would trigger a null pointer dereference, which would result in a crash and denial of service: import tensorflow as tf model = tf.keras.models.Sequential() model.add(tf.keras.Input(shape=(1, 2, 3))) model.add(tf.keras.layers.Dense(0, activation='relu')) converter = tf.lite.TFLiteConverter.from_keras_model(model) tflite_model = converter.convert() interpreter = tf.lite.Interpreter(model_content=tflite_model) interpreter.allocate_tensors() interpreter.invoke()
An issue was discovered in the cbox crate through 2020-03-19 for Rust. The CBox API allows dereferencing raw pointers without a requirement for unsafe code.
The code for tf.raw_ops.UncompressElement can be made to trigger a null pointer dereference:

    import tensorflow as tf
    data = tf.data.Dataset.from_tensors([0.0])
    tf.raw_ops.UncompressElement(
        compressed=tf.data.experimental.to_variant(data),
        output_types=[tf.int64],
        output_shapes=[2])
When a user does not supply arguments that determine a valid sparse tensor, the tf.raw_ops.SparseTensorSliceDataset implementation can be made to dereference a null pointer:

    import tensorflow as tf
    tf.raw_ops.SparseTensorSliceDataset(
        indices=[[],[],[]],
        values=[1,2,3],
        dense_shape=[3,3])
Sending an invalid argument for row_partition_types of the tf.raw_ops.RaggedTensorToTensor API results in a null pointer dereference and undefined behavior:

    import tensorflow as tf
    tf.raw_ops.RaggedTensorToTensor(
        shape=1,
        values=10,
        default_value=21,
        row_partition_tensors=tf.constant([0,0,0,0]),
        row_partition_types=[])
If a user does not provide a valid padding value to tf.raw_ops.MatrixDiagPartOp, then the code triggers a null pointer dereference (if the input is empty) or produces invalid behavior, ignoring all values after the first:

    import tensorflow as tf
    tf.raw_ops.MatrixDiagPartV2(
        input=tf.ones(2,dtype=tf.int32),
        k=tf.ones(2,dtype=tf.int32),
        padding_value=[])

Although this example is given for MatrixDiagPartV2, all versions of the operation are affected.
It is possible to trigger a null pointer dereference in TensorFlow by passing an invalid input to tf.raw_ops.CompressElement:

    import tensorflow as tf
    tf.raw_ops.CompressElement(components=[[]])
When restoring tensors via raw APIs, if the tensor name is not provided, TensorFlow can be tricked into dereferencing a null pointer:

    import tensorflow as tf
    tf.raw_ops.Restore(
        file_pattern=['/tmp'],
        tensor_name=[],
        default_value=21,
        dt=tf.int,
        preferred_shard=1)

The same undefined behavior can be triggered by tf.raw_ops.RestoreSlice:

    import tensorflow as tf
    tf.raw_ops.RestoreSlice(
        file_pattern=['/tmp'],
        tensor_name=[],
        shape_and_slice='2',
        dt=np.array([tf.int]),
        preferred_shard=1)

Alternatively, attackers can read memory outside the bounds of heap allocated data by providing some tensor names but not …
TensorFlow is an end-to-end open source platform for machine learning. In affected versions an attacker can craft a TFLite model that would trigger a null pointer dereference, which would result in a crash and denial of service. This is caused by the MLIR optimization of L2NormalizeReduceAxis operator. The implementation unconditionally dereferences a pointer to an iterator to a vector without checking that the vector has elements.
TensorFlow is an end-to-end open source platform for machine learning. In affected versions an attacker can craft a TFLite model that would trigger a null pointer dereference, which would result in a crash and denial of service. This is caused by the MLIR optimization of the L2NormalizeReduceAxis operator. The implementation unconditionally dereferences a pointer to an iterator to a vector without checking that the vector has elements. We have patched the issue in GitHub commit d6b57f461b39fd1aa8c1b870f1b974aac3554955. The fix will be included in a future TensorFlow release. We will also cherrypick this …
TensorFlow is an end-to-end open source platform for machine learning. The GetVariableInput function can return a null pointer but GetTensorData assumes that the argument is always a valid tensor. Furthermore, because GetVariableInput calls GetMutableInput, which might return nullptr, the tensor->is_variable expression can also trigger a null pointer exception. We have patched the issue in GitHub commit 5b048e87e4e55990dae6b547add4dae59f4e1c76. The fix will be included in a future TensorFlow release. We will also cherrypick this commit …
TensorFlow is an end-to-end open source platform for machine learning. An attacker can trigger a null pointer dereference, which would result in a crash and denial of service. The implementation unconditionally dereferences a pointer. We have patched the issue in GitHub commit e456c7dc9bd6be203b09765b063bf4a380c. The fix will be included in a future TensorFlow release. We will also cherrypick this commit on the other affected release branches that are still in the supported range.
Server or client applications that call the SSL_check_chain() function during or after a TLS 1.3 handshake may crash due to a NULL pointer dereference as a result of incorrect handling of the "signature_algorithms_cert" TLS extension. The crash occurs if an invalid or unrecognised signature algorithm is received from the peer. This could be exploited by a malicious peer in a Denial of Service attack. OpenSSL versions 1.1.1d, 1.1.1e, and 1.1.1f …
An issue was discovered in the fltk crate before 0.15.3 for Rust. There is a NULL pointer dereference during attempted use of a multi label type if the image is nonexistent.
An issue was discovered in the fltk crate before 0.15.3 for Rust. There is a NULL pointer dereference during attempted use of a non-raster image for a window icon.
An issue was discovered in the cache crate through 2021-01-01 for Rust. A raw pointer is dereferenced.
An issue was discovered in the av-data crate before 0.3.0 for Rust. A raw pointer is dereferenced, leading to a read of an arbitrary memory address, sometimes causing a segfault.
The implementation of SVDF in TFLite is vulnerable to a null pointer error:

    TfLiteTensor* state = GetVariableInput(context, node, kStateTensor);
    // …
    GetTensorData<float>(state)

The GetVariableInput function can return a null pointer but GetTensorData assumes that the argument is always a valid tensor.

    TfLiteTensor* GetVariableInput(TfLiteContext* context, const TfLiteNode* node,
                                   int index) {
      TfLiteTensor* tensor = GetMutableInput(context, node, index);
      return tensor->is_variable ? tensor : nullptr;
    }

Furthermore, because GetVariableInput calls GetMutableInput which might …
Multiple soundness issues in Ptr in the cgc crate. Affected versions of this crate have the following issues: Ptr implements Send and Sync for all types, which can lead to data races by sending non-thread-safe types across threads; Ptr::get violates mutable aliasing rules by returning multiple mutable references to the same object; and Ptr::write uses non-atomic writes to the underlying pointer. This means that when used across threads it can lead to …
The nb-connect crate assumed that std::net::SocketAddrV4 and std::net::SocketAddrV6 have the same memory layout as the system C representation sockaddr, and simply cast the pointers to convert the socket addresses to the system representation. The standard library does not guarantee anything about the memory layout, and this will cause invalid memory access if the standard library changes the implementation. No warnings or errors will be emitted once the change happens.
Affected versions of the noise_search crate unconditionally implement Send/Sync for MvccRwLock. This can lead to data races when types that are either !Send or !Sync (e.g. Rc<T>, Arc<Cell<_>>) are contained inside MvccRwLock and sent across thread boundaries. The data races can potentially lead to memory corruption (as demonstrated in the PoC from the original report issue). Also, safe APIs of MvccRwLock allow aliasing violations by allowing &T and LockResult<MutexGuard<Box<T>>> to …
A mutable reference to a struct was constructed by dereferencing a pointer obtained from slice::as_ptr. Instead, slice::as_mut_ptr should have been called on the mutable slice argument. The former performs an implicit reborrow as an immutable shared reference which does not allow writing through the derived pointer.
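The distinction the advisory draws can be shown in a small sketch (illustrative, not the affected crate's code): a pointer derived from a mutable slice via as_mut_ptr keeps mutable provenance, so writes through it are sound, whereas a pointer from as_ptr carries only a shared reborrow.

```rust
// Write `byte` into every element of a mutable slice through a raw
// pointer obtained with as_mut_ptr (the correct call for writing).
fn fill_with(buf: &mut [u8], byte: u8) {
    let p = buf.as_mut_ptr(); // NOT buf.as_ptr(): that would reborrow immutably
    for i in 0..buf.len() {
        // SAFETY: i is in bounds and p points into a live mutable slice.
        unsafe { *p.add(i) = byte };
    }
}

fn main() {
    let mut buf = [0u8; 4];
    fill_with(&mut buf, 7);
    assert_eq!(buf, [7, 7, 7, 7]);
}
```

Writing through a pointer derived from as_ptr instead would be undefined behavior, which is the bug described above.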
Affected versions of this crate have the following issues: Ptr implements Send and Sync for all types, which can lead to data races by sending non-thread-safe types across threads; Ptr::get violates mutable aliasing rules by returning multiple mutable references to the same object; and Ptr::write uses non-atomic writes to the underlying pointer. This means that when used across threads it can lead to data races.
The arr crate contains multiple security issues. Specifically: it incorrectly implements the Sync/Send bounds, which allows smuggling non-Sync/Send types across the thread boundary; the Index and IndexMut implementations do not check array bounds; and Array::new_from_template() drops uninitialized memory.
Affected versions contain multiple memory safety issues, such as: unsoundly coercing immutable references to mutable references; unsoundly extending lifetimes of strings; and adding the Send marker trait to objects that cannot be safely sent between threads. This may result in a variety of memory corruption scenarios, most likely use-after-free. A significant refactoring effort has been conducted to resolve these issues.
The shape inference code for tf.raw_ops.Dequantize has a vulnerability that could trigger a denial of service via a segfault if an attacker provides invalid arguments:

    import tensorflow as tf
    tf.compat.v1.disable_v2_behavior()
    input_tensor = tf.constant(-10.0, dtype=tf.float32)
    input_tensor = tf.cast(input_tensor, dtype=tf.quint8)
    tf.raw_ops.Dequantize(
        input=input_tensor,
        min_range=tf.constant([], shape=[0], dtype=tf.float32),
        max_range=tf.constant([], shape=[0], dtype=tf.float32),
        mode='MIN_COMBINED',
        narrow_range=False,
        axis=-10,
        dtype=tf.dtypes.float32)
An issue was discovered in the sized-chunks crate through 0.6.2 for Rust. In the Chunk implementation, clone can have a memory-safety issue upon a panic.
Chunk: Array size is not checked when constructed with unit() and pair(). Array size is not checked when constructed with From<InlineArray<A, T>>. Clone and insert_from are not panic-safe; A panicking iterator causes memory safety issues with them. InlineArray: Generates unaligned references for types with a large alignment requirement.
An issue was discovered in the rusqlite crate before 0.23.0 for Rust. Memory safety can be violated because rusqlite::trace::log mishandles format strings.
ncurses exposes functions from the ncurses library which: pass buffers without a length to C functions that may write an arbitrary amount of data, leading to a buffer overflow (instr, mvwinstr, etc.); and pass Rust &str to functions expecting C format strings, allowing hostile input to execute a format string attack, which trivially allows writing arbitrary data to stack memory (functions in the printw family).
The miow crate assumed that std::net::SocketAddrV4 and std::net::SocketAddrV6 have the same memory layout as the system C representation sockaddr, and simply cast the pointers to convert the socket addresses to the system representation. The standard library does not guarantee anything about the memory layout, and this will cause invalid memory access if the standard library changes the implementation. No warnings or errors will be emitted once the change happens.
The mio crate assumed that std::net::SocketAddrV4 and std::net::SocketAddrV6 have the same memory layout as the system C representation sockaddr, and simply cast the pointers to convert the socket addresses to the system representation. The standard library does not guarantee anything about the memory layout, and this will cause invalid memory access if the standard library changes the implementation. No warnings or errors will be emitted once the change happens.
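The sound alternative to the pointer cast described in the advisories above is to build the C-layout structure field by field. The sketch below is illustrative only (RawSockaddrIn and its field values are assumptions modeled on the common BSD sockets layout, not any crate's actual code); it copies each component of a SocketAddrV4 into a #[repr(C)] struct whose layout is under our control.

```rust
use std::net::SocketAddrV4;

// Illustrative C-compatible layout (assumed, not libc's definition).
#[repr(C)]
struct RawSockaddrIn {
    sin_family: u16,
    sin_port: u16,   // network byte order
    sin_addr: u32,   // network byte order
    sin_zero: [u8; 8],
}

// Explicit, layout-independent conversion: no pointer casts involved.
fn to_raw(addr: &SocketAddrV4) -> RawSockaddrIn {
    RawSockaddrIn {
        sin_family: 2, // AF_INET on common platforms (assumption)
        sin_port: addr.port().to_be(),
        sin_addr: u32::from(*addr.ip()).to_be(),
        sin_zero: [0; 8],
    }
}

fn main() {
    let addr: SocketAddrV4 = "127.0.0.1:8080".parse().unwrap();
    let raw = to_raw(&addr);
    assert_eq!(u16::from_be(raw.sin_port), 8080);
    assert_eq!(u32::from_be(raw.sin_addr), 0x7f00_0001);
}
```

This stays correct even if the standard library changes the internal representation of SocketAddrV4, which is exactly the hazard the advisories describe.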
The RPC get_block_template fails when a cell has been used both as a cell dep and as an input in different transactions. Say cell C is used as a dep group in transaction A, and is destroyed in transaction B. The node adds transaction A first, then B, into the transaction pool. They are both valid. But when generating the block template, if the fee rate of B is …
An issue was discovered in the crayon crate through 2020-08-31 for Rust. A TOCTOU issue has a resultant memory safety violation via HandleLike.
Prior to the patch, when executing specific EVM opcodes related to memory operations that use evm_core::Memory::copy_large, the crate can over-allocate memory when it is not needed, making it possible for an attacker to perform a denial-of-service attack. The flaw was corrected in commit 19ade85.
An issue was discovered in the xcb crate through 2020-12-10 for Rust. base::Error does not have soundness. Because of the public ptr field, a use-after-free or double-free can occur.
An issue was discovered in the asn1_der crate before 0.6.2 for Rust. Attackers can trigger memory exhaustion by supplying a large value in a length field.
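The asn1_der advisory above concerns an attacker-controlled length field driving an allocation. A minimal defensive sketch (illustrative only; read_der_body is a hypothetical helper, not the crate's API) is to reject any announced length that exceeds the bytes actually present, instead of allocating based on it:

```rust
// Hypothetical helper: slice out a DER body only if the announced
// length fits inside the remaining input.
fn read_der_body(len: usize, remaining: &[u8]) -> Result<&[u8], &'static str> {
    if len > remaining.len() {
        // An attacker-supplied length can be arbitrarily large; refuse
        // it here rather than allocating `len` bytes.
        return Err("announced length exceeds input");
    }
    Ok(&remaining[..len])
}

fn main() {
    let input = [0u8; 4];
    assert!(read_der_body(3, &input).is_ok());
    // A huge announced length is rejected instead of exhausting memory.
    assert!(read_der_body(usize::MAX, &input).is_err());
}
```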
Affected versions of this crate did not properly update the head and tail of the deque when inserting and removing elements from the front if, before insertion or removal, the tail of the deque was in the mirrored memory region, and if, after insertion or removal, the head of the deque is exactly at the beginning of the mirrored memory region. An attacker that controls both element insertion and removal …
Attempting to call grow on a spilled SmallVec with a value less than the current capacity causes corruption of memory allocator data structures. An attacker that controls the value passed to grow may exploit this flaw to obtain memory contents or gain remote code execution.
An issue was discovered in the array-tools crate before 0.3.2 for Rust. Affected versions of this crate don't guard against panics, so that partially uninitialized buffer is dropped when user-provided T::clone() panics in FixedCapacityDequeLike<T, A>::clone(). This causes memory corruption.
There is a bug in 0.73.0 of the Cranelift x64 backend that can create a scenario that could result in a potential sandbox escape in a WebAssembly module. Users of versions 0.73.0 of Cranelift should upgrade to either 0.73.1 or 0.74 to remediate this vulnerability. Users of Cranelift prior to 0.73.0 should update to 0.73.1 or 0.74 if they were not using the old default backend.
When unpacking a tarball that contains a symlink, the tar crate may create directories outside of the directory it's supposed to unpack into. The function errors when trying to create a file, but the directories have already been created at that point.
Versions of libsecp256k1 prior to 0.3.1 did not execute Scalar::check_overflow in constant time. This allows an attacker to potentially leak information via a timing attack. The flaw was corrected by modifying Scalar::check_overflow to execute in constant time.
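To illustrate the constant-time style the fix requires (a sketch under assumptions, not libsecp256k1's actual code), the comparison below checks whether a four-limb little-endian scalar is at least the modulus by performing the full multi-limb subtraction and inspecting the final borrow, touching every limb regardless of the values instead of returning early at the first differing limb:

```rust
// Illustrative constant-time-style overflow check: returns true when
// a >= modulus (i.e. the scalar "overflows" the group order).
fn overflows(a: &[u64; 4], modulus: &[u64; 4]) -> bool {
    let mut borrow = 0u64;
    for i in 0..4 {
        // Subtract limb and incoming borrow; record any borrow out.
        let (d, b1) = a[i].overflowing_sub(modulus[i]);
        let (_, b2) = d.overflowing_sub(borrow);
        borrow = (b1 as u64) | (b2 as u64);
    }
    // No borrow left over means the subtraction did not wrap: a >= modulus.
    borrow == 0
}

fn main() {
    assert!(overflows(&[5, 0, 0, 0], &[5, 0, 0, 0]));  // equal counts as overflow
    assert!(overflows(&[0, 0, 0, 1], &[1, 0, 0, 0]));  // high limb dominates
    assert!(!overflows(&[0, 0, 0, 0], &[1, 0, 0, 0])); // below the modulus
}
```

The work done is independent of where the values differ, which is the property an early-return comparison lacks.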
A mistake in error handling in untrusted before 0.6.2 could lead to an integer underflow and panic if a user of the crate didn't properly check for errors returned by untrusted. The combination of these two programming errors (one in untrusted and another by the user of this crate) could lead to a panic and possibly a denial of service of affected software. The error in untrusted is fixed in release 0.6.2 …
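The defensive pattern that avoids this class of underflow can be sketched as follows (illustrative only; remaining_len is a hypothetical helper, not untrusted's API): use checked arithmetic so an inconsistent pair of lengths produces an error value instead of wrapping around.

```rust
// Hypothetical helper: how many input bytes remain after `consumed`
// have been read. checked_sub turns an impossible state into None
// rather than an underflowed huge value.
fn remaining_len(total: usize, consumed: usize) -> Option<usize> {
    total.checked_sub(consumed)
}

fn main() {
    assert_eq!(remaining_len(10, 4), Some(6));
    // consumed > total is reported instead of underflowing and panicking later.
    assert_eq!(remaining_len(4, 10), None);
}
```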
HeaderMap::reserve() used usize::next_power_of_two() to calculate the increased capacity. However, next_power_of_two() silently overflows to 0 if given a sufficiently large number in release mode. If the map was not empty when the overflow happens, the library will invoke self.grow(0) and start infinite probing. This allows an attacker who controls the argument to reserve() to cause a potential denial of service (DoS). The flaw was corrected in 0.1.20 release of http crate.
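The overflow behavior behind this advisory is easy to demonstrate: next_power_of_two() wraps to 0 on overflow in release builds, while the checked variant makes the failure visible. A minimal sketch (grown_capacity is an illustrative name, not the http crate's code):

```rust
// Compute a grown capacity the way HeaderMap::reserve conceptually did,
// but with checked arithmetic so overflow surfaces as None instead of 0.
fn grown_capacity(requested: usize) -> Option<usize> {
    requested.checked_next_power_of_two()
}

fn main() {
    assert_eq!(grown_capacity(5), Some(8));
    // A sufficiently large request has no representable power of two;
    // the unchecked version would silently yield 0 here in release mode.
    assert_eq!(grown_capacity(usize::MAX), None);
}
```

Returning None lets the caller fail the reserve() call instead of entering infinite probing with a zero capacity.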
Calls to EVP_CipherUpdate, EVP_EncryptUpdate and EVP_DecryptUpdate may overflow the output length argument in some cases where the input length is close to the maximum permissible length for an integer on the platform. In such cases the return value from the function call will be 1 (indicating success), but the output length value will be negative. This could cause applications to behave incorrectly or crash. OpenSSL versions 1.1.1i and below are …
Affected versions of this crate suffered from an integer overflow bug when calculating the size of a buffer to use when encoding base64 using the encode_config_buf and encode_config functions. If the input string was large, this would cause a buffer to be allocated that was too small. Since this function writes to the buffer using unsafe code, it would allow an attacker to write beyond the buffer, causing memory corruption …
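An overflow-safe version of the buffer-size calculation can be sketched like this (illustrative only; encoded_len is a hypothetical helper, not the base64 crate's fixed code). Base64 emits 4 output bytes per 3-byte input block, so the multiplication is where a huge input length can wrap:

```rust
// Number of bytes needed to base64-encode `input_len` bytes (with
// padding), using checked arithmetic so a near-usize::MAX input fails
// loudly instead of under-allocating.
fn encoded_len(input_len: usize) -> Option<usize> {
    let blocks = input_len / 3 + usize::from(input_len % 3 != 0);
    blocks.checked_mul(4)
}

fn main() {
    assert_eq!(encoded_len(3), Some(4));
    assert_eq!(encoded_len(4), Some(8)); // partial block rounds up
    // Overflow is reported rather than producing a too-small buffer.
    assert_eq!(encoded_len(usize::MAX), None);
}
```

Had the original code used this shape, the unsafe writes could never have run past the allocation.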
The implementation of tf.raw_ops.QuantizeAndDequantizeV4Grad is vulnerable to an integer overflow issue caused by converting a signed integer value to an unsigned one and then allocating memory based on this value:

    import tensorflow as tf
    tf.raw_ops.QuantizeAndDequantizeV4Grad(
        gradients=[1.0,2.0],
        input=[1.0,1.0],
        input_min=[0.0],
        input_max=[10.0],
        axis=-100)
The implementation of tf.raw_ops.SparseReshape can be made to trigger an integral division by 0 exception:

    import numpy as np
    import tensorflow as tf
    tf.raw_ops.SparseReshape(
        input_indices = np.ones((1,3)),
        input_shape = np.array([1,1,0]),
        new_shape = np.array([1,0]))
An issue was discovered in the ws crate through 2020-09-25 for Rust. The outgoing buffer is not properly limited, leading to a remote memory-consumption attack.
The strided slice implementation in TFLite has a logic bug which can allow an attacker to trigger an infinite loop. This arises from newly introduced support for ellipsis in axis definition:

    for (int i = 0; i < effective_dims;) {
      if ((1 << i) & op_context->params->ellipsis_mask) {
        // …
        int ellipsis_end_idx = std::min(
            i + 1 + num_add_axis + op_context->input_dims - begin_count,
            effective_dims);
        // …
        for (; i < ellipsis_end_idx; ++i) …
The code for tf.raw_ops.SaveV2 does not properly validate the inputs and an attacker can trigger a null pointer dereference:

    import tensorflow as tf
    tf.raw_ops.SaveV2(
        prefix=['tensorflow'],
        tensor_name=['v'],
        shape_and_slices=[],
        tensors=[1,2,3])
The internal update-sigma function was implemented incorrectly; depending on debug assertions, it could have produced an incorrect result or panicked for certain inputs.
An issue was discovered in the sodiumoxide crate starting with 0.2.0 and prior to 0.2.5 for Rust. generichash::Digest::eq compares itself to itself and thus has degenerate security properties.
An issue was discovered in the rand_core crate before 0.6.2 for Rust. Because read_u32_into and read_u64_into mishandle certain buffer-length checks, a random number generator may be seeded with too little data. The vulnerability was introduced in v0.6.0; the advisory doesn't apply to earlier minor version numbers.
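The intended invariant is simply that the source buffer supplies 4 bytes for every u32 it is asked to fill. A sketch of the check (illustrative only; read_u32_into_checked is a hypothetical function, not rand_core's API):

```rust
use std::convert::TryInto;

// Fill `dst` with little-endian u32s from `src`, refusing a source
// that is too short to seed every word.
fn read_u32_into_checked(src: &[u8], dst: &mut [u32]) {
    assert!(src.len() >= 4 * dst.len(), "source too short to seed RNG");
    for (out, chunk) in dst.iter_mut().zip(src.chunks_exact(4)) {
        *out = u32::from_le_bytes(chunk.try_into().unwrap());
    }
}

fn main() {
    let mut dst = [0u32; 2];
    read_u32_into_checked(&[1, 0, 0, 0, 2, 0, 0, 0], &mut dst);
    assert_eq!(dst, [1, 2]);
}
```

A mishandled version of this length check is what allowed an RNG to be seeded from fewer bytes than requested.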
An issue was discovered in the anymap crate through 0.12.1 for Rust. It violates soundness via conversion of a *u8 to a *u64.
In the affected versions of this crate, the bounded channel implementation incorrectly assumes that Vec::from_iter has allocated capacity equal to the number of iterator elements. Vec::from_iter does not actually guarantee that and may allocate extra memory. The destructor of the bounded channel reconstructs a Vec from the raw pointer based on the incorrect assumption described above. This is unsound, causing deallocation with the incorrect capacity when Vec::from_iter has allocated different sizes …
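The sound pattern is to record the Vec's actual capacity at deconstruction time and pass that same value back to Vec::from_raw_parts, never a value derived from the element count. A minimal sketch (round_trip is illustrative, not the crate's code):

```rust
use std::mem::ManuallyDrop;

// Deconstruct a Vec into raw parts and rebuild it, reusing the REAL
// capacity rather than assuming capacity == len.
fn round_trip(v: Vec<u32>) -> Vec<u32> {
    let mut v = ManuallyDrop::new(v); // prevent a double free
    let (ptr, len, cap) = (v.as_mut_ptr(), v.len(), v.capacity());
    // SAFETY: ptr/len/cap all come from the same Vec, which is never
    // dropped, so ownership transfers cleanly to the new Vec.
    unsafe { Vec::from_raw_parts(ptr, len, cap) }
}

fn main() {
    let v: Vec<u32> = (0..5).collect(); // capacity may exceed 5
    let v = round_trip(v);
    assert_eq!(v, vec![0, 1, 2, 3, 4]);
}
```

Feeding from_raw_parts a guessed capacity (e.g. the element count) deallocates with the wrong size, which is exactly the unsoundness the advisory describes.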
An issue was discovered in the iced-x86 crate through 1.10.3 for Rust. In Decoder::new(), slice.get_unchecked(slice.length()) is used unsafely.
Due to incomplete validation in the MKL implementation of requantization, an attacker can trigger undefined behavior via binding a reference to a null pointer, or can access data outside the bounds of heap allocated arrays:

    import tensorflow as tf
    tf.raw_ops.RequantizationRangePerChannel(
        input=[],
        input_min=[0,0,0,0,0],
        input_max=[1,1,1,1,1],
        clip_value_max=1)
Due to incomplete validation in tf.raw_ops.QuantizeV2, an attacker can trigger undefined behavior via binding a reference to a null pointer, or can access data outside the bounds of heap allocated arrays:

    import tensorflow as tf
    tf.raw_ops.QuantizeV2(
        input=[1,2,3],
        min_range=[1,2],
        max_range=[],
        T=tf.qint32,
        mode='SCALED',
        round_mode='HALF_AWAY_FROM_ZERO',
        narrow_range=False,
        axis=1,
        ensure_minimum_range=3)
Due to incomplete validation in tf.raw_ops.QuantizeV2, an attacker can trigger undefined behavior via binding a reference to a null pointer or can access data outside the bounds of heap allocated arrays: import tensorflow as tf tf.raw_ops.QuantizeV2( input=[1,2,3], min_range=[1,2], max_range=[], T=tf.qint32, mode='SCALED', round_mode='HALF_AWAY_FROM_ZERO', narrow_range=False, axis=1, ensure_minimum_range=3)
Due to incomplete validation in tf.raw_ops.QuantizeV2, an attacker can trigger undefined behavior via binding a reference to a null pointer or can access data outside the bounds of heap allocated arrays: import tensorflow as tf tf.raw_ops.QuantizeV2( input=[1,2,3], min_range=[1,2], max_range=[], T=tf.qint32, mode='SCALED', round_mode='HALF_AWAY_FROM_ZERO', narrow_range=False, axis=1, ensure_minimum_range=3)
An attacker can trigger a denial of service via a segmentation fault in tf.raw_ops.MaxPoolGrad caused by missing validation: import tensorflow as tf tf.raw_ops.MaxPoolGrad( orig_input = tf.constant([], shape=[3, 0, 0, 2], dtype=tf.float32), orig_output = tf.constant([], shape=[3, 0, 0, 2], dtype=tf.float32), grad = tf.constant([], shape=[3, 0, 0, 2], dtype=tf.float32), ksize = [1, 16, 16, 1], strides = [1, 16, 18, 1], padding = "EXPLICIT", explicit_paddings = [0, 0, 14, 3, 15, 5, …
An attacker can trigger a denial of service via a segmentation fault in tf.raw_ops.MaxPoolGrad caused by missing validation: import tensorflow as tf tf.raw_ops.MaxPoolGrad( orig_input = tf.constant([], shape=[3, 0, 0, 2], dtype=tf.float32), orig_output = tf.constant([], shape=[3, 0, 0, 2], dtype=tf.float32), grad = tf.constant([], shape=[3, 0, 0, 2], dtype=tf.float32), ksize = [1, 16, 16, 1], strides = [1, 16, 18, 1], padding = "EXPLICIT", explicit_paddings = [0, 0, 14, 3, 15, 5, …
An attacker can trigger a denial of service via a segmentation fault in tf.raw_ops.MaxPoolGrad caused by missing validation: import tensorflow as tf tf.raw_ops.MaxPoolGrad( orig_input = tf.constant([], shape=[3, 0, 0, 2], dtype=tf.float32), orig_output = tf.constant([], shape=[3, 0, 0, 2], dtype=tf.float32), grad = tf.constant([], shape=[3, 0, 0, 2], dtype=tf.float32), ksize = [1, 16, 16, 1], strides = [1, 16, 18, 1], padding = "EXPLICIT", explicit_paddings = [0, 0, 14, 3, 15, 5, …
The tough library, prior to 0.7.1, does not properly verify the uniqueness of keys in the signatures provided to meet the threshold of cryptographic signatures. It allows someone with access to a valid signing key to create multiple valid signatures in order to circumvent TUF requiring a minimum threshold of unique keys before the metadata is considered valid. AWS would like to thank Erick Tryzelaar of the Google Fuchsia Team …
An issue was discovered in the rusqlite crate before 0.23.0 for Rust. Memory safety can be violated via the repr(Rust) type.
An issue was discovered in the buttplug crate before 1.0.4 for Rust. ButtplugFutureStateShared does not properly consider (!Send|!Sync) objects, leading to a data race.
rust-vmm vm-memory before 0.1.1 and 0.2.x before 0.2.1 allows attackers to cause a denial of service (loss of IP networking) because read_obj and write_obj do not properly access memory. This affects aarch64 (with musl or glibc) and x86_64 (with musl).
In versions of nanorand prior to 0.5.1, RandomGen implementations for standard unsigned integers could fail to properly generate numbers, due to using bit-shifting to truncate a 64-bit number, rather than just an as conversion. This often manifested as RNGs returning nothing but 0, including the cryptographically secure ChaCha random number generator.
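A simplified illustration of the bug class (not nanorand's exact code): truncating with shifts instead of an as conversion can zero out the result entirely.

```rust
fn main() {
    let raw: u64 = 0x1234_5678_9abc_def0;
    // Buggy pattern (simplified): attempting to truncate a 64-bit value to
    // 32 bits via shifting. Shifting away all 64 bits in two steps leaves
    // zero, so every "random" output becomes 0.
    let buggy = (raw >> 32) >> 32;
    assert_eq!(buggy, 0);
    // Correct truncation: an `as` conversion keeps the low 32 bits.
    let correct = raw as u32;
    assert_eq!(correct, 0x9abc_def0);
}
```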
Contao >=4.0.0 allows backend XSS via HTML attributes to an HTML field. Fixed in 4.4.56, 4.9.18, 4.11.7.
Affected versions of this crate exposed several methods which took self by immutable reference, despite requesting the RenderDoc API to set a mutable value internally. This is technically unsound, and calling these methods from multiple threads without synchronization could lead to unexpected and unpredictable behavior. The flaw was corrected in release 0.5.0.
If during the first dereference of Lazy the initialization function panics, subsequent dereferences will execute std::hint::unreachable_unchecked. Applications with panic = "abort" are not affected, as there will be no subsequent dereferences.
Affected versions of this crate use the time crate and the method Duration::seconds to parse the Max-Age duration cookie setting. This method will panic if the value is greater than 2^64/1000 and less than or equal to 2^64, which can result in denial of service for a client or server. This flaw was corrected by explicitly checking for the Max-Age being in this integer range and clamping the value to …
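A hypothetical sketch of the described clamping fix (names and bounds here are illustrative only, not the crate's actual API): cap the Max-Age seconds value so the later conversion to milliseconds cannot overflow and panic.

```rust
fn main() {
    // Largest seconds value whose millisecond conversion still fits in i64.
    const MAX_SAFE_SECS: u64 = i64::MAX as u64 / 1000;

    // Hypothetical helper: clamp before converting instead of panicking.
    fn clamp_max_age(secs: u64) -> i64 {
        (secs.min(MAX_SAFE_SECS) * 1000) as i64
    }

    // A pathological Max-Age no longer panics; it is clamped instead.
    assert!(clamp_max_age(u64::MAX) > 0);
    assert_eq!(clamp_max_age(60), 60_000);
}
```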
@asyncapi/java-spring-cloud-stream-template generates a Spring Cloud Stream (SCSt) microservice. Arbitrary code injection was possible when an attacker controls the AsyncAPI document. An example is provided in GHSA-xj6r-2jpm-qvxp. There are no mitigations available and all users are advised to update.
Improper check in CheckboxGroup in com.vaadin:vaadin-checkbox-flow allows attackers to modify the value of a disabled Checkbox inside enabled CheckboxGroup component via unspecified vectors.
If custom root certificates were registered with a ClientBuilder, the hostname of the target server would not be validated against its presented leaf certificate. This issue was fixed by properly configuring the trust evaluation logic to perform that check.
All versions of rust-openssl prior to 0.9.0 contained numerous insecure defaults including off-by-default certificate verification and no API to perform hostname verification. Unless configured correctly by a developer, these defaults could allow an attacker to perform man-in-the-middle attacks. The problem was addressed in newer versions by enabling certificate verification by default and exposing APIs to perform hostname verification. Use the SslConnector and SslAcceptor types to take advantage of these new …
When used on Windows platforms, all versions of Hyper prior to 0.9.4 did not perform hostname verification when making HTTPS requests. This allows an attacker to perform MitM attacks by presenting any valid CA-issued certificate, even if there's a hostname mismatch. The problem was addressed by leveraging rust-openssl's built-in support for hostname verification.
HTTP pipelining issues and request smuggling attacks are possible due to incorrect Transfer-Encoding header parsing. It is possible to conduct HTTP request smuggling attacks (CL:TE/TE:TE) by sending invalid Transfer-Encoding headers. By manipulating the HTTP response, the attacker could poison a web cache, perform an XSS attack, or obtain sensitive information from requests other than their own.
hyper's HTTP server code had a flaw that caused it to incorrectly understand some requests with multiple transfer-encoding headers as having a chunked payload, when they should have been rejected as illegal. This, combined with an upstream HTTP proxy that understands the request payload boundary differently, can result in "request smuggling" or "desync attacks".
Vulnerable versions of hyper allow GET requests to have bodies, even if there is no Transfer-Encoding or Content-Length header. As per the HTTP 1.1 specification, such requests do not have bodies, so the body will be interpreted as a separate HTTP request. This allows an attacker who can control the body and method of an HTTP request made by hyper to inject a request with headers that would not otherwise …
Affected versions of this crate did not properly detect invalid requests that could allow HTTP/1 request smuggling (HRS) attacks when running alongside a vulnerable front-end proxy server. This can result in leaked internal and/or user data, including credentials, when the front-end proxy is also vulnerable. Popular front-end proxies and load balancers already mitigate HRS attacks so it is recommended that they are also kept up to date; check your specific …
Affected versions of this crate switched the length and capacity arguments in the Vec::from_raw_parts() constructor, which could lead to memory corruption or data leakage.
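For reference, a safe round-trip showing the correct argument order (this is standard Vec API usage, not the affected crate's code); swapping the last two arguments, as the affected versions did, hands the allocator a bogus capacity:

```rust
use std::mem::ManuallyDrop;

fn main() {
    let v = vec![1u8, 2, 3];
    // Prevent the original Vec from freeing the buffer we are about to reuse.
    let mut v = ManuallyDrop::new(v);
    let (ptr, len, cap) = (v.as_mut_ptr(), v.len(), v.capacity());
    // Vec::from_raw_parts takes (pointer, length, capacity) in that order.
    // Reversing length and capacity can corrupt memory or leak data.
    let rebuilt = unsafe { Vec::from_raw_parts(ptr, len, cap) };
    assert_eq!(rebuilt, [1, 2, 3]);
}
```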
The implementation of sparse reduction operations in TensorFlow can trigger accesses outside of bounds of heap allocated data: import tensorflow as tf x = tf.SparseTensor( indices=[[773, 773, 773], [773, 773, 773]], values=[1, 1], dense_shape=[337, 337, 337]) tf.sparse.reduce_sum(x, 1)
The implementation of sparse reduction operations in TensorFlow can trigger accesses outside of bounds of heap allocated data: import tensorflow as tf x = tf.SparseTensor( indices=[[773, 773, 773], [773, 773, 773]], values=[1, 1], dense_shape=[337, 337, 337]) tf.sparse.reduce_sum(x, 1)
The implementation of sparse reduction operations in TensorFlow can trigger accesses outside of bounds of heap allocated data: import tensorflow as tf x = tf.SparseTensor( indices=[[773, 773, 773], [773, 773, 773]], values=[1, 1], dense_shape=[337, 337, 337]) tf.sparse.reduce_sum(x, 1)
TFLite's GatherNd implementation does not support negative indices but there are no checks for this situation. Hence, an attacker can read arbitrary data from the heap by carefully crafting a model with negative values in indices. Similar issue exists in Gather implementation. import tensorflow as tf import numpy as np tf.compat.v1.disable_v2_behavior() params = tf.compat.v1.placeholder(name="params", dtype=tf.int64, shape=(1,)) indices = tf.compat.v1.placeholder(name="indices", dtype=tf.int64, shape=()) out = tf.gather(params, indices, name='out') with tf.compat.v1.Session() as sess: …
TFLite's GatherNd implementation does not support negative indices but there are no checks for this situation. Hence, an attacker can read arbitrary data from the heap by carefully crafting a model with negative values in indices. Similar issue exists in Gather implementation. import tensorflow as tf import numpy as np tf.compat.v1.disable_v2_behavior() params = tf.compat.v1.placeholder(name="params", dtype=tf.int64, shape=(1,)) indices = tf.compat.v1.placeholder(name="indices", dtype=tf.int64, shape=()) out = tf.gather(params, indices, name='out') with tf.compat.v1.Session() as sess: …
TFLite's GatherNd implementation does not support negative indices but there are no checks for this situation. Hence, an attacker can read arbitrary data from the heap by carefully crafting a model with negative values in indices. Similar issue exists in Gather implementation. import tensorflow as tf import numpy as np tf.compat.v1.disable_v2_behavior() params = tf.compat.v1.placeholder(name="params", dtype=tf.int64, shape=(1,)) indices = tf.compat.v1.placeholder(name="indices", dtype=tf.int64, shape=()) out = tf.gather(params, indices, name='out') with tf.compat.v1.Session() as sess: …
TFLite's expand_dims.cc contains a vulnerability which allows reading one element outside of bounds of heap allocated data: if (axis < 0) { axis = input_dims.size + 1 + axis; } TF_LITE_ENSURE(context, axis <= input_dims.size); TfLiteIntArray* output_dims = TfLiteIntArrayCreate(input_dims.size + 1); for (int i = 0; i < output_dims->size; ++i) { if (i < axis) { output_dims->data[i] = input_dims.data[i]; } else if (i == axis) { output_dims->data[i] = 1; } else …
TFLite's expand_dims.cc contains a vulnerability which allows reading one element outside of bounds of heap allocated data: if (axis < 0) { axis = input_dims.size + 1 + axis; } TF_LITE_ENSURE(context, axis <= input_dims.size); TfLiteIntArray* output_dims = TfLiteIntArrayCreate(input_dims.size + 1); for (int i = 0; i < output_dims->size; ++i) { if (i < axis) { output_dims->data[i] = input_dims.data[i]; } else if (i == axis) { output_dims->data[i] = 1; } else …
TFLite's expand_dims.cc contains a vulnerability which allows reading one element outside of bounds of heap allocated data: if (axis < 0) { axis = input_dims.size + 1 + axis; } TF_LITE_ENSURE(context, axis <= input_dims.size); TfLiteIntArray* output_dims = TfLiteIntArrayCreate(input_dims.size + 1); for (int i = 0; i < output_dims->size; ++i) { if (i < axis) { output_dims->data[i] = input_dims.data[i]; } else if (i == axis) { output_dims->data[i] = 1; } else …
It is possible to nest a tf.map_fn within another tf.map_fn call. However, if the input tensor is a RaggedTensor and there is no function signature provided, code assumes the output is a fully specified tensor and fills output buffer with uninitialized contents from the heap: import tensorflow as tf x = tf.ragged.constant([[1,2,3], [4,5], [6]]) t = tf.map_fn(lambda r: tf.map_fn(lambda y: r, r), x) z = tf.ragged.constant([[[1,2,3],[1,2,3],[1,2,3]],[[4,5],[4,5]],[[6]]]) The t and z …
It is possible to nest a tf.map_fn within another tf.map_fn call. However, if the input tensor is a RaggedTensor and there is no function signature provided, code assumes the output is a fully specified tensor and fills output buffer with uninitialized contents from the heap: import tensorflow as tf x = tf.ragged.constant([[1,2,3], [4,5], [6]]) t = tf.map_fn(lambda r: tf.map_fn(lambda y: r, r), x) z = tf.ragged.constant([[[1,2,3],[1,2,3],[1,2,3]],[[4,5],[4,5]],[[6]]]) The t and z …
It is possible to nest a tf.map_fn within another tf.map_fn call. However, if the input tensor is a RaggedTensor and there is no function signature provided, code assumes the output is a fully specified tensor and fills output buffer with uninitialized contents from the heap: import tensorflow as tf x = tf.ragged.constant([[1,2,3], [4,5], [6]]) t = tf.map_fn(lambda r: tf.map_fn(lambda y: r, r), x) z = tf.ragged.constant([[[1,2,3],[1,2,3],[1,2,3]],[[4,5],[4,5]],[[6]]]) The t and z …
An attacker can read from outside of bounds of heap allocated data by sending specially crafted illegal arguments to BoostedTreesSparseCalculateBestFeatureSplit: import tensorflow as tf tf.raw_ops.BoostedTreesSparseCalculateBestFeatureSplit( node_id_range=[0,10], stats_summary_indices=[[1, 2, 3, 0x1000000]], stats_summary_values=[1.0], stats_summary_shape=[1,1,1,1], l1=l2=[1.0], tree_complexity=[0.5], min_node_weight=[1.0], logits_dimension=3, split_type='inequality')
An attacker can read from outside of bounds of heap allocated data by sending specially crafted illegal arguments to BoostedTreesSparseCalculateBestFeatureSplit: import tensorflow as tf tf.raw_ops.BoostedTreesSparseCalculateBestFeatureSplit( node_id_range=[0,10], stats_summary_indices=[[1, 2, 3, 0x1000000]], stats_summary_values=[1.0], stats_summary_shape=[1,1,1,1], l1=l2=[1.0], tree_complexity=[0.5], min_node_weight=[1.0], logits_dimension=3, split_type='inequality')
An attacker can read from outside of bounds of heap allocated data by sending specially crafted illegal arguments to BoostedTreesSparseCalculateBestFeatureSplit: import tensorflow as tf tf.raw_ops.BoostedTreesSparseCalculateBestFeatureSplit( node_id_range=[0,10], stats_summary_indices=[[1, 2, 3, 0x1000000]], stats_summary_values=[1.0], stats_summary_shape=[1,1,1,1], l1=l2=[1.0], tree_complexity=[0.5], min_node_weight=[1.0], logits_dimension=3, split_type='inequality')
An attacker can read from outside of bounds of heap allocated data by sending specially crafted illegal arguments to tf.raw_ops.UpperBound: import tensorflow as tf tf.raw_ops.UpperBound( sorted_input=[1,2,3], values=tf.constant(value=[[0,0,0],[1,1,1],[2,2,2]],dtype=tf.int64), out_type=tf.int64)
An attacker can read from outside of bounds of heap allocated data by sending specially crafted illegal arguments to tf.raw_ops.UpperBound: import tensorflow as tf tf.raw_ops.UpperBound( sorted_input=[1,2,3], values=tf.constant(value=[[0,0,0],[1,1,1],[2,2,2]],dtype=tf.int64), out_type=tf.int64)
An attacker can read from outside of bounds of heap allocated data by sending specially crafted illegal arguments to tf.raw_ops.UpperBound: import tensorflow as tf tf.raw_ops.UpperBound( sorted_input=[1,2,3], values=tf.constant(value=[[0,0,0],[1,1,1],[2,2,2]],dtype=tf.int64), out_type=tf.int64)
An attacker can read from outside of bounds of heap allocated data by sending specially crafted illegal arguments to tf.raw_ops.SdcaOptimizerV2
An attacker can read from outside of bounds of heap allocated data by sending specially crafted illegal arguments to tf.raw_ops.SdcaOptimizerV2
An attacker can read from outside of bounds of heap allocated data by sending specially crafted illegal arguments to tf.raw_ops.SdcaOptimizerV2
An attacker can trigger a read from outside of bounds of heap allocated data by sending invalid arguments to tf.raw_ops.ResourceScatterUpdate: import tensorflow as tf v = tf.Variable([b'vvv']) tf.raw_ops.ResourceScatterUpdate( resource=v.handle, indices=[0], updates=['1', '2', '3', '4', '5'])
An attacker can trigger a read from outside of bounds of heap allocated data by sending invalid arguments to tf.raw_ops.ResourceScatterUpdate: import tensorflow as tf v = tf.Variable([b'vvv']) tf.raw_ops.ResourceScatterUpdate( resource=v.handle, indices=[0], updates=['1', '2', '3', '4', '5'])
An attacker can trigger a read from outside of bounds of heap allocated data by sending invalid arguments to tf.raw_ops.ResourceScatterUpdate: import tensorflow as tf v = tf.Variable([b'vvv']) tf.raw_ops.ResourceScatterUpdate( resource=v.handle, indices=[0], updates=['1', '2', '3', '4', '5'])
If the arguments to tf.raw_ops.RaggedGather don't determine a valid ragged tensor, the code can trigger a read from outside of bounds of heap allocated buffers. import tensorflow as tf tf.raw_ops.RaggedGather( params_nested_splits = [0,0,0], params_dense_values = [1,1], indices = [0,0,9,0,0], OUTPUT_RAGGED_RANK=0) In debug mode, the same code triggers a CHECK failure.
If the arguments to tf.raw_ops.RaggedGather don't determine a valid ragged tensor, the code can trigger a read from outside of bounds of heap allocated buffers. import tensorflow as tf tf.raw_ops.RaggedGather( params_nested_splits = [0,0,0], params_dense_values = [1,1], indices = [0,0,9,0,0], OUTPUT_RAGGED_RANK=0) In debug mode, the same code triggers a CHECK failure.
If the arguments to tf.raw_ops.RaggedGather don't determine a valid ragged tensor, the code can trigger a read from outside of bounds of heap allocated buffers. import tensorflow as tf tf.raw_ops.RaggedGather( params_nested_splits = [0,0,0], params_dense_values = [1,1], indices = [0,0,9,0,0], OUTPUT_RAGGED_RANK=0) In debug mode, the same code triggers a CHECK failure.
An attacker can trigger a crash via a CHECK-fail in debug builds of TensorFlow using tf.raw_ops.ResourceGather or a read from outside the bounds of heap allocated data in the same API in a release build: import tensorflow as tf tensor = tf.constant(value=[[1,2],[3,4],[5,6]],shape=(3,2),dtype=tf.uint32) v = tf.Variable(tensor) tf.raw_ops.ResourceGather( resource=v.handle, indices=[0], dtype=tf.uint32, batch_dims=10, validate_indices=False)
An attacker can trigger a crash via a CHECK-fail in debug builds of TensorFlow using tf.raw_ops.ResourceGather or a read from outside the bounds of heap allocated data in the same API in a release build: import tensorflow as tf tensor = tf.constant(value=[[1,2],[3,4],[5,6]],shape=(3,2),dtype=tf.uint32) v = tf.Variable(tensor) tf.raw_ops.ResourceGather( resource=v.handle, indices=[0], dtype=tf.uint32, batch_dims=10, validate_indices=False)
An attacker can trigger a crash via a CHECK-fail in debug builds of TensorFlow using tf.raw_ops.ResourceGather or a read from outside the bounds of heap allocated data in the same API in a release build: import tensorflow as tf tensor = tf.constant(value=[[1,2],[3,4],[5,6]],shape=(3,2),dtype=tf.uint32) v = tf.Variable(tensor) tf.raw_ops.ResourceGather( resource=v.handle, indices=[0], dtype=tf.uint32, batch_dims=10, validate_indices=False)
The implementation for tf.raw_ops.FractionalAvgPoolGrad can be tricked into accessing data outside of bounds of heap allocated buffers: import tensorflow as tf import numpy as np tf.raw_ops.FractionalAvgPoolGrad( orig_input_tensor_shape=[0,1,2,3], out_backprop = np.array([[[[541],[541]],[[541],[541]]]]), row_pooling_sequence=[0, 0, 0, 0, 0], col_pooling_sequence=[-2, 0, 0, 2, 0], overlapping=True)
The implementation for tf.raw_ops.FractionalAvgPoolGrad can be tricked into accessing data outside of bounds of heap allocated buffers: import tensorflow as tf import numpy as np tf.raw_ops.FractionalAvgPoolGrad( orig_input_tensor_shape=[0,1,2,3], out_backprop = np.array([[[[541],[541]],[[541],[541]]]]), row_pooling_sequence=[0, 0, 0, 0, 0], col_pooling_sequence=[-2, 0, 0, 2, 0], overlapping=True)
The implementation for tf.raw_ops.FractionalAvgPoolGrad can be tricked into accessing data outside of bounds of heap allocated buffers: import tensorflow as tf import numpy as np tf.raw_ops.FractionalAvgPoolGrad( orig_input_tensor_shape=[0,1,2,3], out_backprop = np.array([[[[541],[541]],[[541],[541]]]]), row_pooling_sequence=[0, 0, 0, 0, 0], col_pooling_sequence=[-2, 0, 0, 2, 0], overlapping=True)
Serializing of headers to the socket did not filter the values for newline bytes (\r or \n), which allowed for header values to split a request or response. People would not likely include newlines in the headers in their own applications, so the way for most people to exploit this is if an application constructs headers based on unsanitized user input. This issue was fixed by replacing all newline characters …
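A hypothetical sanitizer mirroring the described fix (the function name is illustrative, not the crate's API): strip CR and LF from a header value so an attacker-controlled value can no longer split the serialized message into an extra request or response.

```rust
fn main() {
    // Replace the CRLF bytes that would otherwise terminate the header
    // line early and smuggle an injected header or request.
    fn sanitize(value: &str) -> String {
        value.replace('\r', " ").replace('\n', " ")
    }

    let attacker_controlled = "gzip\r\nX-Injected: 1";
    assert_eq!(sanitize(attacker_controlled), "gzip  X-Injected: 1");
}
```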
An issue was discovered in the telemetry crate through 0.1.2 for Rust. There is a drop of uninitialized memory if a value.clone() call panics within misc::vec_with_size().
An issue was discovered in the autorand crate before 0.2.3 for Rust. Because of impl Random on arrays, uninitialized memory can be dropped when a panic occurs, leading to memory corruption.
An issue was discovered in the adtensor crate through 0.0.3 for Rust. There is a drop of uninitialized memory via the FromIterator implementation for Vector and Matrix.
The implementations of pooling in TFLite are vulnerable to division by 0 errors as there are no checks for divisors not being 0.
The implementations of pooling in TFLite are vulnerable to division by 0 errors as there are no checks for divisors not being 0.
The implementations of pooling in TFLite are vulnerable to division by 0 errors as there are no checks for divisors not being 0.
The implementation of division in TFLite is vulnerable to a division by 0 error. There is no check that the divisor tensor does not contain zero elements.
The implementation of division in TFLite is vulnerable to a division by 0 error. There is no check that the divisor tensor does not contain zero elements.
The implementation of division in TFLite is vulnerable to a division by 0 error. There is no check that the divisor tensor does not contain zero elements.
An attacker can craft a TFLite model that would trigger a division by zero error in LSH implementation. int RunningSignBit(const TfLiteTensor* input, const TfLiteTensor* weight, float seed) { int input_item_bytes = input->bytes / SizeOfDimension(input, 0); // … } There is no check that the first dimension of the input is non zero.
An attacker can craft a TFLite model that would trigger a division by zero error in LSH implementation. int RunningSignBit(const TfLiteTensor* input, const TfLiteTensor* weight, float seed) { int input_item_bytes = input->bytes / SizeOfDimension(input, 0); // … } There is no check that the first dimension of the input is non zero.
An attacker can craft a TFLite model that would trigger a division by zero error in LSH implementation. int RunningSignBit(const TfLiteTensor* input, const TfLiteTensor* weight, float seed) { int input_item_bytes = input->bytes / SizeOfDimension(input, 0); // … } There is no check that the first dimension of the input is non zero.
An attacker can cause denial of service in applications serving models using tf.raw_ops.UnravelIndex by triggering a division by 0: import tensorflow as tf tf.raw_ops.UnravelIndex(indices=-1, dims=[1,0,2])
An attacker can cause denial of service in applications serving models using tf.raw_ops.UnravelIndex by triggering a division by 0: import tensorflow as tf tf.raw_ops.UnravelIndex(indices=-1, dims=[1,0,2])
An attacker can cause denial of service in applications serving models using tf.raw_ops.UnravelIndex by triggering a division by 0: import tensorflow as tf tf.raw_ops.UnravelIndex(indices=-1, dims=[1,0,2])
An issue was discovered in the pancurses crate through 0.16.1 for Rust. printw and mvprintw have format string vulnerabilities.
The implementation of tf.raw_ops.SparseDenseCwiseDiv is vulnerable to a division by 0 error: import tensorflow as tf import numpy as np tf.raw_ops.SparseDenseCwiseDiv( sp_indices=np.array([[4]]), sp_values=np.array([-400]), sp_shape=np.array([647.]), dense=np.array([0]))
The implementation of tf.raw_ops.SparseDenseCwiseDiv is vulnerable to a division by 0 error: import tensorflow as tf import numpy as np tf.raw_ops.SparseDenseCwiseDiv( sp_indices=np.array([[4]]), sp_values=np.array([-400]), sp_shape=np.array([647.]), dense=np.array([0]))
The implementation of tf.raw_ops.SparseDenseCwiseDiv is vulnerable to a division by 0 error: import tensorflow as tf import numpy as np tf.raw_ops.SparseDenseCwiseDiv( sp_indices=np.array([[4]]), sp_values=np.array([-400]), sp_shape=np.array([647.]), dense=np.array([0]))
Affected versions of this crate did not properly reset a streaming state. Resetting a streaming state without finalising it first produced incorrect results. The flaw was corrected by not first checking if the state had already been reset when calling reset().
pleaser before 0.4.0 allows a local unprivileged attacker to gain knowledge about the existence of files or directories in privileged locations via the search_path function, the --check option, or the -d option.
fake-static allows converting a reference with any lifetime into a reference with 'static lifetime without the unsafe keyword. Internally, this crate does not use unsafe code; it instead exploits a soundness bug in rustc.
Affected versions of this crate did not properly verify ed25519 signatures. Any signature with a correct length was considered valid. This allows an attacker to impersonate any node identity.
Affected versions of this crate caused traps and/or memory unsafety by zero-initializing references. They also could lead to uninitialized memory being dropped if the field for which the offset is requested was behind a deref coercion, and that deref coercion caused a panic. The flaw was corrected by using MaybeUninit.
tokio-rustls does not call process_new_packets immediately after read, so the expected termination condition wants_read always returns true. As long as new incoming data arrives faster than it is processed and the reader does not return pending, data will be buffered. This may cause DoS.
native_cpuid::cpuid_count() exposes the unsafe __cpuid_count() intrinsic from core::arch::x86 or core::arch::x86_64 as a safe function, and uses it internally, without checking the safety requirement: The CPU the program is currently running on supports the function being called. CPUID is available in most, but not all, x86/x86_64 environments. The crate compiles only on these architectures, so others are unaffected. This issue is mitigated by the fact that affected programs are expected to …
An issue was discovered in the ozone crate through version 0.1.0 for Rust. Memory safety is violated because of the dropping of uninitialized memory.
A double free can occur in remove_set upon a panic in a Drop impl. When removing a set of elements, ptr::drop_in_place is called on each of the element to be removed. If the Drop impl of one of these elements panics then the previously dropped elements can be dropped again.
The clone_from implementation for IdMap drops the values present in the map and then begins cloning values from the other map. If a .clone() call panics, then the aforementioned dropped elements can be freed again. get_or_insert: get_or_insert reserves space for a value before calling the user-provided insertion function f. If the function f panics then uninitialized or previously freed memory can be dropped. remove_set: When removing a set of …
A double free can occur in get_or_insert upon a panic of a user-provided f function. get_or_insert reserves space for a value, before calling the user provided insertion function f. If the function f panics then uninitialized or previously freed memory can be dropped.
An issue was discovered in the through crate through 2021-02-18 for Rust. There is a double free (in through and through_and) upon a panic of the map function.
Affected versions of sys-info use a static, global, list to store temporary disk information while running. The function that cleans up this list, DFCleanup, assumes a single threaded environment and will try to free the same memory twice in a multithreaded environment. This results in consistent double-frees and segfaults when calling sys_info::disk_info from multiple threads at once. The issue was fixed by moving the global variable into a local scope.
Attempting to call grow on a spilled SmallVec with a value equal to the current capacity causes it to free the existing data. This performs a double free immediately and may lead to use-after-free on subsequent accesses to the SmallVec contents. An attacker that controls the value passed to grow may exploit this flaw to obtain memory contents or gain remote code execution.
If an iterator passed to SmallVec::insert_many panicked in Iterator::next, destructors were run during unwinding while the vector was in an inconsistent state, possibly causing a double free (a destructor running on two copies of the same value). This is fixed in smallvec 0.6.3 by ensuring that the vector's length is not updated to include moved items until they have been removed from their original positions. Items may now be leaked …
An issue was discovered in the slice-deque crate through 2021-02-19 for Rust. A double drop can occur in SliceDeque::drain_filter upon a panic in a predicate function.
An issue was discovered in the ordnung crate through version 0.0.1 for Rust. compact::Vec violates memory safety via a remove() double free.
Affected versions of this crate did not properly implement the Matrix::zip_elements method, which causes a double free when the given trait implementation panics. This allows an attacker to corrupt or take control of the memory.
An issue was discovered in the insert_many crate through 2021-01-26 for Rust. Elements may be dropped twice if a .next() method panics.
An issue was discovered in the http crate before 0.1.20 for Rust. The HeaderMap::Drain API can use a raw pointer, defeating soundness.
Affected versions of this crate did not guard against panic within the user-provided function f (2nd parameter of fn map_array), and thus panic within f causes double drop of a single object. The flaw was corrected in the 0.4.0 release by wrapping the object vulnerable to a double drop within ManuallyDrop.
An issue was discovered in the fil-ocl crate through 2021-01-04 for Rust. From can lead to a double free.
An issue was discovered in the endian_trait crate through 2021-01-04 for Rust. A double drop can occur when a user-provided Endian impl panics.
Even if an element is popped from a queue, crossbeam would run its destructor inside the epoch-based garbage collector. This is a source of double frees. The flaw was corrected by wrapping elements inside queues in a ManuallyDrop.
Upon panic in a user-provided function f, fn mutate() and fn mutate2() drop the same object twice. Affected versions of this crate did not guard against double drop while temporarily duplicating an object's ownership with ptr::read(). Dropping the same object twice can result in memory corruption. The flaw was corrected in version 0.9.11 by fixing the code to abort upon panic.
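The hazard shared by several of the ptr::read()-based advisories above can be sketched safely: ptr::read() creates a second owner of the same object, and if both owners are live when a panic unwinds, Drop runs twice. Wrapping the duplicate in ManuallyDrop, a common fix pattern (sketched here, not any crate's exact code), guarantees at most one drop even if intervening user code panics.

```rust
use std::cell::Cell;
use std::mem::{forget, ManuallyDrop};
use std::ptr;

thread_local! { static DROPS: Cell<u32> = Cell::new(0); }

struct Tracked;
impl Drop for Tracked {
    fn drop(&mut self) {
        // Count how many times Drop actually runs.
        DROPS.with(|d| d.set(d.get() + 1));
    }
}

fn main() {
    let original = Tracked;
    // Duplicate ownership, but keep the copy inert until we commit to it.
    let duplicate = ManuallyDrop::new(unsafe { ptr::read(&original) });
    // ... user-provided code that might panic would run here; an unwind at
    // this point drops only `original`, never the ManuallyDrop copy ...
    forget(original); // transfer ownership to the duplicate
    drop(ManuallyDrop::into_inner(duplicate));
    DROPS.with(|d| assert_eq!(d.get(), 1)); // dropped exactly once
}
```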
An issue was discovered in the basic_dsp_matrix crate before 0.9.2 for Rust. When a TransformContent panic occurs, a double drop can be performed.
Affected versions of this crate did not guard against potential panics that may happen from user-provided functions T::default() and T::drop(). Panic within T::default() leads to dropping uninitialized T, when it is invoked from common::Slice::<T, H>::new(). Panic within T::drop() leads to double drop of T, when it is invoked either from common::SliceVec::<T, H>::resize_with() or common::SliceVec::<T, H>::resize(). Either case causes memory corruption in the heap memory.
An issue was discovered in the alpm-rs crate through 2020-08-20 for Rust. StrcCtx performs improper memory deallocation.
An issue was discovered in the algorithmica crate through 2021-03-07 for Rust. In the affected versions of this crate, merge_sort::merge() duplicates and drops ownership of T without guarding against a double free. As a result, simply invoking merge_sort::merge() on a Vec<T: Drop> can cause double free bugs.
The implementation of fully connected layers in TFLite is vulnerable to a division by zero error: const int batch_size = input_size / filter->dims->data[1]; An attacker can craft a model such that filter->dims->data[1] is 0.
Most implementations of convolution operators in TensorFlow are affected by a division by 0 vulnerability where an attacker can trigger a denial of service via a crash: import tensorflow as tf tf.compat.v1.disable_v2_behavior() tf.raw_ops.Conv2D( input = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32), filter = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32), strides = [1, 1, 1, 1], padding = "SAME")
An attacker can cause a floating point exception by calling inplace operations with crafted arguments that would result in a division by 0: import tensorflow as tf tf.raw_ops.InplaceSub(x=[],i=[-99,-1,-1],v=[1,1,1])
The implementation of tf.raw_ops.ResourceScatterDiv is vulnerable to a division by 0 error: import tensorflow as tf v= tf.Variable([1,2,3]) tf.raw_ops.ResourceScatterDiv( resource=v.handle, indices=[1], updates=[0])
An attacker can trigger a crash via a floating point exception in tf.raw_ops.ResourceGather: import tensorflow as tf tensor = tf.constant(value=[[]],shape=(0,1),dtype=tf.uint32) v = tf.Variable(tensor) tf.raw_ops.ResourceGather( resource=v.handle, indices=[0], dtype=tf.uint32, batch_dims=1, validate_indices=False)
TensorFlow is an end-to-end open source platform for machine learning. We have patched the issue in GitHub commit aa137691ee23f03638867151f74935f. The fix will be included in an upcoming TensorFlow release and will also be cherrypicked onto the earlier release branches that are affected and still within the supported range.
TensorFlow is an end-to-end open source platform for machine learning. There is no check that the divisor tensor does not contain zero elements. We have patched the issue in GitHub commit 1e206baedf8bef0334cca3eb92bab134ef525a28. The fix will be included in an upcoming TensorFlow release and will also be cherrypicked onto the earlier release branches that are affected and still within the supported range.
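The missing check amounts to a one-line guard before the division. A minimal sketch in Rust (function and parameter names mirror the C++ snippet quoted above and are illustrative, not TensorFlow's API):

```rust
// Validate the divisor before computing input_size / filter_dim, so a
// crafted model with a zero dimension yields an error instead of a
// process-killing division by zero / floating point exception.
pub fn batch_size(input_size: i64, filter_dim: i64) -> Result<i64, &'static str> {
    if filter_dim == 0 {
        return Err("filter dimension must be non-zero");
    }
    Ok(input_size / filter_dim)
}
```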
byte_struct packs and unpacks structures as raw bytes with packed or bit-field layouts. An issue was discovered in the byte_struct crate before 0.6.1 for Rust. There can be a drop of uninitialized memory if a certain deserialization method panics.
Affected versions of this crate unconditionally implement Send for Bucket2. This allows sending non-Send types to other threads, which can lead to data races when non-Send types like Cell<T> or Rc<T> are contained inside a Bucket2 and sent across thread boundaries. The data races can potentially lead to memory corruption (as demonstrated in the PoC from the original report issue). The flaw was corrected in commit 15b2828 by adding a …
Affected versions of this crate unconditionally implement Sync for SyncRef<T>. This definition allows data races if &T is accessible through &SyncRef. SyncRef<T> derives Clone and Debug, and the default implementations of those traits access &T by invoking T::clone() & T::fmt(). It is possible to create data races & undefined behavior by concurrently invoking SyncRef<T>::clone() or SyncRef<T>::fmt() from multiple threads with T: !Sync.
Affected versions of this crate unconditionally implemented Send & Sync for types PinSlab<T> & Unordered<T, S>. This allows sending non-Send types to other threads and concurrently accessing non-Sync types from multiple threads. This can result in a data race & memory corruption when types that provide internal mutability without synchronization are contained within PinSlab<T> or Unordered<T, S> and accessed concurrently from multiple threads. The flaw was corrected in commits 92f40b4 …
Affected versions of this crate unconditionally implemented Sync trait for TryMutex type. This allows users to put non-Send T type in TryMutex and send it to another thread, which can cause a data race. The flaw was corrected in the 0.3.0 release by adding T: Send bound for the Sync trait implementation.
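The shape of that fix, a conditional rather than unconditional unsafe impl, can be sketched with a minimal try-lock (illustrative code, not the crate's implementation):

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicBool, Ordering};

pub struct TryMutex<T> {
    locked: AtomicBool,
    value: UnsafeCell<T>, // UnsafeCell suppresses the automatic Sync impl
}

// The corrected bound: sharing the lock between threads effectively
// moves access to T between threads, so T must be Send. Writing
// `unsafe impl<T> Sync for TryMutex<T> {}` with no bound is the bug the
// advisory describes.
unsafe impl<T: Send> Sync for TryMutex<T> {}

impl<T> TryMutex<T> {
    pub fn new(value: T) -> Self {
        Self { locked: AtomicBool::new(false), value: UnsafeCell::new(value) }
    }

    // Run `f` on the value if the lock is free; None if contended.
    pub fn try_with<R>(&self, f: impl FnOnce(&mut T) -> R) -> Option<R> {
        if self.locked.swap(true, Ordering::Acquire) {
            return None;
        }
        let out = f(unsafe { &mut *self.value.get() });
        self.locked.store(false, Ordering::Release);
        Some(out)
    }
}
```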
An issue was discovered in the toolshed crate through 2020-11-15 for Rust. In CopyCell, the Send trait lacks bounds on the contained type.
tiny_future contains a light-weight implementation of Futures. Its Future type lacked bounds on its Send and Sync implementations. This allows a bug where non-thread-safe types such as Cell can be used in Futures and cause data races in concurrent programs. The flaw was corrected in commit c791919 by adding trait bounds to Future's Send and Sync.
Affected versions of this crate unconditionally implemented Send for ReadTicket & WriteTicket. This allows sending a non-Send T to other threads, enabling data races by cloning types with internal mutability and sending them to other threads (as the T of ReadTicket/WriteTicket). Such data races can cause memory corruption or other undefined behavior. The flaw was corrected in commit a986a93 by adding T: Send bounds to the Send impls of …
Affected versions of this crate unconditionally implemented Send for ReadTicket<T> & WriteTicket<T>. This allows sending a non-Send T to other threads, enabling data races by cloning types with internal mutability and sending them to other threads (as the T of ReadTicket<T>/WriteTicket<T>). Such data races can cause memory corruption or other undefined behavior. The flaw was corrected in commit a986a93 by adding T: Send bounds to the Send impls of …
An issue was discovered in the thex crate through 2020-12-08 for Rust. Thex allows cross-thread data races of non-Send types.
An issue was discovered in the slock crate through 2020-11-17 for Rust. Slock unconditionally implements Send and Sync.
Affected versions of this crate unconditionally implement Send/Sync for SyncChannel. SyncChannel doesn't provide access to &T but merely serves as a channel that consumes and returns owned T. Users can create undefined behavior in safe Rust by sending T: !Send to other threads with the SyncChannel::send/recv APIs. Using T = Arc<Cell<_>> allows creating data races (which can lead to memory corruption), and using T = MutexGuard allows unlocking a mutex …
An issue was discovered in the scottqueue crate through 2020-11-15 for Rust. There are unconditional implementations of Send and Sync for Queue. This allows (1) creating data races to a T: !Sync and (2) sending T: !Send to other threads, resulting in memory corruption or other undefined behavior.
An issue was discovered in the rusqlite crate before 0.23.0 for Rust. Memory safety can be violated via UnlockNotification.
An issue was discovered in the rusqlite crate before 0.23.0 for Rust. Memory safety can be violated via VTab / VTabCursor.
An issue was discovered in the rusqlite crate before 0.23.0 for Rust. Memory safety can be violated via an Auxdata API data race.
An issue was discovered in the rusqlite crate before 0.23.0 for Rust. Memory safety can be violated via create_module.
Affected versions of rusb did not require UsbContext to implement Send and Sync. However, through Device and DeviceHandle it is possible to use UsbContexts across threads. This issue allows non-thread safe UsbContext types to be used concurrently leading to data races and memory corruption. The issue was fixed by adding Send and Sync bounds to UsbContext.
The affected versions of rulinalg have incorrect lifetime bounds in the definitions of RowMut::raw_slice and RowMut::raw_slice_mut. They do not conform to Rust's borrowing rules and allow the user to create multiple mutable references to the same location. This may result in unexpected calculation results and data races if both references are used at the same time.
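The issue above can be sketched with an illustrative RowMut (not rulinalg's actual code). The unsound signature handed out the matrix lifetime instead of borrowing the receiver; the sound one ties the returned slice to &mut self, so the borrow checker rejects overlapping mutable references.

```rust
// Unsound shape (sketch):
//
//     pub fn raw_slice_mut(&mut self) -> &'a mut [T]  // receiver not
//                                                     // kept borrowed!
//
// With `'a` taken from the struct, the returned borrow is not tied to
// `self`, so two live &mut to the same row can be created. The sound
// version below uses the elided lifetime, i.e. a borrow of `self`.
pub struct RowMut<'a, T> {
    row: &'a mut [T],
}

impl<'a, T> RowMut<'a, T> {
    pub fn new(row: &'a mut [T]) -> Self {
        Self { row }
    }

    // Only one such slice can be live at a time: the output reborrows
    // `self`, so the compiler enforces exclusivity.
    pub fn raw_slice_mut(&mut self) -> &mut [T] {
        &mut *self.row
    }
}
```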
The affected versions of rocket contain a Clone trait implementation for LocalRequest that reuses the pointer to the inner Request object. This causes a data race in rare combinations of APIs if the original and the cloned objects are modified at the same time.
ARefss<'a, V> is a type that is assumed to contain objects that are Send + Sync. In the affected versions of this crate, Send/Sync traits are unconditionally implemented for ARefss<'a, V>. By using the ARefss::map() API, we can insert a !Send or !Sync object into ARefss<'a, V>. After that, it is possible to create a data race to the inner object of ARefss<'a, V>, which can lead to undefined behavior …
Affected versions of this crate unconditionally implement Send/Sync for RcuCell<T>. This allows users to send T: !Send to other threads (while T enclosed within RcuCell<T>), and allows users to concurrently access T: !Sync by using the APIs of RcuCell<T> that provide access to &T. This can result in memory corruption caused by data races.
In the affected versions of this crate, LockWeak unconditionally implemented Send with no trait bounds on T. LockWeak doesn't own T and only provides &T. This allows concurrent access to a non-Sync T, which can cause undefined behavior like data races.
Affected versions of the noise_search crate unconditionally implement Send/Sync for MvccRwLock. This can lead to data races when types that are either !Send or !Sync (e.g. Rc<T>, Arc<Cell<_>>) are contained inside MvccRwLock and sent across thread boundaries. The data races can potentially lead to memory corruption (as demonstrated in the PoC from the original report issue). Also, safe APIs of MvccRwLock allow aliasing violations by allowing &T and LockResult<MutexGuard<Box<T>>> to …
Affected versions of this crate unconditionally implemented Send for types used in queue implementations (InnerSend<RW, T>, InnerRecv<RW, T>, FutInnerSend<RW, T>, FutInnerRecv<RW, T>). This allows users to send non-Send types to other threads, which can lead to data race bugs or other undefined behavior. The flaw was corrected in v0.1.7 by adding a T: Send bound to the Send impls of the four data types listed above.
Affected versions of this crate unconditionally implemented Send for types used in queue implementations (InnerSend<RW, T>, InnerRecv<RW, T>, FutInnerSend<RW, T>, FutInnerRecv<RW, T>). This allows users to send non-Send types to other threads, which can lead to data race bugs or other undefined behavior.
Affected versions of multiqueue unconditionally implemented Send for types used in queue implementations (InnerSend<RW, T>, InnerRecv<RW, T>, FutInnerSend<RW, T>, FutInnerRecv<RW, T>). This allows users to send non-Send types to other threads, which can lead to data race bugs or other undefined behavior.
Shared data structure in model crate implements Send and Sync traits regardless of the inner type. This allows safe Rust code to trigger a data race, which is undefined behavior in Rust. Users are advised to treat Shared as an unsafe type. It should not be used outside of the testing context, and care must be taken so that the testing code does not have a data race besides a …
The ImmediateIO and TransactionalIO types implement Sync for all contained Expander<EI> types regardless of whether the Expander itself is safe to use across threads. As the IO types allow retrieving the Expander, this can lead to non-thread-safe types being sent across threads as part of the Expander, leading to data races.
Affected versions of this crate unconditionally implemented Sync and Send traits for MPMCConsumer and MPMCProducer types. This allows users to send types that do not implement Send trait across thread boundaries, which can cause a data race. The flaw was corrected in the 2.0.1 release by adding T: Send bound to affected Sync/Send trait implementations.
An issue was discovered in the lock_api crate before 0.4.2 for Rust. A data race can occur because of MappedRwLockWriteGuard unsoundness.
An issue was discovered in the lock_api crate before 0.4.2 for Rust. A data race can occur because of MappedRwLockReadGuard unsoundness.
An issue was discovered in the lock_api crate before 0.4.2 for Rust. A data race can occur because of MappedMutexGuard unsoundness.
An issue was discovered in the lock_api crate before 0.4.2 for Rust. A data race can occur because of RwLockWriteGuard unsoundness.
An issue was discovered in the lock_api crate before 0.4.2 for Rust. A data race can occur because of RwLockReadGuard unsoundness.
Affected versions of this crate implements Send for Decoder<R> for any R: Read. This allows Decoder<R> to contain R: !Send and carry (move) it to another thread. This can result in undefined behavior such as memory corruption from data race on R, or dropping R = MutexGuard<_> from a thread that didn't lock the mutex. The flaw was corrected in commit a34d6e1 by adding trait bound R: Send to the …
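The corrected bound can be sketched with a minimal reader wrapper (illustrative; in the real crate it is a raw FFI decompression context that suppresses the automatic Send impl):

```rust
use std::io::Read;

pub struct Decoder<R: Read> {
    inner: R,
    ctx: *mut u8, // FFI-style context pointer: makes Decoder !Send by default
}

// The fix, in spirit: Send only when R: Send, instead of the unsound
// `unsafe impl<R: Read> Send for Decoder<R> {}` for any R.
unsafe impl<R: Read + Send> Send for Decoder<R> {}

impl<R: Read> Decoder<R> {
    pub fn new(inner: R) -> Self {
        Self { inner, ctx: std::ptr::null_mut() }
    }

    // Drain the inner reader; a real decoder would feed bytes through
    // the native context held in `ctx`.
    pub fn read_all(&mut self) -> std::io::Result<Vec<u8>> {
        let mut buf = Vec::new();
        self.inner.read_to_end(&mut buf)?;
        let _ = self.ctx;
        Ok(buf)
    }
}
```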
lexer is a plugin-based lexical reader. Affected versions of this crate implement Sync for ReaderResult<T, E> with the trait bounds T: Send, E: Send. Since matching on the public enum ReaderResult<T, E> provides access to &T and &E, this allows a data race on a non-Sync type T or E, which can result in memory corruption when multiple threads concurrently access &T or &E. The suggested fix for the bug is to change …
An issue was discovered in the lever crate before 0.1.1 for Rust. AtomicBox implements the Send and Sync traits for all types T. This allows non-Send types such as Rc and non-Sync types such as Cell to be used across thread boundaries which can trigger undefined behavior and memory corruption.
Affected versions of this crate implemented Sync for LateStatic with T: Send, making it possible to create a data race on a type T: Send + !Sync (e.g. Cell). This can result in memory corruption or other kinds of undefined behavior. The flaw was corrected in commit 11f396c by replacing the T: Send bound with a T: Sync bound in the Sync impl for LateStatic.
An issue was discovered in the im crate prior to 15.1.0 for Rust. Because TreeFocus does not have bounds on its Send trait or Sync trait, a data race can occur.
Affected versions of hashconsing implement Send/Sync for its HConsed type without restricting it to Send and Sync inner types. This allows non-Sync types such as Cell to be shared across threads, leading to undefined behavior and memory corruption in concurrent programs.
In the affected versions of this crate, ImageChunkMut<'_, T> unconditionally implements Send and Sync, allowing to create data races. This can result in a memory corruption or undefined behavior when non thread-safe types are moved and referenced across thread boundaries. The flaw was corrected in commit e7fb2f5 by adding T: Send bound to the Send impl and adding T: Sync bound to the Sync impl.
The Generator type is an iterable which uses a generator function that yields values. In affected versions of the crate, the provided function yielding values had no Send bounds despite the Generator itself implementing Send. The generator function lacking a Send bound means that types that are dangerous to send across threads such as Rc could be sent as part of a generator, potentially leading to data races.
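A sketch of the sound version (an illustrative type, not the crate's API): the yielding closure is part of the generator's state, so a Send generator must also require the closure to be Send.

```rust
// Minimal generator: before the fix, the equivalent of
// `unsafe impl<F> Send for Generator<F> {}` let an Rc-capturing closure
// cross threads. Requiring F: Send here makes the auto-derived Send
// impl sound.
pub struct Generator<F: FnMut() -> Option<i32> + Send> {
    f: F,
}

impl<F: FnMut() -> Option<i32> + Send> Generator<F> {
    pub fn new(f: F) -> Self {
        Self { f }
    }
}

impl<F: FnMut() -> Option<i32> + Send> Iterator for Generator<F> {
    type Item = i32;
    fn next(&mut self) -> Option<i32> {
        (self.f)() // yield the next value from the user-provided closure
    }
}
```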
GenericMutexGuard was given the Sync auto trait as long as T is Send due to its contained members. However, since the guard is supposed to represent an acquired lock and allows concurrent access to the underlying data from different threads, it should only be Sync when the underlying data is. This is a soundness issue and allows data races, potentially leading to crashes and segfaults from safe Rust code. The …
An issue was discovered in the dces crate through 2020-12-09 for Rust. The World type is marked as Send but lacks bounds on its EntityStore and ComponentStore. This allows non-thread safe EntityStore and ComponentStores to be sent across threads and cause data races.
Affected versions of this crate unconditionally implement Send/Sync for ConVec<T>. This allows users to insert a T that is not Send or not Sync, creating data races by using non-Send types like Arc<Cell<_>> or Rc<_> as T in ConVec<T>. It is also possible to create data races by using types like Cell<_> or RefCell<_> as T (types that are Send but not Sync). Such data races can …
Affected versions of conquer-once implements Sync for its OnceCell type without restricting it to Sendable types. This allows non-Send but Sync types such as MutexGuard to be sent across threads leading to undefined behavior and memory corruption in concurrent programs. The issue was fixed by adding a Send constraint to OnceCell.
An issue was discovered in the concread crate before 0.2.6 for Rust. Attackers can cause an ARCache<K,V> data race by sending types that do not implement Send/Sync.
An issue was discovered in the cgc crate through 2020-12-10 for Rust. Ptr implements Send and Sync for all types.
An issue was discovered in the cache crate through 2020-11-24 for Rust. Affected versions of this crate unconditionally implement Send/Sync for Cache<K>. This allows users to insert K that is not Send or not Sync. This allows users to create data races by using non-Send types like Arc<Cell<T>> or Rc<T> as K in Cache<K>. It is also possible to create data races by using types like Cell<T> or RefCell<T> (types …
An issue was discovered in the bunch crate through 2020-11-12 for Rust. Affected versions of this crate unconditionally implement Send/Sync for Bunch<T>. This allows users to insert T: !Sync into Bunch<T>. It is possible to create a data race on a T: !Sync by invoking the Bunch::get() API (which returns &T) from multiple threads. It is also possible to send T: !Send to other threads by inserting T inside Bunch<T> …
An issue was discovered in the beef crate before 0.5.0 for Rust. Affected versions of this crate did not have a T: Sync bound in the Send impl for Cow<'_, T, U>. This allows users to create data races by making Cow contain types that are (Send && !Sync) like Cell<_> or RefCell<_>. Such data races can lead to memory corruption. The flaw was corrected in commit d1c7658 by adding …
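Why the Send impl needs T: Sync: a Cow in the borrowed state carries a &T, and sending the Cow to another thread sends that shared reference with it. An illustrative sketch (not beef's actual compact layout):

```rust
// A borrowed-or-owned type: in the Borrowed state it holds a shared
// reference, so moving the Cow across threads shares T across threads.
pub enum Cow<'a, T> {
    Borrowed(&'a T),
    Owned(T),
}

// The corrected bound, in the spirit of commit d1c7658: T must be Sync
// (a shared reference may cross threads) as well as Send (the owned
// value may cross threads).
unsafe impl<'a, T: Send + Sync> Send for Cow<'a, T> {}

impl<'a, T: Clone> Cow<'a, T> {
    // Resolve to an owned value, cloning only in the borrowed case.
    pub fn into_owned(self) -> T {
        match self {
            Cow::Borrowed(b) => b.clone(),
            Cow::Owned(o) => o,
        }
    }
}
```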
The atom crate contains a security issue revolving around its implementation of the Send trait. It incorrectly allows any arbitrary type to be sent across threads potentially leading to use-after-free issues through memory races.
An issue was discovered in the async-coap crate through 2020-12-08 for Rust. Affected versions of this crate implement Send/Sync for ArcGuard<RC, T> with no trait bounds on RC. This allows users to send RC: !Send to other threads and to concurrently access RC: !Sync from multiple threads. This can result in memory corruption from a data race or other undefined behavior caused by sending T: !Send to other …
The appendix crate implements a key-value mapping data structure called Index<K, V> that is stored on disk. The crate allows for any type to inhabit the generic K and V type parameters and implements Send and Sync for them unconditionally. Using a type that is not marked as Send or Sync with Index can allow it to be used across multiple threads leading to data races. Additionally using reference types …
An issue was discovered in the aovec crate through 2020-12-10 for Rust. Because Aovec does not have bounds on its Send trait or Sync trait, a data race and memory corruption can occur.
In the affected versions of this crate, Demuxer unconditionally implemented Send with no trait bounds on T. This allows sending a non-Send type T across thread boundaries, which can cause undefined behavior like unlocking a mutex from a thread that didn't lock the mutex, or memory corruption from data race. The flaw was corrected in commit 0562cbf by adding a T: Send bound to the Send impl for Demuxer.