google-ai-edge / LiteRT

LiteRT is the new name for TensorFlow Lite (TFLite). While the name is new, it's still the same trusted, high-performance runtime for on-device AI, now with an expanded vision.
https://ai.google.dev/edge/litert
Apache License 2.0

null pointer dereference in reduce_prod #108

Open gaikwadrahul8 opened 5 days ago

gaikwadrahul8 commented 5 days ago

Issue type

Bug

Have you reproduced the bug with TensorFlow Nightly?

Yes

Source

source

TensorFlow version

tf 2.14.0

Custom code

Yes

OS platform and distribution

Ubuntu 18.04.6

Mobile device

No response

Python version

Python 3.8.3

Bazel version

bazel 5.3.0

GCC/compiler version

gcc 7.5.0

CUDA/cuDNN version

No response

GPU model and memory

No response

Current behavior?

A maliciously constructed model containing a REDUCE_PROD operator leaves `op_context.axis` unset (nullptr), causing a null pointer dereference in the `PrepareSimple` function in reduce.cc.

```cpp
// reduce.cc
TfLiteStatus PrepareSimple(TfLiteContext* context, TfLiteNode* node) {
  TF_LITE_ENSURE_EQ(context, NumInputs(node), 2);
  TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);

  OpContext op_context(context, node);
  TF_LITE_ENSURE_TYPES_EQ(context, op_context.axis->type, kTfLiteInt32);  // op_context.axis is nullptr
  // ...
}
```

reduce_prod.zip

Standalone code to reproduce the issue

I use the benchmark tool built according to [this official guide](https://www.tensorflow.org/lite/guide/build_cmake#step_1_install_cmake_tool), as follows:
1. git clone https://github.com/tensorflow/tensorflow.git tensorflow_src
2. mkdir tflite_build && cd tflite_build
3. cmake ../tensorflow_src/tensorflow/lite
4. cmake --build . -j
5. cmake --build . -j -t benchmark_model

The resulting `benchmark_model` binary is placed in the tools directory of the build tree.

Running the benchmark tool on the PoC model crashes the TensorFlow Lite inference process with a segmentation fault (core dump), i.e. a denial of service (DoS):

```
❯ ./benchmark_model --graph=../poc/reduce_prod.tflite
INFO: STARTING!
INFO: Log parameter values verbosely: [0]
INFO: Graph: [../poc/reduce_prod.tflite]
INFO: Loaded model ../poc/reduce_prod.tflite
ERROR: Invalid tensor index 10 in inputs. The subgraph has 3 tensors
ERROR: Invalid tensor index 24 in outputs. The subgraph has 3 tensors
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
[1]    9785 segmentation fault (core dumped)  ./benchmark_model --graph=../poc/reduce_prod.tflite
```

Relevant log output

No response

gaikwadrahul8 commented 4 days ago

This issue, originally reported by @SiriusHsh, has been moved to this dedicated LiteRT repository to improve issue tracking and prioritization. To ensure continuity, we have created this new issue on your behalf.

We appreciate your understanding and look forward to your continued involvement.