tensorflow / tensorflow

An Open Source Machine Learning Framework for Everyone
https://tensorflow.org
Apache License 2.0

null pointer dereference in onehot #62166

Open SiriusHsh opened 11 months ago

SiriusHsh commented 11 months ago

Issue type

Bug

Have you reproduced the bug with TensorFlow Nightly?

Yes

Source

source

TensorFlow version

tf 2.14.0

Custom code

Yes

OS platform and distribution

Ubuntu 18.04.6

Mobile device

No response

Python version

Python 3.8.3

Bazel version

bazel 5.3.0

GCC/compiler version

gcc 7.5.0

CUDA/cuDNN version

No response

GPU model and memory

No response

Current behavior?

A maliciously constructed model whose ONE_HOT operator declares no output tensor leaves `op_context.output` as a null pointer, which the `Prepare` function then dereferences.

// one_hot.cc
TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
  TF_LITE_ENSURE_EQ(context, NumInputs(node), 4);
  TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);

  OneHotContext op_context{context, node};
  switch (op_context.dtype) {
    // TODO(b/111744875): Support uint8 and quantization.
    case kTfLiteFloat32:
    case kTfLiteInt16:
    case kTfLiteInt32:
    case kTfLiteInt64:
    case kTfLiteInt8:
    case kTfLiteUInt8:
    case kTfLiteBool:
      op_context.output->type = op_context.dtype;  // crashes here: op_context.output is nullptr
      break;
    // ... (remainder of Prepare elided)

onehot.zip

Standalone code to reproduce the issue

I use the benchmark tool built according to [this official guide](https://www.tensorflow.org/lite/guide/build_cmake#step_1_install_cmake_tool), as follows:
1. git clone https://github.com/tensorflow/tensorflow.git tensorflow_src
2. mkdir tflite_build && cd tflite_build
3. cmake ../tensorflow_src/tensorflow/lite
4. cmake --build . -j
5. cmake --build . -j -t benchmark_model

The `benchmark_model` binary is produced in the tools directory.

When I run the PoC through the benchmark tool, the TensorFlow Lite inference process crashes with a core dump (a denial of service):

❯ ./benchmark_model --graph=../poc/onehot.tflite
INFO: STARTING!
INFO: Log parameter values verbosely: [0]
INFO: Graph: [../poc/onehot.tflite]
INFO: Loaded model ../poc/onehot.tflite
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
[1]    3686 segmentation fault (core dumped)  ./benchmark_model --graph=../poc/onehot.tflite

Relevant log output

No response

sushreebarsa commented 10 months ago

@SiriusHsh Thank you for raising this issue! Could you please check the following: a) using the latest TF version, b) trying the benchmark tools from other TFLite versions, and c) switching to GPU instead of CPU. Please let us know if that helps. Thank you!

github-actions[bot] commented 10 months ago

This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.

github-actions[bot] commented 10 months ago

This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.


pkgoogle commented 9 months ago

I am able to reproduce:

./benchmark_model --graph=onehot.tflite
INFO: STARTING!
INFO: Log parameter values verbosely: [0]
INFO: Graph: [onehot.tflite]
INFO: Loaded model onehot.tflite
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Segmentation fault

Hi @alankelly, can you please take a look? Thanks.