google-ai-edge / LiteRT

LiteRT is the new name for TensorFlow Lite (TFLite). While the name is new, it's still the same trusted, high-performance runtime for on-device AI, now with an expanded vision.
https://ai.google.dev/edge/litert
Apache License 2.0

FPE in DepthwiseConv2D #120

Open gaikwadrahul8 opened 1 day ago

gaikwadrahul8 commented 1 day ago

Issue type

Bug

Have you reproduced the bug with TensorFlow Nightly?

Yes

Source

source

TensorFlow version

tf 2.14.0

Custom code

Yes

OS platform and distribution

Ubuntu 18.04.6

Mobile device

No response

Python version

Python 3.8.3

Bazel version

bazel 5.3.0

GCC/compiler version

gcc 7.5.0

CUDA/cuDNN version

No response

GPU model and memory

No response

Current behavior?

Construct a malicious model for the DepthwiseConv2D operator with stride_width set to a value above 0xffff whose low 16 bits are zero (e.g. 0x10000). When the DepthwiseParams structure is initialized in depthwise_conv.cc, the value is silently truncated and op_params.stride_width becomes 0: TfLiteDepthwiseConvParams::stride_width is a 4-byte int, while DepthwiseParams::stride_width is a 2-byte int16.

// depthwise_conv.cc
template <KernelType kernel_type>
TfLiteStatus EvalFloat(TfLiteContext* context, TfLiteNode* node,
                       TfLiteDepthwiseConvParams* params, OpData* data,
                       const TfLiteTensor* input, const TfLiteTensor* filter,
                       const TfLiteTensor* bias, TfLiteTensor* output) {
  float output_activation_min, output_activation_max;
  CalculateActivationRange(params->activation, &output_activation_min,
                           &output_activation_max);

  DepthwiseParams op_params;
  op_params.padding_type = PaddingType::kSame;
  op_params.padding_values.width = data->padding.width;
  op_params.padding_values.height = data->padding.height;
  op_params.stride_width = params->stride_width;  // <== int (4 bytes) truncated to int16 (2 bytes)
  op_params.stride_height = params->stride_height;
  op_params.dilation_width_factor = params->dilation_width_factor;
  op_params.dilation_height_factor = params->dilation_height_factor;
  op_params.float_activation_min = output_activation_min;
  op_params.float_activation_max = output_activation_max;
  TF_LITE_ENSURE_STATUS(ComputeDepthMultiplier(context, input, filter,
                                               &op_params.depth_multiplier));
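
As a standalone illustration (a minimal sketch, not LiteRT code), the implicit int32-to-int16 narrowing keeps only the low 16 bits of the stride, so 0x10000 becomes 0:

// truncation_demo.cc -- hypothetical standalone example
#include <cstdint>
#include <cstdio>

int main() {
  int32_t stride_from_model = 0x10000;  // attacker-controlled flatbuffer value
  int16_t stride_in_params =
      static_cast<int16_t>(stride_from_model);  // keeps only the low 16 bits
  std::printf("%d\n", stride_in_params);        // prints 0
  return 0;
}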

The truncated stride can therefore be 0, which later causes an integer division by zero:

// depthwiseconv_float.h
inline void FloatDepthwiseConvAccumRowGeneric(
    int stride, int dilation_factor, int input_depth, int input_width,
    const float* input_data, int pad_width, int depth_multiplier,
    int filter_width, const float* filter_data, int out_x_buffer_start,
    int out_x_buffer_end, int output_depth, float* acc_buffer) {
  ruy::profiler::ScopeLabel label("DepthwiseConvAccumRowGeneric (slow)");
  const float* filter_base_ptr = filter_data;
  for (int filter_x = 0; filter_x < filter_width; ++filter_x) {
    const int out_x_loop_start = std::max(
        out_x_buffer_start,
        (pad_width - dilation_factor * filter_x + stride - 1) / stride);   // FPE when stride == 0
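
One way to harden this would be to validate the raw 32-bit strides before they are narrowed into DepthwiseParams. A minimal sketch only (the helper name ValidateDepthwiseStrides is hypothetical, and this is not the actual upstream fix):

// hypothetical validation helper, e.g. called from Prepare in depthwise_conv.cc
#include <cstdint>
#include <limits>

TfLiteStatus ValidateDepthwiseStrides(TfLiteContext* context,
                                      const TfLiteDepthwiseConvParams* params) {
  // Reject strides that are non-positive or would not survive the
  // int -> int16 narrowing into DepthwiseParams.
  TF_LITE_ENSURE(context, params->stride_width > 0);
  TF_LITE_ENSURE(context, params->stride_height > 0);
  TF_LITE_ENSURE(context, params->stride_width <=
                              std::numeric_limits<int16_t>::max());
  TF_LITE_ENSURE(context, params->stride_height <=
                              std::numeric_limits<int16_t>::max());
  return kTfLiteOk;
}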

DepthwiseConv2D_FPE.zip

Standalone code to reproduce the issue

When I validate the PoC with the benchmark tool, the TensorFlow Lite inference process crashes with a floating point exception, resulting in a denial of service (core dump):

❯ ./benchmark_model --graph=../poc/DepthwiseConv2D_FPE.tflite
INFO: STARTING!
INFO: Log parameter values verbosely: [0]
INFO: Graph: [../poc/DepthwiseConv2D_FPE.tflite]
INFO: Loaded model ../poc/DepthwiseConv2D_FPE.tflite
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
INFO: The input model file size (MB): 0.000772
INFO: Initialized session in 14.64ms.
INFO: Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.
[1]    9351 floating point exception (core dumped)  ./benchmark_model --graph=../poc/DepthwiseConv2D_FPE.tflite
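
The crash can also be reproduced directly with the TFLite C++ interpreter API, without benchmark_model. A minimal sketch, assuming the attached PoC model is available in the working directory:

// repro_fpe.cc -- hypothetical standalone reproducer
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  auto model =
      tflite::FlatBufferModel::BuildFromFile("DepthwiseConv2D_FPE.tflite");
  if (!model) return 1;
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter || interpreter->AllocateTensors() != kTfLiteOk) return 1;
  interpreter->Invoke();  // SIGFPE: integer division by zero in the kernel
  return 0;
}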

Relevant log output

No response

gaikwadrahul8 commented 14 hours ago

This issue, originally reported by @SiriusHsh, has been moved to this dedicated repository for LiteRT to enhance issue tracking and prioritization. To ensure continuity, we have created this new issue on your behalf.

We appreciate your understanding and look forward to your continued involvement.