Open SiriusHsh opened 11 months ago
@SiriusHsh Thank you for raising this issue! Could you please check the following: a. use the latest TF version, b. try the benchmark tools from other TFLite versions, and c. switch to GPU instead of CPU. Please let us know if any of this helps. Thank you!
This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.
This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.
I am able to reproduce:
./benchmark_model --graph=onehot.tflite
INFO: STARTING!
INFO: Log parameter values verbosely: [0]
INFO: Graph: [onehot.tflite]
INFO: Loaded model onehot.tflite
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Segmentation fault
Hi @alankelly, can you please take a look? Thanks.
Issue type
Bug
Have you reproduced the bug with TensorFlow Nightly?
Yes
Source
source
TensorFlow version
tf 2.14.0
Custom code
Yes
OS platform and distribution
Ubuntu 18.04.6
Mobile device
No response
Python version
Python 3.8.3
Bazel version
bazel 5.3.0
GCC/compiler version
gcc 7.5.0
CUDA/cuDNN version
No response
GPU model and memory
No response
Current behavior?
A maliciously constructed onehot operator model leads to op_context.output being empty, causing a null pointer dereference in the Prepare function.
onehot.zip
Standalone code to reproduce the issue
Relevant log output
No response