hunglc007 / tensorflow-yolov4-tflite

YOLOv4, YOLOv4-tiny, YOLOv3, YOLOv3-tiny implemented in TensorFlow 2.0 and Android. Convert YOLO v4 .weights to TensorFlow, TensorRT and TFLite
https://github.com/hunglc007/tensorflow-yolov4-tflite
MIT License

Aborted - convert_tflite.py #166

Closed: Teresito closed this issue 4 years ago

Teresito commented 4 years ago

Hi,

I am currently trying to convert the yolov4-tiny-416 .tf model into a .tflite model via convert_tflite.py. However, I encountered a thread abort when doing so.

Here is the relevant dump.

```
(tensor) pi@raspberrypi:~/tensorflow-yolov4-tflite $ python convert_tflite.py --weights ./checkpoints/yolov4-tiny-416 --output ./checkpoints/yolov4-tiny-416.tflite --quantize_mode float16

2020-07-24 13:45:28.947973: I tensorflow/core/grappler/devices.cc:60] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA support)
2020-07-24 13:45:28.948431: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-07-24 13:45:29.533579: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: graph_to_optimize
2020-07-24 13:45:29.533676: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799] function_optimizer: Graph size after: 672 nodes (570), 1058 edges (956), time = 92.4790039ms.
2020-07-24 13:45:29.534008: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799] function_optimizer: function_optimizer did nothing. time = 1.40700006ms.
2020-07-24 13:45:36.530128: I tensorflow/core/grappler/devices.cc:60] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA support)
2020-07-24 13:45:36.530610: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-07-24 13:45:39.305134: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: graph_to_optimize
2020-07-24 13:45:39.305240: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799] constant_folding: Graph size after: 516 nodes (-156), 785 edges (-273), time = 1602.70508ms.
2020-07-24 13:45:39.305507: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799] constant_folding: Graph size after: 516 nodes (0), 785 edges (0), time = 329.491028ms.
I0724 13:46:56.436326 3070204624 lite.py:509] Using experimental converter: If you encountered a problem please file a bug. You can opt-out by setting experimental_new_converter=False
I0724 13:47:11.007589 3070204624 convert_tflite.py:48] model saved to: ./checkpoints/yolov4-tiny-416.tflite
I0724 13:47:11.084644 3070204624 convert_tflite.py:53] tflite model loaded
[{'name': 'input_1', 'index': 0, 'shape': array([ 1, 416, 416, 3]), 'shape_signature': array([ 1, 416, 416, 3]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
[{'name': 'Identity', 'index': 201, 'shape': array([], dtype=int32), 'shape_signature': array([], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
Fatal Python error: Aborted

Current thread 0xb6ff9ad0 (most recent call first):
  File "/home/pi/.virtualenvs/tensor/lib/python3.7/site-packages/tensorflow/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 113 in Invoke
  File "/home/pi/.virtualenvs/tensor/lib/python3.7/site-packages/tensorflow/lite/python/interpreter.py", line 511 in invoke
  File "convert_tflite.py", line 65 in demo
  File "convert_tflite.py", line 72 in main
  File "/home/pi/.virtualenvs/tensor/lib/python3.7/site-packages/absl/app.py", line 250 in _run_main
  File "/home/pi/.virtualenvs/tensor/lib/python3.7/site-packages/absl/app.py", line 299 in run
  File "convert_tflite.py", line 76 in <module>
Aborted
```

Relevant information: TensorFlow==2.2.0, OpenCV==4.4.0

System: Raspbian Buster on a Raspberry Pi 4 Model B (4GB)

Side note: I am only able to use TensorFlow 2.2.0 due to limited ARM support.

My future steps:

Any help is appreciated. Many thanks.

raryanpur commented 4 years ago

Same issue here.

Teresito commented 4 years ago

I managed to solve this issue by running the conversion commands in the proper order when converting the .weights file. The thread abort appears to have originated from a missing file or a corrupted conversion left over from a previous run.
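For context, the second stage of that pipeline boils down to roughly the sketch below. It assumes the SavedModel exported by save_model.py under ./checkpoints is present and fresh (see the repo README for its exact flags); paths mirror my float16 run above, and this is not the exact convert_tflite.py code.

```python
# Rough sketch of the float16 TFLite conversion step, assuming the SavedModel
# at ./checkpoints/yolov4-tiny-416 was already exported by save_model.py.
# Not the repo's exact convert_tflite.py code; paths are illustrative.
import tensorflow as tf

saved_model_dir = "./checkpoints/yolov4-tiny-416"

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]  # float16 post-training quantization

tflite_model = converter.convert()
with open("./checkpoints/yolov4-tiny-416.tflite", "wb") as f:
    f.write(tflite_model)
```

If the SavedModel is stale or half-written, the converter may still emit a .tflite file, and the failure only shows up later when interpreter.invoke() aborts, which is what I was seeing.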

I've forked this repository; if you're interested, feel free to have a look. I will most likely be tailoring the fork for the Raspberry Pi 4.

Thank you to the developer of this repo if you're reading this 👍 😄

raryanpur commented 4 years ago

@Teresito nice, me too. Have you been able to get the int8 quantized model conversion to work? That's still broken for me.

Julius-ZCJ commented 4 years ago

Same issue for me too. Have you fixed this bug? @Teresito

Teresito commented 4 years ago

@Julius-ZCJ Hi, which terminal commands have you tried? I managed to resolve it by doing a clean, fresh conversion (by deleting the checkpoints folder generated previously).

@raryanpur Unfortunately I haven't tried to get the int8 quantized model to work. I'll let you know if I do, but it won't be soon. Good luck.
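If it helps in the meantime, the generic TF 2.x route to a full-integer model would presumably look something like the sketch below. This is not this repo's convert_tflite.py int8 path, and representative_images() is a placeholder you would replace with real preprocessed frames.

```python
# Hedged sketch of standard TF Lite full-integer (int8) post-training
# quantization, not the repo's own int8 code path.
# representative_images() is a stand-in: yield a few hundred frames
# preprocessed exactly as they will be at inference time.
import numpy as np
import tensorflow as tf

def representative_images():
    for _ in range(100):
        # Replace the random tensor with real 416x416 RGB images scaled like the model expects.
        yield [np.random.rand(1, 416, 416, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("./checkpoints/yolov4-tiny-416")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_images
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

tflite_int8 = converter.convert()
with open("./checkpoints/yolov4-tiny-416-int8.tflite", "wb") as f:
    f.write(tflite_int8)
```

Whether TF 2.2 on the Pi handles every YOLOv4-tiny op in pure int8 is a separate question, so treat this strictly as a starting point.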

nanshenwei commented 4 years ago

@Teresito @raryanpur I'm still studying, and I want to get the int8 quantized model, which is important to me because I want to deploy the model to a K210. If you have a solution, please let me know. Thanks so much!!

KuoEuran commented 3 years ago

@nanshenwei Did you get the model onto the K210? I have the int8 quantized model, but I can't deploy it to the K210.

nanshenwei commented 3 years ago

I didn't. I gave up, and I haven't been engaged in related work for some time. Sorry.

KuoEuran commented 3 years ago

Okay, thank you for replying.
