hunglc007 / tensorflow-yolov4-tflite

YOLOv4, YOLOv4-tiny, YOLOv3, and YOLOv3-tiny implemented in TensorFlow 2.0 and Android. Convert YOLOv4 .weights to TensorFlow, TensorRT, and TFLite.
https://github.com/hunglc007/tensorflow-yolov4-tflite
MIT License

save and load model on GPU #419

Open · mops1112 opened 2 years ago

mops1112 commented 2 years ago

Hi. TF 2.3.0 works on both CPU and GPU, but detection with yolov4-tiny-416 is slow, so I moved to TF 2.7 instead. It works on CPU, but GPU does not: it detects only on the first frame, and subsequent frames produce no detections. I suspect the problem is in save_model.py.

I load the model like this:

```python
import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

# Raw string so the backslashes are not treated as escape sequences on Windows
yolo4_weight = r'.\checkpoints\yolov4-tiny-416'
saved_model_loaded = tf.saved_model.load(yolo4_weight, tags=[tag_constants.SERVING])
infer = saved_model_loaded.signatures['serving_default']
```

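For reference, a signature like this is typically invoked once per frame. A minimal sketch of such a loop, continuing from the snippet above (so `tf` and `infer` are in scope); the preprocessing here is assumed for illustration, not taken from the repo's detect_video.py:

```python
import cv2
import numpy as np

# Hypothetical per-frame inference loop; the repo's detect_video.py differs
# in preprocessing and post-processing details such as NMS.
input_size = 416
cap = cv2.VideoCapture('video.mp4')  # placeholder input file
while True:
    ret, frame = cap.read()
    if not ret:
        break
    image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, (input_size, input_size)) / 255.0
    batch = tf.constant(image[np.newaxis, ...].astype(np.float32))
    pred = infer(batch)  # dict of output tensors; per this report, valid only for the first frame on GPU
cap.release()
```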
PuneethBC commented 2 years ago

Same issue here. Any solutions?

PuneethBC commented 2 years ago

I found a solution. If you run save_model.py on a CPU-only machine, the generated model fails to run on video in GPU mode: it gives valid output only for the first frame, so you would essentially have to reload the model every frame. If save_model.py is run on a GPU machine with CUDA devices made visible, the exported model runs on both CPU and GPU without any issues. I have tested this on both Windows and Ubuntu, and it works.
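A quick way to verify the export environment before running save_model.py — a minimal check, assuming the diagnosis above is correct:

```python
import tensorflow as tf

# If this list is empty, save_model.py will export in CPU-only mode and,
# per the report above, the resulting model may only produce valid output
# for the first frame when run on GPU.
gpus = tf.config.list_physical_devices('GPU')
print('Visible GPUs:', gpus)
assert gpus, 'No GPU visible to TensorFlow; export on a machine with CUDA available.'
```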

larry3425527 commented 2 years ago

PuneethBC

How did you do it? I added os.environ["CUDA_VISIBLE_DEVICES"] = '0' in save_model.py, but it doesn't work.
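One possible cause, assuming the variable was set after TensorFlow was already imported: CUDA_VISIBLE_DEVICES is only read when TensorFlow first initializes CUDA, so it must be set before the import. A minimal sketch:

```python
# Set CUDA_VISIBLE_DEVICES before importing tensorflow (or any module that
# imports it); it is read when TensorFlow first initializes CUDA.
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

import tensorflow as tf  # imported only after the variable is set
print(tf.config.list_physical_devices('GPU'))  # should now list GPU:0
```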

lifgren commented 2 years ago

That did not work for me. The second inference is always invalid.

Is there a solution for this? I tried TensorFlow 2.4 through 2.9, no joy.

lifgren commented 2 years ago

According to https://githubmemory.com/repo/google/automl/issues/896, there is a known issue with TF 2.2.0 that was fixed in 2.3.1.

This worked for me: `pip install tensorflow-gpu==2.3.1`
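To confirm the pinned environment after installing, a quick check:

```python
import tensorflow as tf

# Verify the pinned version and that a GPU is visible.
print(tf.__version__)                          # expected: 2.3.1
print(tf.config.list_physical_devices('GPU'))  # expected: a non-empty list
```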