Open mops1112 opened 2 years ago
Same issue here. Any solutions?
I found a solution. If you run save_model.py on a CPU-only machine, the generated model fails when run on a video in GPU mode: it gives valid output only for the first frame, so you would essentially have to reload the model every frame. If save_model.py is run on a GPU machine with CUDA devices made visible, the exported model runs on both CPU and GPU without issues. I have tested this on both Windows and Ubuntu and it works.
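For anyone stuck on an older export, the reload-every-frame workaround described above could be sketched like this. This is a hypothetical helper, not the repo's actual code: the model directory, input preprocessing, and use of the standard `serving_default` signature are all assumptions.

```python
def detect_with_reload(frames, model_dir="./checkpoints/yolov4-tiny-416"):
    """Run detection on each frame, reloading the SavedModel every time.

    Slow, but per the thread it avoids the bug where only the first
    frame produces valid detections on GPU.
    """
    import tensorflow as tf  # lazy import keeps the sketch self-contained

    results = []
    for frame in frames:
        # Reload the model for every frame (the workaround itself).
        model = tf.saved_model.load(model_dir)
        infer = model.signatures["serving_default"]
        # Add a batch dimension; real preprocessing (resize, normalize)
        # depends on how the model was exported.
        batch = tf.constant(frame[None, ...], dtype=tf.float32)
        results.append(infer(batch))
    return results
```

Reloading per frame defeats the point of a video pipeline, so re-exporting on a GPU machine (as above) is the real fix; this is only a stopgap.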
PuneethBC
How did you do it? I added os.environ["CUDA_VISIBLE_DEVICES"] = '0' in save_model.py but it doesn't work.
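One possible reason the env var had no effect (an assumption, not confirmed by the thread): CUDA_VISIBLE_DEVICES only influences TensorFlow if it is set before TensorFlow is imported. A minimal sketch:

```python
import os

# Must run BEFORE `import tensorflow`. If save_model.py (or anything it
# imports) pulls in TensorFlow first, setting the variable later is
# silently ignored and the export still happens on CPU.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# import tensorflow as tf  # import TF only AFTER the variable is set
```

Setting the variable on the command line (`CUDA_VISIBLE_DEVICES=0 python save_model.py ...`) sidesteps the ordering problem entirely.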
Did not work for me. The second inference is always invalid.
Is there a solution for this? I tried TensorFlow 2.4 through 2.9, no joy.
According to https://githubmemory.com/repo/google/automl/issues/896, there is a known issue in TF 2.2.0 that was fixed in 2.3.1.
Worked for me: pip install tensorflow-gpu==2.3.1
Hi, tf2.3.0 works on both CPU and GPU, but detection with yolov4-tiny-416 is slow, so I moved to tf2.7. That works on CPU, but on GPU it detects only the first frame; subsequent frames detect nothing. I suspect the problem is in save_model.py.
I load the model like this: