Closed songw-zju closed 2 years ago
Are you using the bash script in tools/eval_train.sh?
With this script the CUDA error should not happen, since I used a GPU with only 8 GB. The performance issue should also be fixed by using the bash script, since it defines the correct parameters (for example the resolution, which checkpoint to use, and so on) as used in the paper.
You can run bash tools/eval_train.sh and you should be able to reproduce the results.
You should maybe pull the repo, since I have updated the documentation and the bash scripts since the last issue. :smile:
Thank you for your suggestion! I will retest with the updated code.
Ok! Let me know if the problem still occurs.
Hi, @nuneslu, the problem disappeared with the updated code. Thanks for your help and great work!
Thanks for your great work. It's so helpful for me. I encountered a problem (CUDA out of memory) when running inference_vis.py for inference with a single RTX 2080 Ti (11 GB). Then I tried to clear the gradient information during the inference phase in the following two ways: [code screenshots not preserved]
But the precision obtained in the end was terrible. Is there something wrong with my operation?
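For reference, the usual way to avoid storing gradient information during inference in PyTorch is to switch the model to eval mode and wrap the forward pass in torch.no_grad(). A minimal sketch (the model and input here are hypothetical stand-ins, not the network from this repository):

```python
import torch
import torch.nn as nn

# Hypothetical model; stands in for whatever network inference_vis.py loads.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

model.eval()  # disable dropout, use running batch-norm statistics

with torch.no_grad():  # no autograd graph is built, so activations are freed
    x = torch.randn(8, 16)
    out = model(x)

print(out.requires_grad)  # False: no gradient information is stored
```

Note that torch.no_grad() only skips gradient bookkeeping; it does not change the forward computation itself, so by itself it should not degrade accuracy. A large drop in precision usually points elsewhere, e.g. to evaluation parameters (resolution, checkpoint) that differ from the paper's settings.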