bytedance / SPTSv2

The official implementation of SPTS v2: Single-Point Text Spotting
Apache License 2.0

CUDA out of memory #2

Closed 1037419569 closed 1 year ago

1037419569 commented 1 year ago

Why isn't a single 10 GB GPU enough for one predict run?

1037419569 commented 1 year ago

RuntimeError: CUDA out of memory. Tried to allocate 76.00 MiB (GPU 0; 10.76 GiB total capacity; 9.57 GiB already allocated; 74.81 MiB free; 9.70 GiB reserved in total by PyTorch)

Zerohertz commented 1 year ago

Did you run predict.py? When I run inference with main.py --eval there is no problem with GPU memory, but when I run predict.py I hit the same memory error as you.

ChulyoungKwak commented 1 year ago

I also suffered from the same problem, but it works after adding @torch.no_grad() above def main(args) in predict.py (line 123). I'm not sure whether this solution is correct, though.
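For reference, a minimal, self-contained sketch of where the decorator goes (the toy model and argument handling below are placeholders, not the actual code in predict.py):

```python
import argparse
import torch
import torch.nn as nn

@torch.no_grad()  # disable autograd so activations are not kept for a backward pass
def main(args):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    # Toy stand-in model; the real predict.py builds the SPTS v2 model and loads a checkpoint here.
    model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1)).to(device).eval()
    image = torch.randn(args.batch_size, 3, 224, 224, device=device)
    out = model(image)
    print(out.shape)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--batch_size", type=int, default=1)
    main(parser.parse_args())
```

Wrapping only the forward call in a `with torch.no_grad():` block has the same effect if decorating the whole of main is too broad.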

1037419569 commented 1 year ago

Yes, I ran predict.py!

zhangjx123 commented 1 year ago

Thanks for ChulyoungKwak's answer. Because I usually work on an A100 GPU, I missed this problem. If GPU memory is not enough, @torch.no_grad() is a good solution.