Closed: Latrolage closed this issue 9 months ago
If your computing resources are limited, you can use DAT-light. Minimum requirements:

- Training: 4 $\times$ 8.9 GB (i.e., 4 GPUs, each with more than 8.9 GB of memory)
- Testing: 5 GB (a single GPU)
PS: When testing, setting `use_chop: True` in the config (like here) also effectively reduces memory usage.
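For intuition, chopping splits the input into overlapping tiles, runs the model on each tile, and stitches the results, so peak memory scales with tile size rather than full image size. The repo's actual `use_chop` implementation may differ; below is a minimal sketch for a scale factor of 1, with a hypothetical `forward_chop` helper:

```python
import torch

def forward_chop(model, x, overlap=8):
    """Run `model` on four overlapping quadrants of `x` (N, C, H, W)
    and stitch the outputs back together. Assumes the model preserves
    spatial size (scale factor 1); real SR code also scales coordinates."""
    _, _, h, w = x.shape
    h_half, w_half = h // 2, w // 2
    h_size, w_size = h_half + overlap, w_half + overlap
    patches = [
        x[..., :h_size, :w_size],    # top-left
        x[..., :h_size, -w_size:],   # top-right
        x[..., -h_size:, :w_size],   # bottom-left
        x[..., -h_size:, -w_size:],  # bottom-right
    ]
    outs = [model(p) for p in patches]  # each tile needs far less memory
    out = x.new_zeros(x.shape)
    # Copy only the non-overlapping core of each tile into the output.
    out[..., :h_half, :w_half] = outs[0][..., :h_half, :w_half]
    out[..., :h_half, w_half:] = outs[1][..., :h_half, (w_size - (w - w_half)):]
    out[..., h_half:, :w_half] = outs[2][..., (h_size - (h - h_half)):, :w_half]
    out[..., h_half:, w_half:] = outs[3][..., (h_size - (h - h_half)):,
                                         (w_size - (w - w_half)):]
    return out
```

With an identity model, the stitched result reproduces the input exactly, which is a quick sanity check that the tile coordinates line up.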
I'm getting:

```
RuntimeError: CUDA out of memory. Tried to allocate 2.11 GiB (GPU 0; 7.79 GiB total capacity; 5.08 GiB already allocated; 1.51 GiB free; 5.09 GiB reserved in total by PyTorch)
```
Could you add the minimum requirements to the README?