Yes, DAFormer just fits into the 11 GB memory of an RTX 2080 Ti. To save GPU memory, you could try running our method with the simplified SegFormer decoder. You can find more details on the configuration here: experiments.py#L327. This corresponds to row 8 of Table 5 in the paper.
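For illustration, the switch might look roughly like the sketch below; the architecture strings are assumptions rather than the repo's guaranteed option names, so please check the linked line in experiments.py for the actual values.

```python
# Hypothetical sketch of the decoder switch in experiments.py.
# The option strings below are assumptions; see experiments.py#L327 for the real ones.

# architecture = 'daformer_sepaspp_mitb5'  # full DAFormer decoder (default)
architecture = 'segformer_mitb5'           # simplified SegFormer decoder (Table 5, row 8)
```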
It might also be possible to save GPU memory by using pre-computed ImageNet features for the feature distance (FD) here: mmseg/models/uda/dacs.py#L159. However, I don't know if that would be sufficient to fit the original DAFormer into the 10 GB of an RTX 3080.
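As an illustration only, caching the ImageNet features could look roughly like the sketch below; `imnet_encoder` and `img_id` are placeholder names, and the exact hook-in point in dacs.py is an assumption, not the repo's actual code.

```python
import torch

# Illustrative sketch: compute the ImageNet-encoder features once per image and
# reuse them, instead of running the frozen encoder on every training step.
_feature_cache = {}

def get_imnet_features(imnet_encoder, img, img_id):
    """Return cached features for `img_id`; compute them once without gradients."""
    if img_id not in _feature_cache:
        with torch.no_grad():
            # Store on CPU so the cache does not occupy GPU memory.
            _feature_cache[img_id] = imnet_encoder(img).detach().cpu()
    return _feature_cache[img_id].to(img.device)
```

The potential saving would come from not having to run (and keep activations for) a second ImageNet-pretrained encoder during training; whether this is compatible with the random crops used for the FD would need to be checked.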
Thanks for the swift response. I have tried the simplified SegFormer decoder with the configuration you have highlighted. Unfortunately, this was still not sufficient.
Further possibilities to reduce memory consumption are:

- disabling the ImageNet feature distance (FD),
- reducing the crop size (e.g. from 512x512 to a smaller crop),
- using a smaller MiT encoder.
Even though these changes will most probably reduce the mIoU (as shown in the ablations in the paper), they might be a starting point to get DAFormer running on your GPU. Further, switching to a less demanding window manager (e.g. xfce) can sometimes free up the few extra MB of GPU memory that are missing.
Many thanks for your suggestions. I am (just) able to fit DAFormer into 10 GiB of GPU memory by disabling FD and reducing the crop size from 512x512 to 480x480. I have not experimented with using a smaller encoder yet, but this may not be necessary.
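In case it helps others with the same GPU, the two changes might look roughly like the mmseg-style overrides below; key names such as `imnet_feature_dist_lambda` are assumptions based on typical configs in this code base, so please verify them against the actual config files.

```python
# Rough sketch of the two overrides; key names are assumptions, not verified.

uda = dict(
    imnet_feature_dist_lambda=0.0,  # disable the ImageNet feature distance (FD)
)

crop_size = (480, 480)  # reduced from the default 512x512
train_pipeline = [
    # ... other transforms ...
    dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
    # ... other transforms ...
]
```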
Hi, thanks for your excellent work.
I see that your experiments were run on an RTX 2080 Ti (11 GB?). I am getting the following error with an RTX 3080 (10 GB) and wonder whether this is expected. Do you have any tips for reducing GPU memory usage?
Full terminal output: