LiheYoung / UniMatch

[CVPR 2023] Revisiting Weak-to-Strong Consistency in Semi-Supervised Semantic Segmentation
https://arxiv.org/abs/2208.09910
MIT License

Memory requirements for training #91

Closed: DeepHM closed this issue 1 year ago

DeepHM commented 1 year ago

Thank you for your wonderful research.

There are a few things I don't understand. Regarding memory requirements during training, the usage I measured (on two GPUs) is as follows:

This is the part I don't understand: most other recent SOTA methods also use an input resolution of 512 or 513, and they do not require much GPU memory to train (at 513 resolution with ResNet-50, mostly below 15000 MiB). In my view, your approach should not be especially expensive to compute either, yet as the memory results above show, it seems to need far more resources. Could you explain why?
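For reference, here is a minimal sketch of the weak-to-strong consistency pattern the paper describes, just to show where activation memory accumulates: each unlabeled batch is forwarded several times, and every gradient-carrying pass keeps its activations alive until backward. The names `model`, `weak_aug`, and `strong_aug` are hypothetical placeholders, not the repo's actual API, and the feature-perturbed stream that UniMatch additionally runs is omitted here:

```python
import torch
import torch.nn.functional as F

# Sketch only: each unlabeled batch is forwarded multiple times,
# so activation memory grows with the number of views.
def unlabeled_step(model, x_u, weak_aug, strong_aug):
    x_w = weak_aug(x_u)                             # weakly augmented view
    x_s1, x_s2 = strong_aug(x_u), strong_aug(x_u)   # two strong views

    with torch.no_grad():                           # no activations kept here
        pseudo = model(x_w).argmax(dim=1)           # pseudo-label from weak view

    # Gradient-carrying forward passes: the activations of every pass must
    # stay alive until backward, unlike single-view methods.
    logits_s1 = model(x_s1)
    logits_s2 = model(x_s2)

    loss = 0.5 * (F.cross_entropy(logits_s1, pseudo)
                  + F.cross_entropy(logits_s2, pseudo))
    return loss
```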

Thank you in advance. Good luck.

LiheYoung commented 1 year ago

Training at 321 resolution takes around 15 GB of GPU memory in total on my machine.
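A standard way to check how that total splits across devices is to print each GPU's peak allocation after a few training iterations. These are stock PyTorch calls, not anything specific to this repo:

```python
import torch

# Print the peak memory allocated on each visible GPU so far.
for i in range(torch.cuda.device_count()):
    peak_gib = torch.cuda.max_memory_allocated(i) / 1024 ** 3
    print(f"GPU {i}: peak {peak_gib:.2f} GiB allocated")
```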

EduardoLawson1 commented 8 months ago

Do you mean 15 GB per GPU or in total? I have an NVIDIA TITAN X and would like to know whether it is possible to train on Pascal VOC.
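Not an authoritative answer, but if the ~15 GB figure above is the total across two GPUs, the 321 setting may fit a 12 GB TITAN X. Common levers for reducing memory further are a smaller crop size, a smaller per-GPU batch size, and mixed precision. A sketch of the standard PyTorch AMP pattern, with `model`, `optimizer`, and `loader` as placeholders rather than the repo's training loop:

```python
import torch
import torch.nn.functional as F

# Mixed precision cuts activation memory roughly in half.
scaler = torch.cuda.amp.GradScaler()

for images, targets in loader:              # e.g. 321x321 crops, small batch
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():         # fp16 forward pass
        loss = F.cross_entropy(model(images), targets)
    scaler.scale(loss).backward()           # scale loss to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()
```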