IamYourAlpha opened this issue 1 year ago
The ConvMAE-Base detector with batch size 32 is trained on 8 A100 (80GB) GPUs; GPU memory usage is approximately 40-50GB per GPU. We recommend using the updated ConvMAE Det at the following GitHub link: https://github.com/OpenGVLab/Official-ConvMAE-Det
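For reference, a rough sketch of how that setting (total batch size 32 across 8 A100s, i.e. 4 images per GPU) would be expressed in the ViTDet-style LazyConfig layout that Official-ConvMAE-Det follows; the config path below is a placeholder, not an actual file from the repo.

```python
from detectron2.config import LazyConfig

# Placeholder path; substitute the actual ConvMAE-Det config you are training.
cfg = LazyConfig.load("projects/ConvMAEDet/configs/convmae_base_mask_rcnn.py")

# 32 images in total, split across 8 GPUs -> 4 images per A100,
# which is consistent with the ~40-50GB per-GPU memory figure above.
cfg.dataloader.train.total_batch_size = 32
```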
Thanks for the GitHub link.
Would it be possible to share the compiled version of the customized detectron2 (since changes were made in this repository)?
Our ConvMAE implementation does not modify the detectron2 library, as shown below:
https://github.com/OpenGVLab/Official-ConvMAE-Det/blob/main/projects/ConvMAEDet/modeling/convmae.py
The official detectron2 library should support ConvMAE.
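To illustrate why stock detectron2 is enough: ViTDet-style models are assembled entirely from LazyCall nodes, so a project-local backbone such as the linked convmae.py can be plugged in the same way the official ViT backbone is, without patching the library. The sketch below uses only classes that ship with recent detectron2 source builds; swapping in the ConvMAE backbone would only change the `net=` entry.

```python
from detectron2.config import LazyCall as L
from detectron2.modeling.backbone.vit import ViT, SimpleFeaturePyramid

# The official ViTDet backbone, declared purely in config code; a project-defined
# backbone (e.g. the module from convmae.py) would replace the L(ViT)(...) node.
backbone = L(SimpleFeaturePyramid)(
    net=L(ViT)(
        img_size=1024,
        patch_size=16,
        embed_dim=768,
        depth=12,
        num_heads=12,
        drop_path_rate=0.1,
        window_size=14,
        out_feature="last_feat",
    ),
    in_feature="last_feat",
    out_channels=256,
    scale_factors=(4.0, 2.0, 1.0, 0.5),
    norm="LN",
    square_pad=1024,
)
```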
I see. It seems the source version of detectron2 is well ahead of the precompiled builds that are available.
Please first build detectron2 from source so that your environment supports the official ViTDet.
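As a rough sanity check (assuming a recent source build of detectron2), the imports below only succeed once the environment actually ships the ViTDet backbone code, so they fail on the older precompiled wheels.

```python
import detectron2
# These classes were added alongside the ViTDet project and are absent from old releases.
from detectron2.modeling.backbone.vit import ViT, SimpleFeaturePyramid  # noqa: F401

print("detectron2", detectron2.__version__, "- ViTDet-style backbones are importable")
```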
I have tried training the ConvMAE detector (as provided in this repository) on 2 GPUs with 32GB each (V100). It looks like I can only train with batch size = 2; going beyond batch size 2 raises a CUDA out-of-memory error. With such a small batch size, training does not seem to produce a well-trained model. Could you tell me the recommended GPU memory size for training the model with batch size = 32?
Thank you so much.
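Regarding the 2 x 32GB V100 setup above, here is a rough sketch of the kind of adjustments that usually help a ViT-sized detector fit, assuming the ViTDet-style LazyConfig field names; the config path is a placeholder and this is not an official ConvMAE recipe.

```python
from detectron2.config import LazyConfig

# Placeholder path; substitute the actual ConvMAE-Det config.
cfg = LazyConfig.load("projects/ConvMAEDet/configs/convmae_base_mask_rcnn.py")

base_lr = cfg.optimizer.lr                  # LR tuned for total batch size 32
cfg.dataloader.train.total_batch_size = 4   # 2 V100s x 2 images each
cfg.optimizer.lr = base_lr * 4 / 32         # linear LR scaling for the smaller batch
cfg.train.amp.enabled = True                # mixed precision cuts activation memory
# Gradient checkpointing, if the ConvMAE backbone exposes this flag (assumption):
cfg.model.backbone.net.use_act_checkpoint = True
```

Even with mixed precision and activation checkpointing, the gap between the 8 x 80GB A100 recipe and 2 x 32GB V100s is large, so a reduced per-GPU batch with a proportionally scaled learning rate (and correspondingly more iterations) is generally still needed.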