amazon-science/omni-detr

PyTorch implementation of Omni-DETR for omni-supervised object detection: https://arxiv.org/abs/2203.16089

CUDA out of memory after BURN_IN_STEP #10

Open becauseofAI opened 1 year ago

becauseofAI commented 1 year ago

The code runs fine during the BURN_IN_STEP stage with pixels set to 800 on a 32 GB GPU. However, once semi-supervised learning starts, it runs out of memory on both 32 GB and 80 GB GPUs, even after reducing pixels to 600.

The CUDA out of memory errors are as follows:

```
RuntimeError: CUDA out of memory. Tried to allocate 506.00 MiB (GPU 1; 31.75 GiB total capacity; 27.74 GiB already allocated; 424.00 MiB free; 29.83 GiB reserved in total by PyTorch)
RuntimeError: CUDA out of memory. Tried to allocate 1.97 GiB (GPU 0; 79.35 GiB total capacity; 56.13 GiB already allocated; 1.38 GiB free; 57.79 GiB reserved in total by PyTorch)
```

What could be the problem? Any help would be appreciated. @zhaoweicai
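
For anyone debugging the same crash: PyTorch's built-in allocator stats can show whether memory grows a little every iteration (a leak) or spikes at one step (a single oversized forward pass). A minimal sketch; the device index 0 is an assumption, adjust it to the failing GPU:

```python
import torch

# Detailed allocator breakdown (live tensors vs. cached blocks).
# Call this right before the step that crashes.
print(torch.cuda.memory_summary(device=0))

# Coarser counters, cheap enough to log every iteration:
allocated = torch.cuda.memory_allocated(0) / 1024**3
reserved = torch.cuda.memory_reserved(0) / 1024**3
print(f"allocated: {allocated:.2f} GiB, reserved: {reserved:.2f} GiB")
```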

wwwbq commented 1 year ago

I also have this problem. Have you solved it?

peiwang062 commented 1 year ago

Hi, did you make any changes to the code, for example to the batch size? The current code only supports a batch size of 1.
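
One plausible reason the burn-in stage fits but the semi-supervised stage does not: in a student-teacher setup like Omni-DETR's, the second stage runs an extra teacher forward pass on the unlabeled data to produce pseudo labels, so activation memory roughly doubles. If that teacher pass is not wrapped in `torch.no_grad()`, PyTorch also retains its activations for a backward pass that never happens. A minimal sketch of the pattern, not the repo's actual API; `student`, `teacher`, and `semi_supervised_step` are hypothetical names:

```python
import torch
from torch import Tensor, nn

def semi_supervised_step(student: nn.Module, teacher: nn.Module,
                         unlabeled: Tensor) -> tuple[Tensor, Tensor]:
    """One forward pass of the semi-supervised stage."""
    with torch.no_grad():             # teacher is inference-only, so
        pseudo = teacher(unlabeled)   # don't retain its activations
    out = student(unlabeled)          # gradients flow only through here
    return out, pseudo                # caller computes the loss
```

With the teacher under `torch.no_grad()`, only the student's activations are stored for backward, which matters a lot at 800-pixel inputs with a transformer detector.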