user0407 / CLUDA

Implementation of CLUDA: Contrastive Learning in Unsupervised Domain Adaptation for Semantic Segmentation

CUDA out of memory Issue #4

Closed. leoil closed this issue 1 year ago

leoil commented 1 year ago

Hello, I followed the training script in your repo, but it won't run on my 3090 with 24 GB of memory.

I'm able to run the original HRDA with the same config (1024x1024 crop_size), and the memory usage is around 24088/24268 MB.

Could you please specify the GPU you used for the experiments and its memory usage?

Also, are there any tweaks that can be made to fit the model onto a single 3090?

user0407 commented 1 year ago

Hi,

Thanks for your interest in our work. I used an NVIDIA V100 32 GB GPU. In your case, try using a smaller feature map size, maybe 32x32. I don't know how this will affect the performance, but I think it will be fine.

Regards, Midhun V
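(Editor's note: the snippet below is not from the CLUDA repo, just a minimal sketch of why a smaller feature map helps. If the contrastive loss builds a pairwise similarity matrix over pixel features, as dense contrastive losses typically do, that matrix has (H*W)^2 entries, so pooling the projector output down to 32x32 before the loss shrinks the dominant memory term sharply. All names and shapes here are illustrative.)

```python
# Minimal sketch (not CLUDA's actual code): shrinking the feature map before a
# dense contrastive loss. Halving H and W cuts the (H*W)^2 similarity matrix by 16x.
import torch
import torch.nn.functional as F

def dense_contrastive_similarity(feats, out_size=32, temperature=0.1):
    """feats: (B, C, H, W) projector output. `out_size` is the smaller
    feature-map size suggested in this thread (e.g. 32x32)."""
    # Downsample the feature map before building pairwise similarities.
    feats = F.adaptive_avg_pool2d(feats, out_size)        # (B, C, out, out)
    b, c, h, w = feats.shape
    feats = feats.flatten(2).transpose(1, 2)              # (B, H*W, C)
    feats = F.normalize(feats, dim=-1)
    # (B, H*W, H*W) similarity matrix -- this is the memory-hungry part.
    sim = torch.bmm(feats, feats.transpose(1, 2)) / temperature
    return sim

if __name__ == "__main__":
    x = torch.randn(2, 256, 128, 128)
    print(dense_contrastive_similarity(x, out_size=32).shape)  # torch.Size([2, 1024, 1024])
```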


yuheyuan commented 1 year ago

Hello, I followed the training script in your repo, but it won't run on my 3090 with 24 GB of memory.

I'm able to run the original HRDA with the same config (1024x1024 crop_size), and the memory usage is around 24088/24268 MB.

Could you please specify the GPU you used for the experiments and its memory usage?

Also, are there any tweaks that can be made to fit the model onto a single 3090?

Have you succeeded in running the code on a 3090 GPU? I also have two 3090 GPUs, and I'm running into the same problem as you.

leoil commented 1 year ago

Have you succeeded in running the code on a 3090 GPU? I also have two 3090 GPUs, and I'm running into the same problem as you.

Not at the moment. Training the model on distributed GPUs may require some additional work, but I'm not that familiar with it; please let me know if you have any ideas.
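(Editor's note: this is not how this repo wires up multi-GPU training, but for reference the generic PyTorch DistributedDataParallel pattern looks roughly like the sketch below; the toy model and loop are placeholders for the real training step.)

```python
# Generic DistributedDataParallel sketch (not specific to CLUDA/HRDA),
# illustrating the "additional work" needed to spread training over two 3090s.
# Launch with: torchrun --nproc_per_node=2 ddp_sketch.py
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    torch.cuda.set_device(local_rank)
    device = f"cuda:{local_rank}"

    # Toy segmentation head standing in for the real model (19 = Cityscapes classes).
    model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 19, 1)).to(device)
    model = DDP(model, device_ids=[local_rank])
    optim = torch.optim.SGD(model.parameters(), lr=0.01)
    criterion = nn.CrossEntropyLoss()

    for step in range(10):  # toy loop standing in for the real training loop
        images = torch.randn(2, 3, 256, 256, device=device)
        labels = torch.randint(0, 19, (2, 256, 256), device=device)
        loss = criterion(model(images), labels)
        optim.zero_grad()
        loss.backward()     # DDP averages gradients across the two GPUs here
        optim.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```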

yuheyuan commented 1 year ago

Hello, I followed the training script in your repo, but it won't run on my 3090 with 24 GB of memory. I'm able to run the original HRDA with the same config (1024x1024 crop_size), and the memory usage is around 24088/24268 MB. Could you please specify the GPU you used for the experiments and its memory usage? Also, are there any tweaks that can be made to fit the model onto a single 3090?

Have you succeeded in running the code on a 3090 GPU? I also have two 3090 GPUs, and I'm running into the same problem as you.

Yes, I know training the model on distributed GPUs takes more work, so I hope to use only one GPU. The author told you to "try using a smaller feature map size, maybe 32x32", so I want to know whether you have succeeded in running the code that way, with a smaller feature map.
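(Editor's note: one generic way to check whether a smaller feature map actually fits, before committing to a full run, is to compare peak GPU memory around a single training step. The helper below is illustrative, not from the repo.)

```python
# Small generic helper (not from the repo) for checking whether a change such as
# a smaller contrastive feature map actually lowers peak GPU memory on a 3090.
import torch

def report_peak_memory(step_fn, device="cuda:0"):
    """Run one training step via `step_fn()` and print peak allocated memory."""
    torch.cuda.reset_peak_memory_stats(device)
    step_fn()
    torch.cuda.synchronize(device)
    peak_gib = torch.cuda.max_memory_allocated(device) / 1024 ** 3
    print(f"peak allocated: {peak_gib:.2f} GiB")

if __name__ == "__main__":
    # Toy stand-in for one forward/backward pass of the real model.
    model = torch.nn.Conv2d(3, 64, 3, padding=1).cuda()
    def step():
        x = torch.randn(4, 3, 1024, 1024, device="cuda")
        model(x).sum().backward()
    report_peak_memory(step)
```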

leoil commented 1 year ago

Not yet. I have yet to fully understand the structure and details of the related repos. :)