tianrun-chen / SAM-Adapter-PyTorch

Adapting Meta AI's Segment Anything to Downstream Tasks with Adapters and Prompts
MIT License

CUDA runs out of memory when I use the model I trained, but the pretrained model works fine #61

Open shhjjj opened 7 months ago

shhjjj commented 7 months ago

Hi, I have some problems when I try to use the model I trained. The first is that I use the vit_h config and my GPU is an RTX A6000 (48 GB). When I train with the vit_h pretrained model there is no error, but after training, when I run test.py with the checkpoint I trained, CUDA runs out of memory. I already tried batch_size=1, but the error still happens. The second problem is that when I try to train or test on 2 GPUs with the .pth I saved, the first GPU runs all local_rank processes, which also makes CUDA run out of memory.

lixhere commented 6 months ago

Hello, I have also encountered the first situation you mentioned. May I ask if you have resolved it?

WenDongyp commented 4 months ago

> Hello, I have also encountered the first situation you mentioned. May I ask if you have resolved it?

Hello, I ran into the same situation. Were you able to solve it?

Divine0719 commented 3 months ago

@shhjjj @lixhere @WenDongyp @tianrun-chen In test.py, try wrapping inference in `torch.no_grad()` so autograd does not keep activations for a backward pass:

```python
with torch.no_grad():
    pred = torch.sigmoid(model.infer(inp))
```
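For the second problem (every rank allocating on GPU 0), one likely cause is that `torch.load` restores tensors onto the device they were saved from, which is usually `cuda:0`. A minimal sketch of the usual workaround, assuming the checkpoint is loaded with a plain `torch.load(path)` somewhere in test.py; `load_checkpoint` is a hypothetical helper, not part of this repo:

```python
import torch

def load_checkpoint(path, local_rank=None):
    # Hypothetical helper: by default torch.load restores tensors to the
    # device they were saved from (often cuda:0), so every DDP rank would
    # allocate its copy on GPU 0. map_location redirects the load.
    if local_rank is None:
        map_location = "cpu"               # load to CPU first, move later
    else:
        map_location = f"cuda:{local_rank}"  # load directly onto this rank's GPU
    return torch.load(path, map_location=map_location)

# Minimal CPU-only demonstration: save a state dict and reload it.
ckpt = {"w": torch.ones(2, 2)}
torch.save(ckpt, "/tmp/demo_ckpt.pth")
restored = load_checkpoint("/tmp/demo_ckpt.pth")
print(restored["w"].device)
```

In a DDP test script you would typically pass the process's `local_rank` (or load on CPU and then call `model.to(local_rank)`) so each process only touches its own GPU.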