I will be adding LoRA layers to the SAM image encoder, as described in the paper, and fine-tuning it on a few-shot custom medical dataset. The problem is that I only have 8 GB of GPU memory available from my university.
Would that be enough memory for few-shot fine-tuning? If not, is there any way to do LoRA fine-tuning in under 8 GB, e.g. by reducing the batch size?
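Concretely, this is the kind of setup I have in mind. It is only a rough sketch, assuming the official `segment_anything` package; the rank, learning rate, checkpoint path, and loss are illustrative placeholders, not the paper's exact recipe:

```python
# Rough sketch: LoRA on SAM's qkv projections plus the usual low-memory
# levers (freeze everything except LoRA, batch size 1, mixed precision).
import torch
import torch.nn as nn
from segment_anything import sam_model_registry


class LoRAQkv(nn.Module):
    """Wrap a frozen qkv projection with a trainable low-rank update."""

    def __init__(self, qkv: nn.Linear, rank: int = 4):
        super().__init__()
        self.qkv = qkv
        self.lora_a = nn.Linear(qkv.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, qkv.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # start as a zero update

    def forward(self, x):
        return self.qkv(x) + self.lora_b(self.lora_a(x))


sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth").cuda()
for p in sam.parameters():            # freeze the whole model first,
    p.requires_grad = False
for blk in sam.image_encoder.blocks:  # then inject trainable LoRA layers
    blk.attn.qkv = LoRAQkv(blk.attn.qkv, rank=4).cuda()

trainable = [p for p in sam.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)  # placeholder lr
scaler = torch.cuda.amp.GradScaler()  # mixed precision cuts activation memory

image = torch.randn(1, 3, 1024, 1024, device="cuda")  # batch size 1
with torch.cuda.amp.autocast():
    emb = sam.image_encoder(image)    # the memory-heavy forward pass
    loss = emb.square().mean()        # stand-in for a real segmentation loss
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
optimizer.zero_grad(set_to_none=True)
```

If that still does not fit, I understand gradient checkpointing on the encoder blocks (`torch.utils.checkpoint`) and gradient accumulation can trade compute for further memory savings.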
Thanks a lot for your suggestion. I just have a few more things to ask:
1. Would decreasing the LoRA rank below 4 give me a chance to work within the memory budget?
2. Do you think fine-tuning the mask decoder only (image and prompt encoders frozen) would be possible on my hardware, given that the SAM authors mention their mask decoder is very lightweight?
Since rank 4 is already quite low, I do not think a lower rank would save considerable memory or computation.
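For reference, a quick back-of-envelope count (my own arithmetic, assuming ViT-B with LoRA only on the qkv projections) shows why the rank barely matters for memory:

```python
# LoRA adds A (dim -> r) and B (r -> 3*dim) per attention block, so the
# trainable-parameter count is linear in the rank r and tiny either way.
dim, n_blocks = 768, 12                 # ViT-B: 12 blocks, width 768
for rank in (4, 2, 1):
    per_block = rank * dim + rank * 3 * dim
    print(f"rank {rank}: {per_block * n_blocks / 1e6:.2f} M LoRA params")
# rank 4 -> ~0.15 M parameters; their gradients and optimizer state cost
# a few megabytes at most, so activations dominate the 8 GB budget.
```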
It would be possible to fine-tune the mask decoder only. However, the expressiveness of the lightweight decoder is limited, so performance may degrade. You can try both of your proposals.
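If you go the decoder-only route, the setup could look like the sketch below (my own sketch, assuming the official `segment_anything` API; the prompt-free call and the loss are placeholders). Running the frozen encoder under `no_grad` means none of its activations are kept for the backward pass, so it only costs inference memory:

```python
import torch
from segment_anything import sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth").cuda()
for module in (sam.image_encoder, sam.prompt_encoder):
    for p in module.parameters():       # train the mask decoder only
        p.requires_grad = False

optimizer = torch.optim.AdamW(sam.mask_decoder.parameters(), lr=1e-4)

image = torch.randn(1, 3, 1024, 1024, device="cuda")  # placeholder batch
with torch.no_grad():                   # frozen encoder: no stored activations
    embeddings = sam.image_encoder(image)
sparse, dense = sam.prompt_encoder(points=None, boxes=None, masks=None)
masks, iou_pred = sam.mask_decoder(
    image_embeddings=embeddings,
    image_pe=sam.prompt_encoder.get_dense_pe(),
    sparse_prompt_embeddings=sparse,
    dense_prompt_embeddings=dense,
    multimask_output=False,
)
loss = masks.sigmoid().mean()           # stand-in for Dice/CE on your labels
loss.backward()
optimizer.step()
optimizer.zero_grad(set_to_none=True)
```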
Thanks for the great work!
Your suggestions will be appreciated.
Regards, Muhammad