mbzuai-oryx / groundingLMM

[CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses that are seamlessly integrated with object segmentation masks.
https://grounding-anything.com

Easiest way to fine-tune on custom data? #27

joshmyersdean closed this issue 6 months ago

joshmyersdean commented 7 months ago

Hello! Thank you for this great work! Is there a preferred way to fine-tune this model on custom data? I am specifically interested in fine-tuning for open-vocabulary segmentation and referring segmentation.

Thank you!

hanoonaR commented 6 months ago

Hi @joshmyersdean,

Thank you for your interest in our work! I'm glad to hear you're considering fine-tuning our model for open-vocabulary and referring segmentation tasks. Given that GLaMM has been pre-trained across a variety of data types, including object detection, region-level captions, object segmentation, and referring expression segmentation, it's well-suited for your applications.

To proceed, ensure your dataset is formatted according to the specifications for the relevant dataset type (for example, refer to this for open-vocabulary segmentation and this for referring segmentation). If you need specific guidance on preparing your data or have any other questions, please feel free to reach out. We're here to help.
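For illustration, a referring-segmentation record typically pairs a COCO-style mask with one or more referring expressions. The sketch below assumes a RefCOCO-like layout; the field names are illustrative, not the repo's authoritative schema, so please check the dataset classes in the repository for the exact format.

```python
# A minimal, hypothetical referring-segmentation record in RefCOCO style.
# All field names are illustrative assumptions; the authoritative schema is
# defined by the dataset classes in the GLaMM repository.
sample = {
    "image": "train2014/COCO_train2014_000000000009.jpg",
    "bbox": [73.4, 41.2, 185.0, 190.5],         # COCO-style [x, y, w, h]
    "segmentation": [[84.0, 52.0, 210.0, 52.0,  # polygon vertices (x1, y1, x2, y2, ...)
                      210.0, 220.0, 84.0, 220.0]],
    "sentences": ["the bowl of broccoli on the left"],
}
```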

Apologies for the delayed response, and we'll make sure to be more prompt moving forward.

joshmyersdean commented 6 months ago

Thank you so much! Do you happen to have a script for fine-tuning or recommendations on which layers to freeze/train?

hanoonaR commented 6 months ago

Hi @joshmyersdean,

Thank you for following up! Here are some further details on fine-tuning our model for your specific needs.

We offer two scripts for training: train.py and train_ft.py.

train.py is ideal when dealing with a mix of dataset types, such as a combination of region/bbox data, segmentation data, and captioning data. This script manages the diverse forward passes by sampling data accordingly, facilitating a balanced training regime across different types.

On the other hand, train_ft.py is tailored for focusing on a single data type at a time. It doesn't set steps_per_epoch explicitly but estimates the steps based on the dataset's length, ensuring the model iterates through the entire dataset effectively.
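As a rough sketch of that estimate (the function and variable names here are assumptions, not the actual code in train_ft.py), the per-epoch step count can be derived from the dataset length and the effective batch size:

```python
import math

# Hypothetical reconstruction of the steps-per-epoch estimate described
# above; the real train_ft.py may use different names and arguments.
def estimate_steps_per_epoch(dataset_len: int, batch_size: int,
                             grad_accum_steps: int, world_size: int) -> int:
    effective_batch = batch_size * grad_accum_steps * world_size
    return math.ceil(dataset_len / effective_batch)

print(estimate_steps_per_epoch(50_000, 4, 10, 8))  # illustrative values only
```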

For fine-tuning on open-vocabulary segmentation and referring segmentation tasks, train_ft.py would be your go-to script, since both datasets are primarily segmentation-oriented and share the same forward pass.

Regarding layer management during fine-tuning:

We generally freeze the global image encoder (CLIP), the grounding image encoder (SAM encoder), and the LLM during pre-training and fine-tuning. The trainable layers include the Region-encoder, Vision-Language (V-L) projection layers, LoRA LLM layers, the Language-to-Prompt (L-P) projection layer, and the mask decoder.
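As a minimal PyTorch sketch of that split (the module attribute names are assumptions; the actual names are defined in the GLaMM model class):

```python
import torch.nn as nn

def apply_default_freeze(model: nn.Module) -> None:
    """Freeze the backbones; keep the adapter/decoder modules trainable.

    The attribute names below are hypothetical placeholders for GLaMM's
    global image encoder (CLIP), grounding encoder (SAM), and base LLM.
    """
    # Frozen during both pre-training and fine-tuning.
    for module in (model.clip_encoder, model.sam_encoder, model.llm):
        for p in module.parameters():
            p.requires_grad = False

    # Trainable: region encoder, V-L projection, L-P projection, mask decoder.
    for module in (model.region_encoder, model.vl_projection,
                   model.lp_projection, model.mask_decoder):
        for p in module.parameters():
            p.requires_grad = True

    # LoRA layers inside the LLM stay trainable; with PEFT, these are the
    # parameters whose names contain "lora_".
    for name, p in model.llm.named_parameters():
        if "lora_" in name:
            p.requires_grad = True
```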

You can explore several configurations for fine-tuning:

1) Freeze everything except the V-L projection layer and the LLM LoRA layers. This approach focuses on adapting the core interaction between vision and language components to your specific segmentation tasks.

2) Train only the L-P projection layer and the mask decoder. This method is particularly useful for refining the model's output and directly improving segmentation performance.

3) Train both the V-L and L-P projection layers. This strategy allows for adjustments in both the initial processing of visual-language information and the final generation of segmentations (a code sketch of these options follows below).
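If it helps, these three options amount to small variations on the same requires_grad toggling (again with hypothetical module names; adapt them to the actual model definition):

```python
def apply_finetune_config(model, option: int) -> None:
    # Start from everything frozen, then re-enable per configuration.
    for p in model.parameters():
        p.requires_grad = False

    if option == 1:    # V-L projection + LLM LoRA layers
        for p in model.vl_projection.parameters():
            p.requires_grad = True
        for name, p in model.llm.named_parameters():
            if "lora_" in name:
                p.requires_grad = True
    elif option == 2:  # L-P projection + mask decoder
        for module in (model.lp_projection, model.mask_decoder):
            for p in module.parameters():
                p.requires_grad = True
    elif option == 3:  # V-L + L-P projection layers
        for module in (model.vl_projection, model.lp_projection):
            for p in module.parameters():
                p.requires_grad = True
```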

Feel free to experiment with these configurations to find the best fit for your tasks. If you have further questions or need assistance in setting up your fine-tuning process, don't hesitate to reach out. Thank you!

Best Regards, Hanoona.

joshmyersdean commented 6 months ago

This is extremely helpful! Thank you so much for taking the time.