
Grounding DINO Fine-tuning 🦖

We have expanded on the original GroundingDINO repository https://github.com/IDEA-Research/GroundingDINO by introducing the capability to fine-tune the model with image-to-text grounding. This capability is essential in applications where textual descriptions must align with regions of an image. For instance, given the caption "a cat on the sofa," the model should be able to localize both the "cat" and the "sofa" in the image.
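For example, here is a minimal inference sketch using the helper functions from the upstream GroundingDINO repo (the config and checkpoint paths are the upstream defaults; the image path is a placeholder):

# Minimal sketch using the upstream groundingdino.util.inference helpers.
from groundingdino.util.inference import load_model, load_image, predict

model = load_model(
    "groundingdino/config/GroundingDINO_SwinT_OGC.py",  # model config
    "weights/groundingdino_swint_ogc.pth",              # pretrained checkpoint
)
image_source, image = load_image("cat_on_sofa.jpg")     # placeholder image

# A single caption grounds both phrases; boxes are returned in
# normalized cxcywh format, one per detected phrase.
boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption="a cat on the sofa",
    box_threshold=0.35,
    text_threshold=0.25,
)
print(phrases, boxes)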


Installation:

See the original repo for installation of the required dependencies; essentially, we need to install the prerequisites

pip install -r requirements.txt

then install this package

pip install -e .

Optional: You might need the following if you have an old GPU or if its architecture is not recognized automatically


pip uninstall groundingdino
nvidia-smi --query-gpu=gpu_name,compute_cap --format=csv
export TORCH_CUDA_ARCH_LIST="6.0;6.1;7.0;7.5;8.0;8.6"  # add your GPU arch from the previous command
export FORCE_CUDA=1
pip install -e .
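
If nvidia-smi does not report the compute capability on your system, PyTorch can query it directly:

# Print the compute capability of GPU 0, e.g. (8, 6) -> "8.6"
import torch

major, minor = torch.cuda.get_device_capability(0)
print(f"{major}.{minor}")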

Dataset

The dataset is a subset of a fashion dataset available on Hugging Face with categories such as bag, shirt, and pants. A random subset of 200 images containing these three categories is selected for training, and another random 50 images containing the same categories are chosen for testing. You can get the sample dataset from GoogleDrive and put it inside multimodal-data to use the data as is.
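The annotation files shipped in multimodal-data are authoritative for the format. Purely as a hypothetical illustration (the field names below are not the repo's actual schema), each training record needs to tie an image to its caption phrases and one box per phrase, roughly like this:

# Hypothetical record layout -- field names are illustrative only;
# see multimodal-data for the real annotation format.
sample = {
    "image": "multimodal-data/images/0001.jpg",
    "caption": "shirt. pants. bag",
    "boxes": [                 # pixel-space [x_min, y_min, x_max, y_max]
        [34, 50, 210, 340],    # shirt
        [40, 330, 205, 620],   # pants
        [220, 280, 330, 420],  # bag
    ],
    "labels": ["shirt", "pants", "bag"],
}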

Train:

  1. Prepare your dataset with images and associated textual captions. A tiny dataset is provided in multimodal-data to demonstrate the expected data format.
  2. Run train.py for training (a simplified sketch of the loop follows this list).
    python train.py
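
Conceptually, train.py runs a standard PyTorch loop that feeds images together with their captions through the model and backpropagates the grounding losses. The sketch below is simplified: train_loader, grounding_loss, and the forward signature are illustrative placeholders; train.py holds the real implementation.

import torch
from groundingdino.util.inference import load_model

model = load_model(
    "groundingdino/config/GroundingDINO_SwinT_OGC.py",
    "weights/groundingdino_swint_ogc.pth",
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=1e-4)

model.train()
for epoch in range(10):                              # illustrative epoch count
    for images, captions, targets in train_loader:   # placeholder dataloader
        outputs = model(images, captions=captions)   # text-conditioned forward (illustrative)
        loss = grounding_loss(outputs, targets)      # box + phrase-alignment losses (placeholder)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")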

Test:

Visualize the results of training on the test images

python test.py
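
A sketch of what such a visualization step looks like, using the upstream annotate helper (the fine-tuned checkpoint path and image path are placeholders):

import cv2
from groundingdino.util.inference import load_model, load_image, predict, annotate

model = load_model(
    "groundingdino/config/GroundingDINO_SwinT_OGC.py",
    "weights/finetuned.pth",                 # placeholder: your fine-tuned weights
)
image_source, image = load_image("multimodal-data/test/0001.jpg")  # placeholder image
boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption="shirt. pants. bag",
    box_threshold=0.35,
    text_threshold=0.25,
)
# annotate draws the boxes and phrase labels and returns a BGR array.
annotated = annotate(image_source=image_source, boxes=boxes, logits=logits, phrases=phrases)
cv2.imwrite("annotated.jpg", annotated)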

Qualitative Results

For the input text "shirt. pants. bag" and the input validation images (see the link above for the train and validation data; the model was trained on only 200 images and tested on 50):

Before Fine-tuning: The model performs as shown on the left below. GT is shown in green and model predictions are shown in red. Interestingly, the model does not perform very badly on this dataset, but its concept of some categories differs from the GT, e.g. "shirt" (see the second and third images).

After Fine-tuning: The model correctly detects all required categories in the first image, along with the correct concept.

Contributing

Feel free to open issues, suggest improvements, or submit pull requests. If you found this repository useful, consider giving it a star to make it more visible to others!

TO DO:

  1. Add model evaluation.
  2. Add LoRA for fine-tuning so the model can retain its original open-vocabulary capacity (a sketch of the idea follows this list).
  3. We did not add the auxiliary losses mentioned in the original paper, since we are only fine-tuning an already trained model; feel free to add auxiliary losses and compare results.
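
For the LoRA item, here is a minimal sketch of the idea in plain PyTorch (the wrapper class and the choice of which layers to adapt are illustrative, not an existing part of this repo): the frozen pretrained weight is augmented with a trainable low-rank update, so the original open-vocabulary weights stay untouched.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank update B @ A (illustrative)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pretrained weights
            p.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen base output plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

Wrapping, say, the attention projection layers with such adapters and training only the lora_* parameters would fine-tune the model while leaving the pretrained weights intact.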