The official implementation of "Learning Domain-Aware Detection Head with Prompt Tuning" (arXiv).
This codebase is based on RegionCLIP.
Put your dataset at './datasets/your_dataset'. Please follow the format of Pascal VOC.
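For example, a standard Pascal VOC-style layout looks like the following (directory and file names are illustrative; the exact names expected may depend on how the dataset is registered in the code):

```
datasets/
  your_dataset/
    Annotations/        # one XML annotation file per image
    ImageSets/
      Main/             # train.txt / test.txt lists of image IDs
    JPEGImages/         # the images themselves
```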
Put your pre-trained VLM model anywhere you like, for example './ckpt', and edit MODEL.WEIGHTS in train_da_pro_c2f.sh accordingly (see the sketch after the next step).
Following RegionCLIP, generate the class embeddings, put them anywhere you like, and set MODEL.CLIP.TEXT_EMB_PATH to that file.
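The exact contents of train_da_pro_c2f.sh are not reproduced here; assuming it follows the usual RegionCLIP/Detectron2 pattern of passing config overrides to a training entry point, the two keys above would be set roughly like this (all paths are placeholders):

```bash
# Hypothetical excerpt of train_da_pro_c2f.sh; only MODEL.WEIGHTS and
# MODEL.CLIP.TEXT_EMB_PATH come from this README, every path is a placeholder.
python3 ./tools/train_net.py \
  --num-gpus 4 \
  --config-file ./configs/your_config.yaml \
  MODEL.WEIGHTS ./ckpt/your_pretrained_vlm.pth \
  MODEL.CLIP.TEXT_EMB_PATH ./ckpt/your_class_embeddings.pth
```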
Training: train_da_pro_c2f.sh. Testing: test_da_pro_c2f.sh.
Training is customizable: you can directly use the parameters of another VLM as the backbone and adjust only the domain-adaptive prompt, or follow the steps of RegionCLIP to customize a backbone on your own dataset and then conduct adaptation.
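Assuming both scripts are meant to be launched from the repository root (check them first for GPU, dataset, and checkpoint settings), a basic run looks like:

```bash
# Train with the domain-adaptive prompt configuration, then evaluate.
bash train_da_pro_c2f.sh
bash test_da_pro_c2f.sh
```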
A sample training recipe:
1) Initialize the model with a pre-trained VLM (such as CLIP or RegionCLIP).
2) Set LEARNABLE_PROMPT.TUNING to False to fine-tune the pre-trained backbone with the domain adversarial loss.
3) Set LEARNABLE_PROMPT.TUNING to True to freeze the backbone and tune a learnable domain-adaptive prompt on the two domains.
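Concretely, steps 2) and 3) differ only in the LEARNABLE_PROMPT.TUNING flag and the checkpoint used for initialization. A minimal sketch is shown below, assuming train_da_pro_c2f.sh forwards extra Detectron2-style config overrides to the training entry point (if it does not, edit the corresponding values inside the script); all paths are placeholders:

```bash
# Step 2: fine-tune the VLM-initialized backbone with the domain adversarial
# loss, prompt tuning disabled (all paths below are placeholders).
bash train_da_pro_c2f.sh \
  MODEL.WEIGHTS ./ckpt/your_pretrained_vlm.pth \
  LEARNABLE_PROMPT.TUNING False \
  OUTPUT_DIR ./output/stage1_backbone

# Step 3: freeze the backbone trained above and tune only the learnable
# domain-adaptive prompt on the source and target domains.
bash train_da_pro_c2f.sh \
  MODEL.WEIGHTS ./output/stage1_backbone/model_final.pth \
  LEARNABLE_PROMPT.TUNING True \
  OUTPUT_DIR ./output/stage2_prompt
```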