adityac94 / conceptqa_vip

Official code to accompany the paper "Bootstrapping Variational Information Pursuit with Large Language and Vision Models for Interpretable Image Classification (ICLR 2024)."

Are pretrained models available? #1

Open manigalati opened 1 month ago

manigalati commented 1 month ago

I would like to know if pretrained Concept-QA and V-IP are available for use without the need to train them from scratch. If they are, could you provide information on where to download them?

Thank you!

adityac94 commented 1 month ago

Thank you for your interest in our work! Here you go: https://drive.google.com/drive/folders/1oqSSnqWXSl7HOTCjYnhfinozuUX5Hld5?usp=share_link

Here is sample code to load the models in PyTorch; change the dataset name and filenames accordingly:

```python
import torch
import train_vip

dataset_name = "places365"
MAX_QUERIES = 2207

# Load the Concept-QA answering network
concept_net = train_vip.get_answering_model(dataset_name=dataset_name, MAX_QUERIES=MAX_QUERIES).cuda()
concept_net.load_state_dict(torch.load(f"saved_models/{dataset_name}_answers_clip_finetuned_depends_no_classifier_epoch_460.pth"))

# Load the V-IP actor and classifier networks
actor, classifier = train_vip.get_vip_networks(dataset_name, mode="random", MAX_QUERIES=MAX_QUERIES)
actor.load_state_dict(torch.load(f"saved_models/model_actor_{dataset_name}_vip_biased_clip_finetuned_adaptive_0.0_611_0.4_cutoff_500_num_queries.pth"))
classifier.load_state_dict(torch.load(f"saved_models/model_classifier_{dataset_name}_vip_biased_clip_finetuned_adaptive_0.0_611_0.4_cutoff_500_num_queries.pth"))
```
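If you are loading the checkpoints on a machine without a GPU, you can pass `map_location="cpu"` to `torch.load` so the CUDA-saved tensors are remapped to CPU. Here is a minimal, self-contained sketch of the pattern using a dummy module (a stand-in for the real networks returned by `train_vip`, which are hypothetical here since the sketch does not depend on the repo):

```python
import os
import tempfile

import torch
import torch.nn as nn

# Dummy stand-in for one of the paper's networks; in practice this
# would be the module returned by train_vip.get_answering_model or
# train_vip.get_vip_networks.
dummy = nn.Linear(4, 2)

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "dummy.pth")
    torch.save(dummy.state_dict(), path)

    # map_location="cpu" remaps GPU-saved tensors onto the CPU,
    # so the checkpoint loads even without CUDA available.
    state = torch.load(path, map_location="cpu")
    dummy.load_state_dict(state)
    dummy.eval()  # switch to inference mode before evaluating
```

Remember to call `.eval()` on each loaded network before running inference so that dropout and batch-norm layers behave deterministically.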

Let me know if you have any questions.