PMC-VQA is a large-scale medical visual question-answering dataset containing 227k VQA pairs on 149k images, covering a variety of modalities and diseases.
About the pre-trained models of VQA-Rad and Slake #13
Thanks for your marvelous work! Could you please consider releasing the pre-trained models for VQA-RAD and Slake? I encountered an issue in './src/MedVInT_TD/test_VQA_RAD.py': I couldn't find the released model './Results/QA_no_pretrain_no_aug/VQA_RAD/checkpoint-16128' specified on line 21. Your assistance with this would be greatly appreciated.
Or could you please share the hyperparameters that were used to fine-tune the pretrained checkpoint to achieve the SOTA results on VQA-RAD and SLAKE reported in the paper?
Thanks and best regards!