PMC-VQA is a large-scale medical visual question-answering dataset containing 227k VQA pairs over 149k images that cover various modalities and diseases.
The checkpoint of the blank MedVInT model cannot be loaded #10
Thanks for your excellent work!
I wanted to replicate the results of your paper on the VQA-RAD dataset. When I ran train_downstream.py and tried to load VQA_lora_PMC_LLaMA_PMCCLIP/blank/checkpoint-1382/pytorch_model.bin, an error occurred while loading the LLaMA model parameters: the parameter key-value pairs did not match.
I downloaded the PMC-LLaMA model from https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B and set up the corresponding loading paths. What's the solution?
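For what it's worth, a common cause of this kind of state-dict key mismatch is a wrapper prefix on the checkpoint keys (e.g. `module.` from DataParallel, or a PEFT/LoRA wrapper prefix). Below is a minimal sketch of how one might diagnose and work around it; the toy `nn.Linear` model and the `module.` prefix are illustrative assumptions, not the actual MedVInT layout:

```python
import torch
import torch.nn as nn

# Toy model standing in for the real network; the actual MedVInT modules
# would be used in practice. This is an illustrative sketch only.
model = nn.Linear(4, 2)

# Simulate a checkpoint whose keys carry an (assumed) "module." prefix,
# as produced e.g. by nn.DataParallel when saving state_dict().
ckpt = {"module." + k: v for k, v in model.state_dict().items()}

# Strip the prefix so the keys match the bare model's state_dict.
cleaned = {k.removeprefix("module."): v for k, v in ckpt.items()}

# strict=False reports mismatches instead of raising, which is useful
# for seeing exactly which keys are missing or unexpected.
result = model.load_state_dict(cleaned, strict=False)
print("missing:", result.missing_keys)
print("unexpected:", result.unexpected_keys)
```

Printing a few keys from the checkpoint (`list(torch.load(path, map_location="cpu"))[:10]`) next to `list(model.state_dict())[:10]` usually makes the prefix difference obvious at a glance.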
With best wishes