PMC-VQA is a large-scale medical visual question-answering dataset containing 227k VQA pairs over 149k images, covering a wide range of modalities and diseases.
About the pretrained model used in the building process of PMC-VQA #6
Thank you for your marvelous work! Would you be able to release more information about the building process of the PMC-VQA dataset in the future, such as the LLaMA-7B model trained on text data and fine-tuned with the 2k manually annotated question-answer pairs? Thank you very much; any reply would be appreciated.