cambridgeltl / visual-med-alpaca

Visual Med-Alpaca is an open-source, multi-modal foundation model designed specifically for the biomedical domain, built on LLaMA-7B.
https://cambridgeltl.github.io/visual-med-alpaca/
Apache License 2.0

Are there more details about the experiment results? #4

Open · ifshine opened this issue 1 year ago

ifshine commented 1 year ago

Dear authors: Your work is great. But I wonder if you could list more detailed results (for Visual Med-Alpaca and the VQA Medical Model) on several related benchmarks (e.g., VQA-RAD, PathVQA)?
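
For concreteness, here is a minimal sketch of the kind of exact-match accuracy evaluation commonly reported on VQA-RAD and PathVQA. This is not the repository's actual evaluation code; the file name `vqa_rad_test.json`, the JSON layout, and the model call `answer_fn` are all assumptions for illustration only.

```python
# Hypothetical sketch of exact-match accuracy scoring for a medical VQA
# benchmark. The dataset path, JSON schema, and model-inference function
# are placeholders, not Visual Med-Alpaca's actual pipeline.
import json


def normalize(text: str) -> str:
    """Lowercase and trim whitespace/trailing periods before comparison."""
    return text.lower().strip().rstrip(".")


def evaluate(samples, answer_fn):
    """Return exact-match accuracy over (image, question, answer) samples."""
    correct = 0
    for s in samples:
        # answer_fn stands in for a real model-inference call.
        pred = answer_fn(s["image_path"], s["question"])
        if normalize(pred) == normalize(s["answer"]):
            correct += 1
    return correct / len(samples) if samples else 0.0


if __name__ == "__main__":
    # Assumed layout: [{"image_path": ..., "question": ..., "answer": ...}, ...]
    with open("vqa_rad_test.json") as f:
        samples = json.load(f)

    # Stub model: replace with real inference on the model under test.
    def dummy_model(image_path: str, question: str) -> str:
        return "yes"

    print(f"VQA-RAD exact-match accuracy: {evaluate(samples, dummy_model):.3f}")
```

Published VQA-RAD numbers usually split accuracy into closed-ended (yes/no) and open-ended subsets, so a per-benchmark table with both columns would be especially helpful.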