cambridgeltl / visual-med-alpaca

Visual Med-Alpaca is an open-source, multi-modal foundation model designed specifically for the biomedical domain, built on the LLaMa-7B architecture.
https://cambridgeltl.github.io/visual-med-alpaca/
Apache License 2.0

Not very clear on the classifier #1

dashesy opened this issue 1 year ago (status: Open)

dashesy commented 1 year ago

What classifier is used?