cambridgeltl / visual-med-alpaca

Visual Med-Alpaca is an open-source, multi-modal foundation model designed specifically for the biomedical domain, built on LLaMA-7B.
https://cambridgeltl.github.io/visual-med-alpaca/
Apache License 2.0
370 stars · 42 forks

Request for access to the model #6

Status: Open · opened by PARSA-MHMDI 8 months ago

PARSA-MHMDI commented 8 months ago

Dear authors, I have filled out the required form to request access to the Visual Med-Alpaca model, but I have not yet received any communication or instructions for accessing it. I need the model for an academic paper I am working on. Could you please grant me access?

Thanks.