tatsu-lab / stanford_alpaca

Code and documentation to train Stanford's Alpaca models, and generate the data.
https://crfm.stanford.edu/2023/03/13/alpaca.html
Apache License 2.0
29.5k stars · 4.05k forks

ChatDoctor: A Medical Chat Model Fine-tuned on LLaMA Model #122

Open Kent0n-Li opened 1 year ago

Kent0n-Li commented 1 year ago

We have released "ChatDoctor: A Medical Chat Model Fine-tuned on LLaMA Model using Medical Domain Knowledge". Please feel free to try our code and training data. https://github.com/Kent0n-Li/ChatDoctor

Kent0n-Li commented 1 year ago

In addition, we provide model weights.

atregret commented 1 year ago

Hi! May I ask how much computing power is needed to fine-tune this model? And what configuration is required for inference? Thanks! I have one V100; can I fine-tune my own model with it?

finom commented 1 year ago

Hey guys! It would be nice if somebody collected such models in a single place. I think a simple .md file in this repository would be enough. I really like the possibilities that Alpaca opens up, and I want to follow what happens around it.

Dimarond commented 1 year ago

Congratulations on this awesome project! Do healthcare professionals from individual specialties review the quality of the generated answers? And how could a medic contribute to the project?

YannDubs commented 1 year ago

> Hey guys! Would be nice if somebody would collect such models in a single place. I think a simple .md file at this repository would be enough. I really like the possibilities that Alpaca gives and I want to know what happens around it.

would love that, if you make an Alpaca "awesome-list" repo, we'd be happy to link it on our README

Kent0n-Li commented 1 year ago

Sure
