Open · Kent0n-Li opened this issue 1 year ago
In addition, we provide model weights.
Hi! May I ask how much computing power is needed to fine-tune this model, and what configuration is required for inference? Thanks! I have one V100; can I fine-tune my own model with it?
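(For context, not an official answer: full fine-tuning of a 7B LLaMA model is usually out of reach for a single 16-32 GB V100, but parameter-efficient methods such as LoRA can fit. The sketch below assumes the Hugging Face `transformers` and `peft` libraries and a locally converted LLaMA checkpoint; the path and hyperparameters are placeholders, not the ChatDoctor training setup.)

```python
# Sketch: parameter-efficient fine-tuning of a LLaMA-style model on one GPU.
# Assumes the Hugging Face `transformers` and `peft` packages and a locally
# converted LLaMA checkpoint; the path and hyperparameters are illustrative
# and are not the ChatDoctor authors' training configuration.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = "path/to/converted-llama-7b"  # hypothetical local checkpoint path
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,  # fp16 weights: roughly 13-14 GB for 7B params
    device_map="auto",
)
model.gradient_checkpointing_enable()  # trade compute for activation memory

# LoRA trains a small set of adapter weights instead of all 7B parameters,
# which is what makes a single 16-32 GB V100 plausible for fine-tuning.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```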
Hey guys! Would be nice if somebody would collect such models in a single place. I think a simple .md file at this repository would be enough. I really like the possibilities that Alpaca gives and I want to know what happens around it.
Congratulations on this awesome project! Do healthcare professionals from individual specialties review the quality of the generated answers? And how could a medic contribute to the project?
> Hey guys! Would be nice if somebody would collect such models in a single place. I think a simple .md file at this repository would be enough. I really like the possibilities that Alpaca gives and I want to know what happens around it.
Would love that. If you make an Alpaca "awesome-list" repo, we'd be happy to link it in our README.
Sure
We have released "ChatDoctor: A Medical Chat Model Fine-tuned on LLaMA Model using Medical Domain Knowledge". Please feel free to try our code and training data. https://github.com/Kent0n-Li/ChatDoctor
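If the released weights load through the standard Hugging Face `transformers` API (an assumption; check the repo's README for the exact loading code and prompt format), trying the model for inference can be sketched roughly as follows. The model directory and the "Patient/Doctor" prompt template below are placeholders.

```python
# Sketch: sampled generation from a LLaMA-style chat checkpoint.
# Assumes `transformers` and a locally available fine-tuned model directory;
# "path/to/chatdoctor-weights" and the prompt template are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "path/to/chatdoctor-weights"  # hypothetical path to released weights
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    torch_dtype=torch.float16,  # half precision keeps a 7B model around 14 GB
    device_map="auto",
)

prompt = "Patient: I have had a sore throat and mild fever for two days. What should I do?\nDoctor:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```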