meta-llama / llama

Inference code for Llama models

Guidance on releasing the fine-tuned LLaMA model weights #226

Open binmakeswell opened 1 year ago

binmakeswell commented 1 year ago

Thank you for your outstanding contribution to LLaMA!

Colossal-AI provides optimized, open-source, low-cost, and high-performance solutions for large models, such as replicating a ChatGPT-like training process.

Recently, the Stanford Alpaca team shared an interesting model fine-tuned from LLaMA 7B, and stated that they had reached out to Meta for guidance on releasing the Alpaca model weights.

We would appreciate knowing the detailed guidance or requirements for sharing fine-tuned LLaMA model weights, so as to benefit the open-source community in a non-commercial way.

Thank you very much.

alexconstant9108 commented 1 year ago

@binmakeswell you could just do it the same way the Stanford team did with Alpaca (as mentioned in the linked page above). You could even copy the same disclaimer and reasoning.