Thank you for your outstanding contribution to LLaMA!
Colossal-AI provides optimized, open-source, low-cost, and high-performance solutions for large models, such as replicating a ChatGPT-like training process.
Recently, Alpaca shared an interesting model fine-tuned from LLaMA 7B, and said they had reached out to Meta for guidance on releasing the Alpaca model weights.
We would appreciate knowing the detailed guidance or requirements for sharing fine-tuned LLaMA model weights, so we can benefit the open-source community in a non-commercial way. Thank you very much.
@binmakeswell you could just do it in the same way the Stanford team did it with Alpaca (as mentioned in the linked page above). You could even copy the exact same disclaimer and reasons.