Scripts for fine-tuning Meta Llama 3 with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default & custom datasets for applications such as summarization and Q&A. Supports a number of candidate inference solutions, such as HF TGI and vLLM, for local or cloud deployment. Includes demo apps showcasing Meta Llama 3 for WhatsApp & Messenger.
What does this PR do?
Updated the fine-tuning README for Meta Llama 3: replaced "Llama 2" with "Meta Llama 3" and "7B" with "8B" in the fine-tuning related docs.