Open xorsuyash opened 2 months ago
cc @GautamR-Samagra cc @ChakshuGautam
Further progress on this issue:
Fine-tuned two models, mixtral-base and mixtral-instruct, on the RAFT data format.
Performed a comparison between RAG+gpt3.5, RAG+finetuned_base, and RAG+finetuned_instruct.
The fine-tuned mixtral base and instruct models perform comparably to gpt3.5 on some metrics, such as answer similarity, and outperform it on others, such as answer relevancy.
The instruct fine-tuned model's answers show better control than those of the base fine-tuned model.
References and possible approaches for building RAG2.0
cc @TakshPanchal cc @GautamR-Samagra cc @ChakshuGautam
Description
RAG2.0 refers to fine-tuning and optimizing the LLM together with the retriever, end-to-end, for better RAG.
Dataset
The dataset requires a question, an answer, and the context from which the question is answered, called the oracle_context, along with distractors (chunks randomly sampled from the corpus). The oracle_context and distractors are randomly interleaved, and the question is appended at the end; these documents plus the question constitute the instruction for the LLM.
Parameters of the dataset
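The construction described above can be sketched in plain Python; the helper name, document labels, and prompt layout are assumptions for illustration, not the exact format used here:

```python
import random

def build_raft_example(question, answer, oracle_context, distractors, seed=0):
    """Build one RAFT-style training example (hypothetical helper).

    The oracle context and distractor chunks are shuffled together so the
    model must locate the relevant chunk, and the question is appended last.
    """
    rng = random.Random(seed)
    documents = [oracle_context] + list(distractors)
    rng.shuffle(documents)  # randomly interleave oracle and distractors
    context_block = "\n\n".join(
        f"Document {i + 1}:\n{doc}" for i, doc in enumerate(documents)
    )
    prompt = f"{context_block}\n\nQuestion: {question}"
    return {"instruction": prompt, "output": answer}
```

The returned `instruction`/`output` pair is then what the LLM is fine-tuned on.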
Fine-tuning overview
Mixtral 7B was used as the base model for fine-tuning, with LoRA plus 4-bit quantization as the fine-tuning technique. Initially, data containing only question-answer pairs was used to fine-tune Mixtral 7B for around 2000 epochs, which showed a significant decrease in training loss and eval loss. The model was then further fine-tuned on data containing context + question + answer for around 200 epochs.![Screenshot from 2024-03-30 23-19-14](https://github.com/Samagra-Development/ai-tools/assets/98162846/849b7769-9d17-41a2-8b99-30c87d899d8b)
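A minimal sketch of a 4-bit LoRA setup like the one described, using the Hugging Face `transformers`/`peft`/`bitsandbytes` stack; the model id, LoRA rank, alpha, and target modules are assumptions, not the exact values used in this run:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization (assumed settings)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",  # assumed model id
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapter; rank/alpha/target modules are illustrative
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only adapter weights are trainable
```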
Performance comparison
Inference on the fine-tuned model and the base model was done using 250 samples randomly drawn from the test set, and the outputs were then quantitatively evaluated using metrics from the RAGAS library and the Samagra llm_evaluator. The metrics include:
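An evaluation along these lines can be run with RAGAS roughly as follows; the toy data, column names, and metric subset are assumptions based on the RAGAS API, not the exact script used here (a real run would use the 250 test samples and an LLM backend configured via API key):

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_relevancy, answer_similarity

# Toy single-row evaluation set for illustration only
samples = Dataset.from_dict({
    "question": ["What is the oracle context in RAFT?"],
    "answer": ["The chunk from which the question is answered."],
    "contexts": [["RAFT interleaves the oracle context with distractor chunks."]],
    "ground_truth": ["The context chunk containing the answer."],
})

result = evaluate(samples, metrics=[answer_relevancy, answer_similarity])
print(result)  # aggregate score per metric
```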
Future Plans for Improvement:
The initial data used p_value = 1.0; further iterations with different p_values may yield a better fine-tuned model, and a lower p_value also reduces overfitting. Chain-of-thought answers will also be used instead of plain answers for fine-tuning, which can lead to better fine-tuned models, along with a comparison among:
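In RAFT, p_value is the fraction of training examples whose context actually contains the oracle chunk. A pure-Python sketch of how it could be varied (hypothetical helper, illustrative defaults):

```python
import random

def sample_context(oracle_context, distractor_pool, p_value, k=3, rng=None):
    """Return the context chunks for one training example.

    With probability p_value the oracle chunk is included among the
    distractors; otherwise the context is distractors only, which forces
    the model to answer from its weights rather than the retrieved text.
    """
    rng = rng or random.Random()
    distractors = rng.sample(distractor_pool, k)
    if rng.random() < p_value:
        docs = distractors + [oracle_context]
    else:
        docs = distractors
    rng.shuffle(docs)
    return docs
```

With p_value = 1.0 (the initial setting) every example contains the oracle chunk; lowering it mixes in oracle-free examples.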
References
RAFT research paper: https://github.com/ShishirPatil/gorilla/blob/gh-pages/assets/RAFT.pdf