Closed — JINO-ROHIT closed this issue 11 months ago
Yes, we support local LLMs :)
For example, you can use the Llama 2 model by renaming ./multiagents/agent_conf/config_diag_llama.yaml to ./multiagents/agent_conf/config.yaml. See https://github.com/TsinghuaDatabaseGroup/DB-GPT/tree/main/diagllama#-quickstart
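The rename step can be sketched as a simple copy. The demo below runs in a temporary sandbox so it is self-contained; in practice you would run the `cp` against the real paths in your own DB-GPT checkout (paths taken from the quickstart link above, which may change between versions):

```shell
# Sandbox demo of switching the active config (illustrative paths only;
# point the cp at your actual DB-GPT checkout instead of /tmp).
mkdir -p /tmp/dbgpt_demo/multiagents/agent_conf
printf 'llm: llama2\n' > /tmp/dbgpt_demo/multiagents/agent_conf/config_diag_llama.yaml

# Copy (rather than move) so the original llama config is preserved,
# and the default config.yaml name now points at the llama settings.
cp /tmp/dbgpt_demo/multiagents/agent_conf/config_diag_llama.yaml \
   /tmp/dbgpt_demo/multiagents/agent_conf/config.yaml
ls /tmp/dbgpt_demo/multiagents/agent_conf
```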
That is amazing! Is there also a sample of the data used for finetuning the LLM, and a description of how it was done? I was not able to find it anywhere. Thanks so much!
The finetuning data is still under preparation and not publicly available.
We will release it when the quality is good enough.
Thanks! I hope you release it soon :)
Can I also ask how the finetuning was done? Is the process described somewhere?
During diagnosis with GPT-4, we actually decompose the complex diagnosis task into tens of simpler sub-tasks. We then collect GPT-4's responses to those sub-tasks and fine-tune local LLMs on them via supervised learning. We'll try to optimize our fine-tuning procedure in the future.
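The collection step described above could look something like the following sketch: pair each sub-task instruction with the GPT-4 response it produced, and emit instruction/output records suitable for supervised fine-tuning. Everything here is illustrative — the sub-task prompts, the record schema, and the `collect_sft_records` helper are assumptions, not DB-GPT's actual code or data format:

```python
import json

def collect_sft_records(subtask_responses):
    """Turn (sub-task instruction, GPT-4 response) pairs into
    instruction-tuning records (hypothetical schema, not the project's)."""
    records = []
    for instruction, response in subtask_responses:
        records.append({
            "instruction": instruction,  # the simpler sub-task prompt
            "input": "",                 # no extra context in this sketch
            "output": response,          # GPT-4's answer, used as the label
        })
    return records

# Hypothetical examples of decomposed diagnosis sub-tasks.
pairs = [
    ("Summarize the anomaly in these pg_stat_activity metrics: ...",
     "The workload shows a spike in active sessions ..."),
    ("Given slow-query logs, list the top suspected root causes: ...",
     "1. Missing index on orders.user_id ..."),
]

records = collect_sft_records(pairs)
print(json.dumps(records[0], indent=2))
```

A local LLM would then be fine-tuned on such records with a standard supervised (instruction-tuning) trainer.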
Nice, and this is instruction-based supervised fine-tuning, correct?
Also, how can I contribute to the project?
We will be thrilled if you can contribute to the project together!
Of course, the first step is to get the project running successfully on your computer. Next, if you find any problems or missing functionality, you can inform us or directly submit a GitHub PR. We are open to contributions in both academic research and real-world applications.
We have released the training data :) https://github.com/TsinghuaDatabaseGroup/DB-GPT/tree/main/diagllama/training_data
Does this work only with GPT-4? Is there support for local models like Mistral?
Please help, thanks!