The system supports running local LLMs via LangChain HuggingFace pipelines.
We didn't test this in the paper; we definitely intend to expand the paper and provide more experiments.
How to use local LLMs?
You should make the following change in the relevant config file:
llm:
    type: 'HuggingFacePipeline'
    name: <The name of the model>
    max_new_tokens: <max tokens>
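For reference, here is a minimal sketch of what such a config might correspond to under the hood, assuming it maps onto LangChain's HuggingFacePipeline wrapper; the model id and token count below are illustrative placeholders, not project defaults:

    # A minimal sketch, assuming the config maps to LangChain's HuggingFacePipeline.
    # The import path may vary with your LangChain version
    # (older versions use `from langchain.llms import HuggingFacePipeline`).
    from langchain_community.llms import HuggingFacePipeline

    llm = HuggingFacePipeline.from_model_id(
        model_id="mistralai/Mistral-7B-Instruct-v0.2",  # placeholder: any local HF model
        task="text-generation",
        pipeline_kwargs={"max_new_tokens": 512},  # corresponds to max_new_tokens above
    )

    print(llm.invoke("Classify the sentiment of: 'I loved this movie.'"))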
If you also want the optimizer to run locally (strongly not recommended), then you should change the meta prompts folder to:
meta_prompts:
    folder: 'prompts/meta_prompts_completion'
Thanks for the reply, I will try to see if it works.
Hi, do you have any updates regarding performance with open-source LLMs?
Can this project use open-source LLMs, such as XComposer or LLaMA? Did you test these LLMs in the paper?