openlm-research / open_llama

OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
Apache License 2.0

FastChat support for Open_llama_3b_v2 inference - help sought #100

Open RDouglasSharp opened 3 months ago

I use FastChat as the framework for both training and dialog-based inference, and FastChat supports Meta's LLaMA. I was excited to try the 3B OpenLLaMA model, and the FastChat finetuning scripts all work perfectly with open_llama_3b_v2. Oddly, though, the FastChat inference framework works neither with my finetuned model nor with the original one. Has anyone figured out how to get `fastchat.serve.cli` to support openlm-research models?
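For reference, a sketch of the kind of invocation I mean (the exact model path is an assumption; `--model-path` accepts either a local checkpoint directory or a Hugging Face repo id):

```shell
# Sketch of the failing inference command, assuming the model was pulled
# from the openlm-research Hugging Face repo; substitute the path to a
# local finetuned checkpoint as needed.
python3 -m fastchat.serve.cli --model-path openlm-research/open_llama_3b_v2
```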