InternLM / xtuner

An efficient, flexible and full-featured toolkit for fine-tuning LLM (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
https://xtuner.readthedocs.io/zh-cn/latest/
Apache License 2.0

llava-surgery.py for phi3_mini to get gguf #659

Open rezacopol opened 4 months ago

rezacopol commented 4 months ago

I followed the instructions and got ./iter_39620_hf and ./iter_39620_llava. I tried to convert them to gguf using the instructions here, but ran into the issues I've seen here.

As I see it, the gguf files are released on HF. Is there a plan to put the script you used on GitHub?

Thanks.

pppppM commented 4 months ago

Converting phi3 to gguf can be tricky: the LLM part must first be transformed into Llama format. The gguf files on the HF hub were produced with temporary hard-coded scripts. We are currently organizing that section of the code, aiming to simplify the process of converting phi3 to gguf.
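For readers wondering what "transforming the llm part into llama format" involves: Phi-3-mini fuses its attention projections into a single `qkv_proj` tensor and its MLP input projections into `gate_up_proj`, whereas Llama checkpoints keep `q_proj`/`k_proj`/`v_proj` and `gate_proj`/`up_proj` separate. Below is a minimal sketch of that tensor remapping, not xtuner's actual script; tensors are represented as toy nested lists rather than real weights, and it assumes equal q/k/v sizes (no GQA asymmetry), which holds for Phi-3-mini.

```python
# Hedged sketch: remap fused Phi-3 tensor names to separate
# Llama-style tensors. In practice you would load a real
# PyTorch/safetensors state dict; toy lists stand in for weights.

def split_rows(tensor, parts):
    """Split a 2-D tensor (list of rows) into equal chunks along dim 0."""
    n = len(tensor)
    assert n % parts == 0, "row count must divide evenly"
    step = n // parts
    return [tensor[i * step:(i + 1) * step] for i in range(parts)]

def phi3_to_llama_state_dict(state_dict):
    """Rename and split Phi-3's fused tensors into Llama's layout."""
    out = {}
    for name, tensor in state_dict.items():
        if name.endswith("self_attn.qkv_proj.weight"):
            base = name.rsplit("qkv_proj", 1)[0]
            # Assumes q/k/v blocks are equal-sized (true for Phi-3-mini).
            q, k, v = split_rows(tensor, 3)
            out[base + "q_proj.weight"] = q
            out[base + "k_proj.weight"] = k
            out[base + "v_proj.weight"] = v
        elif name.endswith("mlp.gate_up_proj.weight"):
            base = name.rsplit("gate_up_proj", 1)[0]
            gate, up = split_rows(tensor, 2)
            out[base + "gate_proj.weight"] = gate
            out[base + "up_proj.weight"] = up
        else:
            out[name] = tensor  # all other tensors pass through unchanged
    return out

# Toy example: 6 rows of fused qkv become 2 rows each of q, k, v.
toy = {"model.layers.0.self_attn.qkv_proj.weight": [[i] for i in range(6)]}
converted = phi3_to_llama_state_dict(toy)
```

After renaming, the resulting state dict (plus a Llama-style config) can be fed to llama.cpp's standard conversion script, which only understands the unfused Llama layout.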