ollama / ollama-python

Ollama Python library
https://ollama.com
MIT License

Clone the HuggingFace repository (optional) llm/llama.cpp/convert.py can't find this file. #96

Open tigerzhanglaihu opened 6 months ago

tigerzhanglaihu commented 6 months ago

I want to use the model glmchat-6b, and I read this in the docs:

Clone the HuggingFace repository (optional)

If the model is currently hosted in a HuggingFace repository, first clone that repository to download the raw model.

Install Git LFS, verify it's installed, and then clone the model's repository:

```shell
git lfs install
git clone https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1 model
```

Convert the model

Note: some model architectures require using specific convert scripts. For example, Qwen models require running convert-hf-to-gguf.py instead of convert.py.

```shell
python llm/llama.cpp/convert.py ./model --outtype f16 --outfile converted.bin
```

Quantize the model

```shell
llm/llama.cpp/quantize converted.bin quantized.bin q4_0
```

But I can't find this convert.py file.
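Since the docs quoted above mention that different model architectures use different conversion scripts (convert.py vs. convert-hf-to-gguf.py), one way to diagnose this is to check which script names actually exist in the local llama.cpp checkout. Below is a minimal, hypothetical helper for that; the candidate script names are assumptions taken from the steps above, and the `llm/llama.cpp` path is the one the docs use:

```shell
#!/bin/sh
# find_convert_script: print the path of the first conversion script that
# exists under the given llama.cpp checkout directory; return non-zero if
# none of the candidate names is present. (Candidate names are assumptions
# based on the scripts mentioned in the docs quoted above.)
find_convert_script() {
  dir="$1"
  for name in convert.py convert-hf-to-gguf.py; do
    if [ -f "$dir/$name" ]; then
      printf '%s\n' "$dir/$name"
      return 0
    fi
  done
  return 1
}

# Example usage (path as given in the docs):
# find_convert_script llm/llama.cpp || echo "no conversion script found"
```

If this prints nothing for `llm/llama.cpp`, the checkout may be missing, incomplete, or a version that has renamed its conversion scripts, which would explain the error above.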