microsoft / BitNet

Official inference framework for 1-bit LLMs
MIT License

Converting HF model to GGUF format #28

Closed NISAMLC closed 3 hours ago

NISAMLC commented 3 hours ago

While I am trying to convert the HF model to GGUF format during the quantization step, I am facing this error:

```
python setup_env.py -md models/Llama3-8B-1.58-100B-tokens -q i2_s
INFO:root:Compiling the code using CMake.
INFO:root:Loading model from directory models/Llama3-8B-1.58-100B-tokens.
INFO:root:Converting HF model to GGUF format...
ERROR:root:Error occurred while running command: Command '['C:\Users\nisam\anaconda3\envs\bitnet-cpp\python.exe', 'utils/convert-hf-to-gguf-bitnet.py', 'models/Llama3-8B-1.58-100B-tokens', '--outtype', 'f32']' returned non-zero exit status 3221225477., check details in logs\convert_to_f32_gguf.log
```
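As a side note on reading the error: the exit status `3221225477` is a Windows NTSTATUS value in decimal. A quick sketch of decoding it (plain Python, no BitNet code involved) shows it is `0xC0000005`, which Windows uses for STATUS_ACCESS_VIOLATION, i.e. the conversion subprocess crashed (the Windows equivalent of a segfault) rather than exiting with a normal Python error:

```python
# Decode the subprocess exit status reported by setup_env.py.
# 3221225477 decimal == 0xC0000005, the Windows NTSTATUS code
# STATUS_ACCESS_VIOLATION (a hard crash, akin to a segfault on Linux).
exit_status = 3221225477
print(hex(exit_status))  # -> 0xc0000005
```

This usually points to a native-level problem (e.g. running out of memory or an incompatible binary dependency) rather than a mistake in the command line, so the traceback in `logs\convert_to_f32_gguf.log` may be short or absent.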