While trying to convert an HF model to GGUF during the quantization step, I am hitting this error:
python setup_env.py -md models/Llama3-8B-1.58-100B-tokens -q i2_s
INFO:root:Compiling the code using CMake.
INFO:root:Loading model from directory models/Llama3-8B-1.58-100B-tokens.
INFO:root:Converting HF model to GGUF format...
ERROR:root:Error occurred while running command: Command '['C:\Users\nisam\anaconda3\envs\bitnet-cpp\python.exe', 'utils/convert-hf-to-gguf-bitnet.py', 'models/Llama3-8B-1.58-100B-tokens', '--outtype', 'f32']' returned non-zero exit status 3221225477., check details in logs\convert_to_f32_gguf.log
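For what it's worth, the exit status 3221225477 is the decimal form of the Windows NTSTATUS code 0xC0000005 (STATUS_ACCESS_VIOLATION), meaning the child Python process crashed rather than exiting with a normal error. A quick sanity check of that conversion:

```python
# Decode the Windows exit status reported in the log.
# 3221225477 is the decimal form of NTSTATUS 0xC0000005
# (STATUS_ACCESS_VIOLATION), i.e. the converter process crashed.
exit_status = 3221225477
print(hex(exit_status))  # 0xc0000005
```

So the real cause (often an out-of-memory condition or an incompatible native library during conversion) should be in logs\convert_to_f32_gguf.log as the error message says.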