microsoft / BitNet

Official inference framework for 1-bit LLMs

returned non-zero exit status 1 #25

Open · GamerLegion opened this issue 5 hours ago

GamerLegion commented 5 hours ago

My OS is Windows. When I manually download the model and run with the local path:

huggingface-cli download HF1BitLLM/Llama3-8B-1.58-100B-tokens --local-dir models/Llama3-8B-1.58-100B-tokens

python setup_env.py -md models/Llama3-8B-1.58-100B-tokens -q i2_s

PowerShell shows me:

ERROR:root:Error occurred while running command: Command '['D:\developTools\Anaconda\python.exe', 'utils/convert-hf-to-gguf-bitnet.py', 'models/Llama3-8B-1.58-100B-tokens', '--outtype', 'f32']' returned non-zero exit status 1., check details in logs\convert_to_f32_gguf.log

(base) PS D:\AI\bitnet\BitNet>

I opened convert_to_f32_gguf.log; it shows:

Traceback (most recent call last):
  File "D:\AI\bitnet\BitNet\utils\convert-hf-to-gguf-bitnet.py", line 20, in <module>
    import torch
ModuleNotFoundError: No module named 'torch'

GamerLegion commented 5 hours ago

My laptop:
CPU: AMD Ryzen 9 5900HX
OS: Windows 11
Development tools: Visual Studio Community 2022 17.11.5

Dead-Bytes commented 2 hours ago

I had the same error at gguf.GGMLQuantizationType.TL1: TL1 is not getting imported. Does GGMLQuantizationType have it?

NISAMLC commented 1 hour ago

Facing the same issue:

INFO:root:Converting HF model to GGUF format...
ERROR:root:Error occurred while running command: Command '['C:\Users\user\anaconda3\envs\bitnet-cpp\python.exe', 'utils/convert-hf-to-gguf-bitnet.py', 'models/Llama3-8B-1.58-100B-tokens', '--outtype', 'f32']' returned non-zero exit status 3221225477., check details in logs\convert_to_f32_gguf.log

The log file is not showing any error, and when I run convert-hf-to-gguf-bitnet.py on its own it executes without any error, but it still fails during the quantization step.
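One hint for debugging this: 3221225477 is 0xC0000005, Windows' STATUS_ACCESS_VIOLATION. The child python.exe crashed outright rather than raising a Python exception, which is why nothing was written to the log. A quick check of that decoding:

```python
# Decode the exit status from the error message above.
status = 3221225477
print(hex(status))  # -> 0xc0000005, STATUS_ACCESS_VIOLATION (hard crash, no traceback)
```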

Dead-Bytes commented 1 hour ago

I commented out those lines and it's working fine now. I guess gguf.GGMLQuantizationType does not have TL1/TL2 right now.
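For reference, a less destructive variant of that workaround, as a sketch only (the guard below is hypothetical, not code from the repo): fail fast with a clear message instead of commenting out the TL1/TL2 lookups.

```python
import gguf

# Hypothetical guard for the TL1/TL2 lookups that convert-hf-to-gguf-bitnet.py
# hard-codes; it turns the confusing failure into an actionable message.
TL1 = getattr(gguf.GGMLQuantizationType, "TL1", None)
TL2 = getattr(gguf.GGMLQuantizationType, "TL2", None)
if TL1 is None or TL2 is None:
    raise SystemExit(
        "this gguf package predates TL1/TL2; check that the gguf copy the "
        "script expects is on sys.path ahead of any pip-installed gguf"
    )
```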

NISAMLC commented 1 hour ago

I think TL1/TL2 are there: ['BF16', 'F16', 'F32', 'F64', 'I16', 'I32', 'I64', 'I8', 'IQ1_M', 'IQ1_S', 'IQ2_S', 'IQ2_XS', 'IQ2_XXS', 'IQ3_S', 'IQ3_XXS', 'IQ4_NL', 'IQ4_XS', 'Q2_K', 'Q3_K', 'Q4_0', 'Q4_0_4_4', 'Q4_0_4_8', 'Q4_0_8_8', 'Q4_1', 'Q4_K', 'Q5_0', 'Q5_1', 'Q5_K', 'Q6_K', 'Q8_0', 'Q8_1', 'Q8_K', 'TL1', 'TL2', 'TQ1_0', 'TQ2_0']
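Both observations can be true at once if the two environments import different gguf packages: the stock PyPI gguf has no TL1/TL2, while the TL-aware convert script presumably relies on a patched copy that does (an assumption here, not verified). Printing the module path shows which copy Python actually picked up:

```python
import gguf

print(gguf.__file__)  # path reveals whether a pip-installed gguf or a patched copy is loaded
print([m.name for m in gguf.GGMLQuantizationType if m.name.startswith("TL")])
# [] -> this gguf predates TL1/TL2; ['TL1', 'TL2'] -> the patched package is in use
```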