Open hrmgxuni opened 8 months ago
I had this on my local Windows system and it was due to having the CPU version of PyTorch, where a dependency of Accelerate requires the GPU version.
`import torch
print(torch.__version__)`
This should print something like 2.2.1+cu118; if it says +cpu, you have the CPU-only version.
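For anyone scripting this check, the suffix test can be wrapped in a tiny helper (just a sketch; `is_cpu_build` is a hypothetical name, and in practice you would pass in `torch.__version__`):

```python
def is_cpu_build(version: str) -> bool:
    """True if a torch version string like '2.2.1+cpu' denotes a CPU-only build."""
    return version.endswith("+cpu")

print(is_cpu_build("2.2.1+cu118"))  # False -> CUDA build
print(is_cpu_build("2.2.1+cpu"))    # True  -> CPU-only build
```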
@DaveChini I'm having the same issue. But when I print it, mine also ends in +cu: 2.1.2+cu121. Still getting the same error; any help?
Python 3.10, transformers 4.38.2, bitsandbytes 0.42.0, accelerate 0.27.2, torch 2.0.1+cu117
torch.cuda.is_available() works, nvidia-smi works, and importing accelerate and bitsandbytes works, but 8-bit quantization fails with the same error.
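To rule out mismatched installs, the versions above can be printed in one go; a quick sketch (importlib.metadata is in the standard library, and the package list is just the ones mentioned in this thread):

```python
from importlib.metadata import PackageNotFoundError, version

# Print installed versions of the packages involved in this thread.
for pkg in ("torch", "transformers", "accelerate", "bitsandbytes", "peft"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```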
Did you manage to solve the issue?
@DaveChini I'm having the same issue. But when I print it, mine also ends in +cu: 2.1.2+cu121. Still getting the same error; any help?
I have the same error.
Had the same error following this tutorial: https://huggingface.co/docs/peft/main/en/developer_guides/quantization on a Kaggle P100 GPU.
torch.cuda.is_available() returns True as well.
@DaveChini I'm having the same issue. But when I print it, mine also ends in +cu: 2.1.2+cu121. Still getting the same error; any help?
I have the same error.
When I ran my code in a Jupyter notebook, this error appeared. When I ran it in my local VS Code instead, the error disappeared. Hope that helps; maybe it isn't suited to interactive notebooks.
Hey @younesbelkada,
I'm not sure what to make of this issue. Seems to me based on the error log that if anything it's more related to Transformers? Wdyt?
I had this on my local Windows system and it was due to having the CPU version of PyTorch, where a dependency of Accelerate requires the GPU version.
`import torch
print(torch.__version__)`
This should print something like 2.2.1+cu118; if it says +cpu, you have the CPU-only version.
This worked for me! Thank you :)
fwiw, downgrading to a lower version of transformers helped resolve the issue for me (4.38.2 to 4.31.0).
I get the same on my side, on an on-premise cluster with RTX GPUs.
fwiw, downgrading to a lower version of transformers helped resolve the issue for me (4.38.2 to 4.31.0).
This worked, Thanks !!
@DaveChini I'm having the same issue. But when I print it, mine also ends in +cu: 2.1.2+cu121. Still getting the same error; any help?
I have the same error.
When I ran my code in a Jupyter notebook, this error appeared. When I ran it in my local VS Code instead, the error disappeared. Hope that helps; maybe it isn't suited to interactive notebooks.
Yeah, me too. Same reason.
If you just installed the libraries (e.g. pip install accelerate peft bitsandbytes transformers trl) and are running in Jupyter, try restarting the kernel.
Cloning the repo in a Colab notebook with TPU enabled and using ngrok to run it worked for me. It still doesn't work on localhost, but it works on the link ngrok provides. I think it is a CPU problem.
RuntimeError: Failed to import transformers.integrations.bitsandbytes because of the following error (look up to see its traceback):
CUDA Setup failed despite GPU being available. Please run the following command to get more information:
python -m bitsandbytes
Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
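The inspection of LD_LIBRARY_PATH that the error message suggests can be scripted; here is a minimal sketch assuming a typical Linux layout (`find_cudart` is a hypothetical helper, and the extra search directory is just a common CUDA install default):

```python
import glob
import os

def find_cudart(dirs):
    """Return paths to libcudart shared libraries found in the given directories."""
    hits = []
    for d in dirs:
        if d:
            hits.extend(glob.glob(os.path.join(d, "libcudart.so*")))
    return hits

# Search LD_LIBRARY_PATH plus a common CUDA install location.
search = os.environ.get("LD_LIBRARY_PATH", "").split(os.pathsep)
search.append("/usr/local/cuda/lib64")
print(find_cudart(search))
```

If the list comes back empty, adding the directory that actually contains libcudart to LD_LIBRARY_PATH is the usual fix.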
I am running from the terminal. I am working with the following versions:
Torch version: 1.13.1+cu116
TorchVision version: 0.14.1+cu116
Torchaudio version: 0.13.1+cu116
bitsandbytes version: 0.43.3
transformers version: 4.44.0
System Info
Reproduction
ImportError: Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes`
When I use AutoTrain on Hugging Face and select the options shown in the screenshot, I encounter this error. I noticed that most models trigger it. Could it be related to the fact that I'm using a free CPU instance?
Expected behavior
I hope the pros can help me figure out how to fix this error.