bin C:\Users\AORUS-10900KF\.conda\envs\deepke-llm\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so
C:\Users\AORUS-10900KF\.conda\envs\deepke-llm\lib\site-packages\bitsandbytes\cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
'NoneType' object has no attribute 'cadam32bit_grad_fp32'
CUDA SETUP: Loading binary C:\Users\AORUS-10900KF\.conda\envs\deepke-llm\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
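The `argument of type 'WindowsPath' is not iterable` line is a symptom seen with older bitsandbytes releases on Windows, where the CUDA-setup code appears to run a substring membership test against a `pathlib` path object rather than a string. A minimal reproduction of that Python-level error (using `PureWindowsPath` and a made-up path so it behaves identically on any OS):

```python
from pathlib import PureWindowsPath

# Hypothetical install path; any PurePath object behaves the same way.
libdir = PureWindowsPath(r"C:\tools\bitsandbytes")

try:
    "cuda" in libdir  # Path objects support neither __contains__ nor iteration
except TypeError as exc:
    print(exc)  # → argument of type 'PureWindowsPath' is not iterable
```

Converting the path with `str(...)` before the membership test avoids the error; newer bitsandbytes releases ship native Windows wheels, which is one reason upgrading is the usual suggestion.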
10/10/2024 19:50:58 - INFO - main - model_class:<class 'transformers.models.auto.modeling_auto.AutoModelForCausalLM'>
tokenizer_class:<class 'transformers.models.auto.tokenization_auto.AutoTokenizer'>
10/10/2024 19:50:58 - INFO - model.loader - Add pad token: <|end_of_text|>
10/10/2024 19:50:58 - INFO - model.loader - Quantizing model to 4 bit.
Traceback (most recent call last):
  File "F:\desktop\江苏省重点项目\software\deepke-llm\DeepKe\example\llm\InstructKGC\src\inference.py", line 122, in <module>
    main()
  File "F:\desktop\江苏省重点项目\software\deepke-llm\DeepKe\example\llm\InstructKGC\src\inference.py", line 116, in main
    inference(model_args, data_args, training_args, finetuning_args, generating_args, inference_args)
  File "F:\desktop\江苏省重点项目\software\deepke-llm\DeepKe\example\llm\InstructKGC\src\inference.py", line 47, in inference
    model, tokenizer = load_model_and_tokenizer(
  File "F:\desktop\江苏省重点项目\software\deepke-llm\DeepKe\example\llm\InstructKGC\src\model\loader.py", line 121, in load_model_and_tokenizer
    model = model_class.from_pretrained(
  File "C:\Users\AORUS-10900KF\.conda\envs\deepke-llm\lib\site-packages\transformers\models\auto\auto_factory.py", line 563, in from_pretrained
    return model_class.from_pretrained(
  File "C:\Users\AORUS-10900KF\.conda\envs\deepke-llm\lib\site-packages\transformers\modeling_utils.py", line 2482, in from_pretrained
    raise ImportError(
ImportError: Using `load_in_8bit=True` requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes `pip install -i https://test.pypi.org/simple/ bitsandbytes` or `pip install bitsandbytes`
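Since this `ImportError` is raised whenever transformers cannot resolve `accelerate`/`bitsandbytes`, a quick stdlib-only check of which packages are importable in the active environment can confirm whether the install step actually took effect (a diagnostic sketch, not part of DeepKE; the package list is just the ones named in the traceback):

```python
import importlib.util

def installed(packages):
    """Map each top-level package name to whether it can be found on sys.path."""
    return {name: importlib.util.find_spec(name) is not None for name in packages}

# The three packages involved in the ImportError above:
for name, ok in installed(["transformers", "accelerate", "bitsandbytes"]).items():
    print(f"{name}: {'installed' if ok else 'MISSING'}")
```

Run this inside the same `deepke-llm` conda environment that launches the script; a `MISSING` entry means the corresponding `pip install` went into a different environment.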
Environment (please complete the following information):
- OS: Windows
- Python Version: 3.9.20
- The environment is `deepke-llm`, set up following the instructions.
Screenshots
If applicable, add screenshots to help explain your problem.
Additional context
Add any other context about the problem here.
The same problem also occurred when I followed the steps in the Issue.
Describe the bug
(deepke-llm) PS F:\desktop\江苏省重点项目\software\deepke-llm\DeepKe\example\llm> ft_scripts\llama.bat
ft_scripts\llama.bat : The module "ft_scripts" could not be loaded. For more information, run "Import-Module ft_scripts". At line:1 char:1
(deepke-llm) PS F:\desktop\江苏省重点项目\software\deepke-llm\DeepKe\example\llm> cd InstructKGC
(deepke-llm) PS F:\desktop\江苏省重点项目\software\deepke-llm\DeepKe\example\llm\InstructKGC> infer_scripts\llama_infer.bat
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues