zjunlp / DeepKE

[EMNLP 2022] An Open Toolkit for Knowledge Graph Extraction and Construction
http://deepke.zjukg.cn/
MIT License

Problem running the quick-start and pre-training scripts with deepke-llm #592

Closed ElectorShx closed 1 month ago

ElectorShx commented 1 month ago

Describe the bug

Running the inference and fine-tuning batch scripts from PowerShell fails with a module-load error (messages translated from Chinese):

    infer_scripts\llama_infer.bat : The module "infer_scripts" could not be loaded. For more information, run "Import-Module infer_scripts".
        + CategoryInfo          : ObjectNotFound: (infer_scripts\llama_infer.bat:String) [], CommandNotFoundException
        + FullyQualifiedErrorId : CouldNotAutoLoadModule

    (deepke-llm) PS F:\desktop\江苏省重点项目\software\deepke-llm\DeepKe\example\llm> ft_scripts\llama.bat
    ft_scripts\llama.bat : The module "ft_scripts" could not be loaded. For more information, run "Import-Module ft_scripts".
    At line:1 char:1

    (deepke-llm) PS F:\desktop\江苏省重点项目\software\deepke-llm\DeepKe\example\llm> cd InstructKGC
    (deepke-llm) PS F:\desktop\江苏省重点项目\software\deepke-llm\DeepKe\example\llm\InstructKGC> infer_scripts\llama_infer.bat
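The `CouldNotAutoLoadModule` errors above happen because the scripts were invoked from `example\llm` while they live under `example\llm\InstructKGC`: when a relative path does not resolve, PowerShell falls back to treating `infer_scripts\llama_infer.bat` as a `Module\Command` reference and tries to auto-load a module named `infer_scripts`. The session recovers once `cd InstructKGC` is run first. A minimal POSIX-shell stand-in for the working layout (the script body here is hypothetical; the directory and script names mirror the repo):

```shell
# Create a stand-in for the batch script, then invoke it from its
# parent directory with an explicit path prefix -- the PowerShell
# equivalent after cd InstructKGC is:
#   .\infer_scripts\llama_infer.bat
mkdir -p infer_scripts
printf '#!/bin/sh\necho "inference started"\n' > infer_scripts/llama_infer.sh
chmod +x infer_scripts/llama_infer.sh
./infer_scripts/llama_infer.sh
```

In short: run the `.bat` scripts from the directory that actually contains `infer_scripts\` / `ft_scripts\`, and PowerShell will stop looking for a module of that name.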

    ===================================BUG REPORT===================================
    Welcome to bitsandbytes. For bug reports, please run

    python -m bitsandbytes

    and submit this information together with your error trace to:
    https://github.com/TimDettmers/bitsandbytes/issues
    ================================================================================
    bin C:\Users\AORUS-10900KF.conda\envs\deepke-llm\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so
    C:\Users\AORUS-10900KF.conda\envs\deepke-llm\lib\site-packages\bitsandbytes\cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
      warn("The installed version of bitsandbytes was compiled without GPU support. "
    'NoneType' object has no attribute 'cadam32bit_grad_fp32'
    CUDA SETUP: Loading binary C:\Users\AORUS-10900KF.conda\envs\deepke-llm\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so...
    argument of type 'WindowsPath' is not iterable
    10/10/2024 19:50:58 - INFO - main - model_class:<class 'transformers.models.auto.modeling_auto.AutoModelForCausalLM'> tokenizer_class:<class 'transformers.models.auto.tokenization_auto.AutoTokenizer'>
    10/10/2024 19:50:58 - INFO - model.loader - Add pad token: <|end_of_text|>
    10/10/2024 19:50:58 - INFO - model.loader - Quantizing model to 4 bit.
    Traceback (most recent call last):
      File "F:\desktop\江苏省重点项目\software\deepke-llm\DeepKe\example\llm\InstructKGC\src\inference.py", line 122, in <module>
        main()
      File "F:\desktop\江苏省重点项目\software\deepke-llm\DeepKe\example\llm\InstructKGC\src\inference.py", line 116, in main
        inference(model_args, data_args, training_args, finetuning_args, generating_args, inference_args)
      File "F:\desktop\江苏省重点项目\software\deepke-llm\DeepKe\example\llm\InstructKGC\src\inference.py", line 47, in inference
        model, tokenizer = load_model_and_tokenizer(
      File "F:\desktop\江苏省重点项目\software\deepke-llm\DeepKe\example\llm\InstructKGC\src\model\loader.py", line 121, in load_model_and_tokenizer
        model = model_class.from_pretrained(
      File "C:\Users\AORUS-10900KF.conda\envs\deepke-llm\lib\site-packages\transformers\models\auto\auto_factory.py", line 563, in from_pretrained
        return model_class.from_pretrained(
      File "C:\Users\AORUS-10900KF.conda\envs\deepke-llm\lib\site-packages\transformers\modeling_utils.py", line 2482, in from_pretrained
        raise ImportError(
    ImportError: Using `load_in_8bit=True` requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://test.pypi.org/simple/ bitsandbytes` or `pip install bitsandbytes`

Environment (please complete the following information):

- OS: Windows
- Python version: 3.9.20
- Environment: deepke-llm, set up following the guide

Screenshots

(Screenshot attached.)

Additional context

I also followed the related Issue and ran into the same problem.

guihonghao commented 1 month ago

Check whether the accelerate and bitsandbytes packages are installed.
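A quick way to act on this advice, sketched in shell (it assumes the `python` on PATH is the interpreter of the active `deepke-llm` environment):

```shell
# Report whether each package resolves in the active environment;
# if one prints "missing", install it as the ImportError suggests:
#   pip install accelerate bitsandbytes
for pkg in accelerate bitsandbytes; do
  if python -c "import importlib.util, sys; sys.exit(0 if importlib.util.find_spec('$pkg') else 1)"; then
    echo "$pkg: found"
  else
    echo "$pkg: missing"
  fi
done
```

Using `importlib.util.find_spec` avoids actually importing bitsandbytes, which (as the log above shows) can itself fail on a machine without CUDA.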

ElectorShx commented 1 month ago

They are already installed. The problem turned out to be that the CUDA toolkit is not installed on Windows.
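Two hedged checks that can confirm this diagnosis on a given machine (assuming PyTorch is installed in the environment; `nvcc` is only on PATH when the CUDA toolkit is installed):

```shell
# Does PyTorch see a CUDA device? bitsandbytes' 8-bit/4-bit paths need a
# CUDA-enabled torch build plus the NVIDIA driver; a CPU-only wheel is
# what triggers the "compiled without GPU support" warning in the log.
python -c "import torch; print('CUDA available:', torch.cuda.is_available())" \
  || echo "torch not importable"
# Is the CUDA toolkit itself present?
nvcc --version 2>/dev/null || echo "nvcc not found (CUDA toolkit missing)"
```

If `CUDA available: False` is printed on a machine that does have an NVIDIA GPU, the usual culprit is a CPU-only PyTorch wheel or a missing CUDA toolkit/driver, matching the resolution in this thread.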