ystemsrx / Qwen2-Boundless

A model fine-tuned from Qwen2-1.5B-Instruct, capable of handling sensitive topics such as violence and explicit content.
https://huggingface.co/ystemsrx/Qwen2-Boundless
Apache License 2.0

There seems to be a problem here #8

Open marcury6 opened 2 months ago

marcury6 commented 2 months ago

I downloaded transformers and updated it to the latest version, downloaded all of the model files from Hugging Face, and pulled the latest PyTorch with `pip install --upgrade torch`. I also installed CUDA Toolkit 12.4.1 from NVIDIA's toolkit repository, and finally ran the following in the terminal:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124

The error still appears.

Error message:

    Traceback (most recent call last):
      File "C:\Users\Administrator\Desktop\Qwen2-Boundless-main\continuous_conversation.py", line 10, in <module>
        model = AutoModelForCausalLM.from_pretrained(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "E:\python\Lib\site-packages\transformers\models\auto\auto_factory.py", line 564, in from_pretrained
        return model_class.from_pretrained(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "E:\python\Lib\site-packages\transformers\modeling_utils.py", line 3318, in from_pretrained
        raise ImportError(
    ImportError: Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install accelerate`
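For context, the traceback points at the model-loading call on line 10 of continuous_conversation.py. Below is a minimal sketch of what such a call looks like (the exact arguments in the repo's script are an assumption here); passing `device_map` or `low_cpu_mem_usage=True` is precisely what makes transformers require the accelerate package:

```python
# Minimal sketch, assuming a loading call similar to line 10 of
# continuous_conversation.py (the repo's exact arguments are not shown here).
# Any device_map or low_cpu_mem_usage=True argument makes transformers
# depend on the accelerate package, hence the ImportError above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "ystemsrx/Qwen2-Boundless"  # or a local download directory
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype="auto",
    device_map="auto",  # this argument is what pulls in Accelerate
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
```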

marcury6 commented 2 months ago

I had overlooked part of the error message; running `pip install accelerate` in the terminal solved it.
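To confirm the environment is in order after the install, a quick sanity check like the following can help (a hypothetical snippet, not part of the repo):

```python
# Hypothetical sanity check: confirm the relevant packages import cleanly
# and that the CUDA build of PyTorch is actually in use.
import accelerate
import torch
import transformers

print("transformers:", transformers.__version__)
print("accelerate:", accelerate.__version__)
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```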

Two odd warnings still showed up on the final run:

    Assistant: The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's attention_mask to obtain reliable results.
    E:\python\Lib\site-packages\transformers\models\qwen2\modeling_qwen2.py:580: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
      attn_output = torch.nn.functional.scaled_dot_product_attention(
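For reference, the first warning usually disappears if the attention mask produced by the tokenizer is passed through to generate() explicitly. A sketch with illustrative names (assuming a `model` and `tokenizer` loaded as above; this is not the repo's exact script):

```python
# Sketch: pass the attention mask from the tokenizer straight into
# generate(), so it does not have to be inferred when pad token == eos token.
prompt = "Hello"  # illustrative input
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],  # silences the first warning
    max_new_tokens=512,
    pad_token_id=tokenizer.eos_token_id,
)
reply = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(reply)
```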

ystemsrx commented 2 months ago

Neither warning has any real impact: one says the attention mask is not set, the other that Flash Attention was not compiled in. Both can be safely ignored.
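If you want the second warning gone as well, one option (an assumption about your setup, requiring transformers ≥ 4.36, not something the repo mandates) is to request a specific attention backend at load time:

```python
# Sketch: choose the attention implementation explicitly when loading.
# "eager" avoids torch.nn.functional.scaled_dot_product_attention entirely,
# so the "Torch was not compiled with flash attention" warning never fires.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype="auto",
    device_map="auto",
    attn_implementation="eager",
)
```

The eager path computes the same attention, just without the fused kernel, so it is somewhat slower; it is only worth doing if the warning is a nuisance.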