-
Configuration:
OS: CentOS Linux release 7.9.2009
CUDA version: 12.4
Python version: 3.11
NVIDIA-SMI: 550.54.14
I hit this problem while running hello_world inference. The message told me to recompile bitsandbytes, which I did, but the error still occurs.
The error is:
CUDA SETUP: Something unexpected happ…
-
### 是否已有关于该错误的issue或讨论? | Is there an existing issue / discussion for this?
- [X] 我已经搜索过已有的issues和讨论 | I have searched the existing issues / discussions
### 该问题是否在FAQ中有解答? | Is there an existing ans…
-
### System Info / 系統信息
Python 3.12, transformers 43; GPU: 2 × 2080 Ti 22G
### Who can help? / 谁可以帮助到您?
_No response_
### Information / 问题信息
- [X] The official example scripts / 官方的示例脚本
- [ ] My own modifi…
-
Could AutoGPTQ support the merge model from adapter and base model?
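For context on what "merging" means here: folding a LoRA adapter into the base weights is just `W' = W + (alpha / r) * B @ A`, after which the model is a plain dense checkpoint that a quantizer can consume. A minimal numpy sketch of that arithmetic (toy shapes, not the AutoGPTQ or peft API):

```python
import numpy as np

def merge_lora(W, A, B, alpha, r):
    """Fold a LoRA adapter into the base weight: W' = W + (alpha / r) * B @ A."""
    return W + (alpha / r) * (B @ A)

# Toy shapes: base weight (out=4, in=6), LoRA rank r=2
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 6))
A = rng.standard_normal((2, 6))   # LoRA down-projection (r x in)
B = rng.standard_normal((4, 2))   # LoRA up-projection  (out x r)

W_merged = merge_lora(W, A, B, alpha=16, r=2)

# The merged layer computes the same output as base + adapter applied separately
x = rng.standard_normal(6)
assert np.allclose(W_merged @ x, W @ x + (16 / 2) * (B @ (A @ x)))
```

In practice the peft library exposes this as `PeftModel.merge_and_unload()`, which produces a standard model that can then be quantized.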
-
Thank you for your work. On the sun397 dataset, following the configuration in the paper, why is the performance of lora and -lora1 so low? It seems to be only around 0.2.
-
As discussed in issue https://github.com/unslothai/unsloth/issues/154#issue-2119969174, I am also working with an extended tokenizer to accommodate words of a new language. I've merged Llama 3.2 to…
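When a tokenizer is extended with new-language tokens, the embedding matrix must grow to match; new rows are often initialized to the mean of the existing rows so they start in-distribution. A minimal numpy sketch of that heuristic (the real call in transformers is `model.resize_token_embeddings(len(tokenizer))`):

```python
import numpy as np

def extend_embeddings(emb, n_new):
    """Append rows for newly added tokens, initialized to the mean of the
    existing rows (a common heuristic; other init schemes are possible)."""
    mean_row = emb.mean(axis=0, keepdims=True)
    new_rows = np.repeat(mean_row, n_new, axis=0)
    return np.vstack([emb, new_rows])

emb = np.arange(12, dtype=float).reshape(4, 3)  # 4 tokens, embedding dim 3
extended = extend_embeddings(emb, n_new=2)

assert extended.shape == (6, 3)
assert np.allclose(extended[4], emb.mean(axis=0))
```

Note that after resizing, the new rows carry no meaning until they are trained, which is why merging an adapter trained on an extended vocabulary needs matching embedding shapes on both sides.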
-
Hello, thank you very much for your contribution. An error was encountered during training
```
Traceback (most recent call last):
  File "/home/zhangc/PixelLM/train_ds.py", line 974, in <module>
    main(sys.…
```
-
Following the correction in https://github.com/lm-sys/FastChat/pull/1933, loading the peft model through `python3 -m fastchat.serve.cli --model-path /data2/project/peft-model` works successfully.
Howev…
-
The peft code snippet loads a `PeftConfig` parameter that is not used anywhere; I think it would be better to remove it, since the entire script functions without it.
cc @BenjaminBossan if you have…
-
- OFT and BOFT
- IA3
- Adapters
- Soft Prompts
- P-tuning
- Prompt-Tuning
- Prefix-tuning
- Etc.
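The soft-prompt family listed above (P-tuning, prompt tuning, prefix tuning) shares one core idea: keep the base model frozen and train only a small set of virtual-token embeddings prepended to the input. A minimal numpy sketch of that idea (not the peft API; shapes and names are illustrative):

```python
import numpy as np

def prepend_soft_prompt(input_embeds, soft_prompt):
    """Prompt tuning in one line: prepend trainable soft-prompt vectors to the
    (frozen) input embeddings before they enter the transformer."""
    return np.concatenate([soft_prompt, input_embeds], axis=0)

dim = 8
soft_prompt = np.zeros((5, dim))   # 5 virtual tokens -- the ONLY trainable params
input_embeds = np.ones((10, dim))  # embedded user prompt -- frozen base model

seq = prepend_soft_prompt(input_embeds, soft_prompt)
assert seq.shape == (15, dim)  # sequence grows by the number of virtual tokens
```

The methods differ mainly in where the learned vectors are injected (input only vs. every layer's key/value prefix) and how they are parameterized, but the frozen-base/trainable-prompt split is common to all of them.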
Context:
- https://www.mercity.ai/blog-post/fine-tuning-llms-using-peft-and-lora
- https://huggingface…