-
## 🐛 Bug
When running Phi-3.5-mini-instruct and Qwen2.5-7B-Instruct with NeMo + ThunderFX, we get the following error:
> 0: File "/usr/lib/python3.10/copy.py", line 153, in deepcopy
> 0:     y = copier(memo…
-
### 🐛 Describe the bug
I'm trying to fine-tune Llama2-7B (to reproduce the experiments in a paper) using PEFT LoRA (0.124% of trainable params). However, this results in an out-of-memory (OOM) error o…
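For context, the 0.124% figure is consistent with a rank-8 LoRA applied to the four attention projections of Llama2-7B. A quick back-of-the-envelope check — all dimensions and the parameter total here are assumptions for illustration, not taken from the issue:

```python
# Rough sanity check of the LoRA trainable-parameter fraction.
# Assumptions (not from the issue): Llama2-7B has ~6.738B parameters,
# hidden size d=4096, 32 layers, and LoRA rank r=8 is applied to the
# four attention projections (q/k/v/o), each a square d x d matrix.
d, r, layers, adapted_mats = 4096, 8, 32, 4
per_adapter = 2 * d * r                       # A is (r x d), B is (d x r)
lora_params = layers * adapted_mats * per_adapter
total_params = 6_738_000_000
fraction = 100 * lora_params / total_params
print(f"{lora_params:,} trainable params ≈ {fraction:.3f}% of the base model")
```

Under these assumptions the adapter count comes out to roughly 0.124% of the base model, matching the figure quoted above — so the OOM is unlikely to come from the adapters themselves, and more likely from activations, optimizer state, or sequence length.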
-
https://discord.com/channels/1097462770674438174/1100717863221870643/1113669041240944722
![image](https://github.com/h2oai/h2ogpt/assets/2249614/1f381d92-6c17-47a8-9bf7-2f90beed2798)
-
When I try to run the start script, I get this error message:
```
(h2ogpt) C:\Users\domin\Documents\aiGen\h2ogpt>python generate.py
Fontconfig error: Cannot load default config file: No such file: …
```
-
Thank you for the great work!
I have a few questions about PEFT; I hope you can answer them. Thank you a lot!
1. Which model is best to fine-tune? A pre-trained model (llama2-7b) or a supervise…
-
During training of GLoRA I am facing the error "The size of tensor a (8388608) must match the size of tensor b (4096) at non-singleton dimension 0".
I am using GLoRA from PR https://github.com/Arnav…
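One observation that may help in debugging (purely arithmetic, not taken from the PR): the larger size factors exactly by the smaller one, which usually means a flattened 2-D weight is being combined elementwise with a 1-D tensor instead of being broadcast over a matching dimension.

```python
# The mismatched sizes in the error message hint at a flattened tensor
# meeting a 1-D one: 8388608 is an exact multiple of 4096.
# (Any interpretation of what these dimensions mean is a guess.)
a, b = 8_388_608, 4_096
print(a % b == 0, a // b)  # True 2048 -> plausibly a (2048 x 4096) weight flattened
```

If that is the case, reshaping the flattened tensor back to `(a // b, b)` before the elementwise op (or broadcasting along the last dimension) would be the usual fix.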
-
Hello, I ran CPO on ALMA-7B-Lora using the default hyperparameters in the script (learning rate) as well as the parameter settings and preference data described in the paper, but the trained model produces outputs that heavily repeat earlier text or even fail to translate at all, as shown below (zh->en; raw_res is the result without the clean function from utils). Did I misconfigure a hyperparameter? Thank you.
![image](https://github.com/user-attachments/asse…
-
The fine-tuning interface released so far uses LoRA. If training resources are sufficient, is full-parameter fine-tuning possible? Is it enough to comment out `model = get_peft_model(model, peft_config)` in `peft_lora.py`?
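Conceptually, yes: `get_peft_model` freezes the base weights and adds small trainable adapters, so skipping the wrap leaves every base parameter trainable. A toy illustration of the resulting trainable-parameter counts — pure Python, with made-up Llama-style shapes, no real peft/torch involved:

```python
# Toy illustration: skipping a get_peft_model-style wrap means the full
# weight matrices train; wrapping trains only rank-r adapters instead.
# Shapes below are assumptions for illustration (d_out, d_in per matrix).
base_shapes = {"attn.weight": (4096, 4096), "mlp.weight": (11008, 4096)}

def trainable_params(use_lora: bool, rank: int = 8) -> int:
    if not use_lora:
        # Full fine-tuning: every base weight requires gradients.
        return sum(o * i for o, i in base_shapes.values())
    # LoRA: base frozen; each matrix gets A (rank x d_in) + B (d_out x rank).
    return sum(rank * (o + i) for o, i in base_shapes.values())

full = trainable_params(use_lora=False)
lora = trainable_params(use_lora=True)
print(full, lora, f"LoRA trains {100 * lora / full:.3f}% as many params")
```

The practical caveat is memory: with full fine-tuning the optimizer state (e.g. Adam's two moments) scales with the full parameter count rather than the adapter count, which is usually why the LoRA-only interface was released first.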
-
When using this workflow (Windows 10, RTX 2070 with 8 GB VRAM, 64 GB RAM):
![workflow2](https://github.com/Stability-AI/StableCascade/assets/114806454/caee4e14-3042-41d1-933c-ecba17dc969d)
Error occurred when …
-
Hi there, I have a question.
Is PEFT supported for BlenderBot, BlenderBot2, or BlenderBot3? I need to apply LoRA or QLoRA to BlenderBot.
Your response or any advice on this issue is deeply apprecia…
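Worth noting that LoRA itself is architecture-agnostic: it adds a low-rank update ΔW = (α/r)·B·A beside a frozen weight matrix, so the question largely reduces to which linear layers of the model to target. A minimal numeric sketch of the update in pure Python, with tiny made-up dimensions and no real model or library:

```python
# Minimal LoRA sketch: effective weight W_eff = W + (alpha / r) * (B @ A).
# Dimensions and values are tiny and made up purely for illustration.
def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

d_out, d_in, r, alpha = 4, 4, 2, 4
W = [[1.0 if i == j else 0.0 for j in range(d_in)] for i in range(d_out)]  # frozen base
A = [[0.5] * d_in for _ in range(r)]      # trainable down-projection (r x d_in)
B = [[0.25] * r for _ in range(d_out)]    # trainable up-projection (d_out x r)

delta = [[(alpha / r) * v for v in row] for row in matmul(B, A)]  # low-rank update
W_eff = [[w + d for w, d in zip(rw, rd)] for rw, rd in zip(W, delta)]
print(W_eff[0])  # first row of the merged weight
```

Since BlenderBot is a standard encoder-decoder Transformer, the same update applies in principle to its attention projections; whether a given PEFT release maps those module names out of the box is a separate question best confirmed against its documentation.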