-
@abhilash1910 Is XPU support for qlora still working? I tried to run it on a Linux Arc A770 system at home but got the following error:
$ python qlora.py --model_name_or_path facebook/opt-350m
Num pr…
-
For torch.distributed users or loading model parallel models, set environment variables RANK, WORLD_SIZE and LOCAL_RANK.
/root/miniconda3/lib/python3.8/site-packages/torch/nn/init.py:405: UserWarning…
-
In [qlora_dpo.py](https://github.com/lyogavin/Anima/blob/dc691b2958f50a6d73a239b0e13c341ce6b2d60f/rlhf/qlora_dpo.py), I see that chosen is tokenized with `max_length=self.source_max_len`, while rejected is tokenized with `max_length=self…`
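To make the asymmetry the question describes concrete, here is a minimal, hypothetical sketch: the whitespace "tokenizer" and both length limits below are stand-ins, not the repo's actual tokenizer or config values.

```python
# Hypothetical sketch (stand-in tokenizer and limits, not the repo's code):
# truncating the two halves of a preference pair to different max lengths.
def tokenize(text, max_length):
    # Stand-in for a real tokenizer: whitespace split, then truncate.
    return text.split()[:max_length]

source_max_len = 4
target_max_len = 8  # hypothetical second limit for the other sequence

chosen = tokenize("the chosen response with many extra tokens", source_max_len)
rejected = tokenize("the rejected response with many extra tokens", target_max_len)

# The two sequences end up truncated to different lengths, which is the
# behavior the question is asking about.
print(len(chosen), len(rejected))
```

If both responses are meant to be comparable completions, one would normally expect them to share the same truncation limit, so the question of why the two limits differ is a fair one.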
-
Hello,
Thank you for the amazing repo. I was curious about this code below.
https://github.com/artidoro/qlora/blob/e3817441ccc7cff04ca13e8fab5196d453ff32f2/qlora.py#L221-L222
Why is `lm_head…
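For readers without the repo open, here is a minimal, hypothetical sketch of the general pattern the linked lines belong to (the module table and dtype strings below are stand-ins, not actual `torch` objects): in QLoRA-style setups, certain modules such as the output head are commonly upcast so the final logits and loss are computed in higher precision.

```python
# Hypothetical sketch (stand-in names and dtypes, not the repo's objects):
# walk a model's named modules and upcast the output head.
modules = {
    "model.layers.0.self_attn": "float16",
    "model.layers.0.mlp": "float16",
    "lm_head": "float16",
}

for name in modules:
    if "lm_head" in name:
        # In real code this would be module.to(torch.float32); keeping the
        # head in float32 avoids overflow/underflow in the logits while the
        # quantized body stays in low precision.
        modules[name] = "float32"

print(modules["lm_head"])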
-
### Please check that this issue hasn't been reported before.
- [X] I searched previous [Bug Reports](https://github.com/OpenAccess-AI-Collective/axolotl/labels/bug) and didn't find any similar reports.
…
-
When I try to execute the script finetune_llama2_guanaco_7b.sh, I get the error `dataclasses.FrozenInstanceError: cannot assign to field generation_config`.
The stack trace is below:
qlor…
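For context, the error class in the report comes from Python's `dataclasses` module: assigning to any field of a frozen dataclass raises `FrozenInstanceError`. A minimal stdlib reproduction (the `Config` class below is a stand-in, not the actual `generation_config` object from the stack trace):

```python
# Minimal stdlib reproduction of dataclasses.FrozenInstanceError: assigning
# to a field of a frozen dataclass instance raises this error.
import dataclasses

@dataclasses.dataclass(frozen=True)
class Config:
    generation_config: str = "default"  # stand-in field name

cfg = Config()
try:
    cfg.generation_config = "new"   # same failure mode as in the stack trace
except dataclasses.FrozenInstanceError as e:
    print(type(e).__name__)
```

The fix in practice is usually not to mutate the frozen object at all, but to construct it with the desired value (or use whatever setter the library provides).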
-
**Describe the bug**
When trying to train large language models with QLoRA, which loads the model in 4-bit, DeepSpeed initialization fails with this error:
File "/root/.pyenv/versions/3.10.11/env…
-
Hi, I tried fine-tuning falcon-40b with QLoRA and compared its performance with llama-65b, which was also fine-tuned with QLoRA; both were fine-tuned on the same OASST dataset.
And I tried to compare its MMLU ev…
-
XTuner now supports single-GPU QLoRA fine-tuning of Baichuan2. Feel free to join the [WeChat](https://cdn.vansin.top/internlm/xtuner.jpg) group to discuss.
```shell
git clone https://github.com/internLM/xtuner
cd xtuner
pip install -e .
xtuner train con…
-
Fine-tuning mpt-7b and mpt-30b with QLoRA gives the error "ValueError: MPTForCausalLM does not support gradient checkpointing.". Is there a way to fix this?
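For context, this kind of error typically comes from a support check before checkpointing is enabled: Hugging Face models advertise support via a `supports_gradient_checkpointing` attribute, and trainers refuse to enable it otherwise. A minimal, hypothetical sketch of that guard (the class below is a stand-in, not the real `MPTForCausalLM`):

```python
# Hypothetical sketch of the guard behind this error (stand-in class, but
# the attribute name mirrors the Hugging Face convention).
class FakeMPTForCausalLM:
    supports_gradient_checkpointing = False  # why enabling it raises

def enable_checkpointing(model):
    if not getattr(model, "supports_gradient_checkpointing", False):
        raise ValueError(
            f"{type(model).__name__} does not support gradient checkpointing."
        )

model = FakeMPTForCausalLM()
try:
    enable_checkpointing(model)
except ValueError as e:
    print(e)
```

One workaround is simply to train with gradient checkpointing disabled (e.g. `--gradient_checkpointing False`), at the cost of higher activation memory.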