-
### Please check that this issue hasn't been reported before.
- [X] I searched previous [Bug Reports](https://github.com/axolotl-ai-cloud/axolotl/labels/bug) and didn't find any similar reports.
### Exp…
-
(qloravenv) C:\deepdream-test\llm_qlora-main>python train.py C:\deepdream-test\llm_qlora-main\configs\mistralaiMistral-7B-v0.1.yaml
Load base model
Traceback (most recent call last):
File "C:\dee…
-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
[2024-07-12 02:22:28,334] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda…
-
## Description
### Regression Test for Loss, Memory, Throughput
Comparisons of loss, memory, and throughput for Full-FT, PEFT (a minimal measurement sketch follows this list):
- QLoRA: status quo on the switch of `torch_dtype=float16` (Referenc…
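For concreteness, a minimal, hypothetical measurement harness for one such run (loss, peak GPU memory, throughput). The toy linear model and random batches only stand in for the real Full-FT / PEFT / QLoRA configurations; the bookkeeping is the point:

```python
import time
import torch

# Hypothetical harness: record loss, peak GPU memory, and throughput for one
# training configuration. The tiny linear model and random batches are stand-ins.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(1024, 1024).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

if device == "cuda":
    torch.cuda.reset_peak_memory_stats()
start, samples = time.time(), 0
for _ in range(10):
    batch = torch.randn(8, 1024, device=device)
    loss = model(batch).pow(2).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    samples += batch.shape[0]
elapsed = time.time() - start

peak_gib = torch.cuda.max_memory_allocated() / 2**30 if device == "cuda" else float("nan")
print(f"loss={loss.item():.4f}  peak_mem={peak_gib:.2f} GiB  "
      f"throughput={samples / elapsed:.1f} samples/s")
```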
-
### 🚀 The feature, motivation and pitch
Currently FSDP is rejecting tensor parameters with dtype uint8. `is_floating_point()` only allows one of (torch.float64, torch.float32, torch.float16, and …
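For illustration (this is not FSDP's actual source, just the dtype behavior the report describes): a 4-bit quantized weight is stored in a uint8 buffer, so a floating-point-only check rejects it while an fp16 parameter passes.

```python
import torch

# Illustration only: uint8 storage (as used for bitsandbytes 4-bit weights)
# fails a floating-point-only dtype check, while an fp16 weight passes.
quantized_storage = torch.empty(16, dtype=torch.uint8)
fp16_weight = torch.empty(16, dtype=torch.float16)

print(quantized_storage.is_floating_point())  # False -> parameter rejected
print(fp16_weight.is_floating_point())        # True  -> parameter accepted
```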
-
I've been trying to make the combination `deepspeed + qlora + falcon` work, but for unknown reasons I'm stuck in an error maze.
## Setup
- Docker image: `winglian/axolotl-runpod:main-py3.9-cu…
-
Hi Team,
I have successfully finetuned a QLoRA adapter on a custom dataset. When I try to load it in full precision, it loads and works well, but this takes too much time and GPU memory to …
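A minimal sketch of the usual alternative, assuming a recent transformers + peft + bitsandbytes stack; "base-model" and "adapter-dir" are placeholder paths, not the reporter's actual ones. It loads the base model quantized in 4-bit instead of full precision and then attaches the trained QLoRA adapter:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Placeholder paths: replace "base-model" and "adapter-dir" with the real
# base model ID and the finetuned QLoRA adapter directory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "base-model",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "adapter-dir")
model.eval()
```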
-
When I tried
```
!python qlora.py --learning_rate 0.0001 --model_name_or_path EleutherAI/gpt-neox-20b --trust_remote_code
```
in Colab, I got the following errors:
```
2023-06-03 13:54:17.113623: W t…
-
As the title asks: is QLoRA supported?