-
Hi team, I'm getting the following error while enabling 4-bit quantization and LoRA:
```
File "/root/miniconda3/envs/open/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 262, in __init__
self._c…
```
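For reproduction context, here is a minimal sketch of the kind of setup that reaches that DeepSpeed engine constructor (the model id and DeepSpeed config path are placeholders, not taken from the report):
```python
import torch
import deepspeed
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder model id
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
)
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# The traceback above is raised inside this engine constructor.
engine, _, _, _ = deepspeed.initialize(
    model=model,
    config="ds_config.json",  # placeholder DeepSpeed config path
    model_parameters=[p for p in model.parameters() if p.requires_grad],
)
```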
-
### 🚀 The feature, motivation and pitch
Support models like GptNeo
### Alternatives
_No response_
### Additional context
_No response_
-
### 🚀 Feature
New advancements bringing quantized LoRA and FSDP together.
https://github.com/AnswerDotAI/fsdp_qlora
### Motivation
Train larger models on consumer GPUs or older generation Da…
-
Hey,
I am fairly new to fine-tuning my own models and working with HuggingFace. Yesterday I finished fine-tuning a Llama 2 model with my custom dataset, but I couldn't figure out how to properly pu…
-
Currently only the original LoRA is supported as a non-fused adapter. I hope support can be added for QLoRA/QA-LoRA adapters as well, without fusing them into the base model; see the sketch below for the kind of workflow I mean.
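To illustrate the requested non-fused workflow with an existing library, here is a sketch using Hugging Face PEFT, where the adapter stays separate from the 4-bit base instead of being merged (model id and adapter path are placeholders):
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder base model
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)
# No merge_and_unload() call: the LoRA weights remain separate tensors
# applied on top of the quantized base at runtime.
model = PeftModel.from_pretrained(base, "path/to/qlora-adapter")  # placeholder path
```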
-
Hi there! `keras-nlp` supports LoRA, e.g. from https://ai.google.dev/gemma/docs/lora_tuning
```python
...
gemma_lm.backbone.enable_lora(rank=4)
...
```
Just wondering: are there any plans to …
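For reference, the elided lines roughly follow the linked guide; a minimal sketch (the `gemma_2b_en` preset name comes from that guide):
```python
import keras_nlp

gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")
gemma_lm.backbone.enable_lora(rank=4)  # only the LoRA weights stay trainable
gemma_lm.summary()                     # trainable parameter count drops sharply
```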
-
I see that [PEFT brought in](https://github.com/huggingface/peft/releases/tag/v0.10.0) QLoRA with FSDP support in their latest release.
Any plans to incorporate this into litgpt?
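For context, the core of that PEFT release is storing the 4-bit weights in a dtype FSDP can shard. A minimal sketch of the relevant configuration (the model id is a placeholder):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.bfloat16,  # the FSDP-compatibility knob
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",  # placeholder model id
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))
# Training would then run under `accelerate launch` with an FSDP config.
```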
-
### System Info
```Shell
Below is the `pip list` output from an environment that does not work:
Package Version
------------------------ ---------------
accelerate 0.30.0
aiohttp …
-
While training qwen1.5-14b-chat, I ran into the error below; transformers==4.38.2
```
RuntimeError(
    "Unsloth: Tokenizer's pad_token cannot be = eos_token, and we couldn't find a\n"\
    "replacement of eit…
```
-
Currently, we don't apply QLoRA to either the output projection or token embeddings. There's no great reason not to apply quantization to output projections; we simply don't do this due to limitations…
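For comparison, a sketch of covering those modules with Hugging Face PEFT (the `lm_head`/`embed_tokens` module names are the usual Llama-style names and an assumption here):
```python
from peft import LoraConfig

config = LoraConfig(
    r=8,
    lora_alpha=16,
    # Include the output projection alongside the usual attention projections.
    target_modules=["q_proj", "v_proj", "lm_head"],
    # Keep a full-precision, trainable copy of the token embeddings.
    modules_to_save=["embed_tokens"],
    task_type="CAUSAL_LM",
)
```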