-
from https://huggingface.co/blog/4bit-transformers-bitsandbytes ?
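For context, the linked blog post describes loading models in 4-bit NF4 precision through `transformers.BitsAndBytesConfig`. A minimal sketch along those lines (the model id below is only a placeholder, not taken from the issue):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization config in the style of the bitsandbytes blog post
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# "facebook/opt-350m" is only a placeholder model id
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    quantization_config=bnb_config,
    device_map="auto",
)
```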
-
Both the baichuan13B QLoRA training script and the InternLM20B QLoRA training script throw this error at runtime.
-
When doing LoRA fine-tuning on chatglm2, I get `CUDA error: invalid argument`; running on Windows with Python 3.10 and CUDA 11.8.
```
PS E:\Chatglm2-Qlora\chatGLM-6B-QLoRA-main> python train_qlora.py --train_args_json chatGLM_6B_QLoRA.json --model_na…
```
-
https://github.com/unslothai/unsloth
-
Sample:
https://github.com/intel-analytics/BigDL/blob/main/python/llm/example/GPU/QLoRA-FineTuning/alpaca-qlora/finetune_llama2_7b_arc_2_card.sh
Env:
Intel(R) Xeon(R) w7-3455
2 ARC770
ubuntu22.…
-
It looks like EleutherAI/gpt-j-6b is not supported:
Env:
Running from docker:
```
FROM pytorch/pytorch:2.0.1-cuda11.7-cudnn8-devel
RUN apt-get update && apt-get install git -y
RUN pip …
```
-
Currently, `tune ls` is a bit unwieldy. Can we make it better?
@joecummings
-
In [Unsloth](https://github.com/unslothai/unsloth/blob/main/unsloth/__init__.py#L53-L54):
```python
# Reduce VRAM usage by reducing fragmentation
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable…
```
-
When trying to execute the script finetune_llama2_guanaco_7b.sh, I get the error `dataclasses.FrozenInstanceError: cannot assign to field generation_config`.
The stack trace is below:
qlor…
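As background, `dataclasses.FrozenInstanceError` is what Python raises when any field of a frozen dataclass is assigned after construction. A minimal, self-contained reproduction of that error class (the `TrainState` dataclass below is purely illustrative and unrelated to the actual training script):

```python
from dataclasses import dataclass, replace, FrozenInstanceError

@dataclass(frozen=True)
class TrainState:
    generation_config: str = "default"

state = TrainState()
try:
    state.generation_config = "greedy"   # direct assignment on a frozen dataclass
except FrozenInstanceError as err:
    print(f"caught: {err}")              # cannot assign to field 'generation_config'

# The idiomatic workaround is to build a modified copy instead of mutating in place.
new_state = replace(state, generation_config="greedy")
print(new_state.generation_config)
```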
-
In the given axolotl examples [examples/medusa](https://github.com/ctlllll/axolotl/tree/main/examples/medusa),
I followed `vicuna_7b_qlora_stage1.yml` and `vicuna_7b_qlora_stage2.yml` to write my …