-
Hi,
I have a question regarding the huggingface model weights.
I was trying to load some of your adapters and play with them, but I found that the adapters were very large (~4GB), as in the screenshot be…
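A quick back-of-the-envelope check makes it clear why ~4GB is suspicious for a LoRA adapter. The numbers below are assumptions for a LLaMA-7B-like model (hidden size 4096, 32 layers, rank 16, two target projections per layer, fp16 storage), not values taken from the issue:

```python
# Rough estimate of how large a LoRA adapter *should* be, to sanity-check
# a ~4 GB download. All defaults are assumptions for a LLaMA-7B-like model.
def lora_adapter_bytes(hidden=4096, layers=32, rank=16,
                       targets_per_layer=2, bytes_per_param=2):
    # Each adapted Linear(hidden, hidden) gets A (rank x hidden)
    # and B (hidden x rank), hence the factor of 2.
    params_per_target = rank * hidden * 2
    total_params = params_per_target * targets_per_layer * layers
    return total_params * bytes_per_param

size_mb = lora_adapter_bytes() / 1024**2
print(f"expected adapter size: ~{size_mb:.0f} MB")
```

Under these assumptions the adapter is on the order of tens of megabytes; a ~4GB file usually means the full base-model weights were saved alongside (or instead of) the adapter.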
00ber updated 4 weeks ago
-
When I fine-tune Llama-2-7B with LoRA, the following error occurs:
```
Traceback (most recent call last):
  File "/home/ubuntu/lora/alpaca-lora-main/finetune.py", line 290, in
    fire.Fire(train)
  F…
```
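The traceback is cut off, but as background for LoRA errors like this one, it helps to remember that LoRA is just a frozen weight plus a scaled low-rank correction. A minimal NumPy sketch of the mechanism (dimensions and hyperparameters are illustrative, not taken from the issue):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 4096, 16                         # illustrative hidden size and rank

W = rng.standard_normal((d, d))         # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01  # trainable, rank x d
B = np.zeros((d, r))                    # trainable, initialized to zero
alpha = 32                              # LoRA scaling hyperparameter

x = rng.standard_normal(d)
# Forward pass: base output plus scaled low-rank correction.
y = W @ x + (alpha / r) * (B @ (A @ x))

trainable = A.size + B.size
print(f"trainable params: {trainable} vs frozen: {W.size}")
```

Because `B` starts at zero, the model's initial output is identical to the base model's, and only a fraction of a percent of the parameters are trainable.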
-
I can't find a solution to this:
```
python generate.py --load_8bit --base_model 'decapoda-research/llama-7b-hf' --lora_weights 'tloen/alpaca-lora-7b'
===================================BUG REPORT==…
```
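For context on what `--load_8bit` does conceptually: bitsandbytes stores weights in int8 using absmax quantization. This NumPy sketch of the round-trip is an illustration of the idea, not the actual bitsandbytes kernel:

```python
import numpy as np

def quantize_int8(w):
    # Absmax quantization: scale each row so its largest magnitude maps to 127.
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original weights.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max abs error:", np.abs(w - w_hat).max())
```

The reconstruction error per element is bounded by half the row's scale, which is why 8-bit loading preserves quality well enough for inference while halving memory relative to fp16.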
-
Thanks for sharing your work on quasi-Givens Orthogonal Fine Tuning! I'm excited to try it out but couldn't find instructions on how to use the code. Could you please provide some guidance on:
1. I…
-
Hello,
the fine-tuning process completed successfully; however, when I try to run inference separately by loading the model with the code:
```
import torch
from transformers import AutoModelForCausalLM, Bits…
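```

The truncated import suggests `BitsAndBytesConfig` is being used. For reference, here is a sketch of reloading a LoRA fine-tune for standalone inference — the model ID and adapter path are placeholders, and this needs a GPU, `bitsandbytes`, and network access, so treat it as a configuration sketch rather than a verified snippet:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"   # placeholder base model
adapter_dir = "./lora-adapter"         # placeholder adapter path

bnb = BitsAndBytesConfig(load_in_8bit=True)
tok = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb, device_map="auto"
)
# Attach the trained adapter on top of the quantized base model.
model = PeftModel.from_pretrained(base, adapter_dir)
model.eval()
```

A common pitfall at this step is loading the adapter against a different base checkpoint (or revision) than the one used for fine-tuning, which produces shape or key-mismatch errors.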
-
**Description:**
Add MPT with gradient checkpointing and LoRA support into the OpenThaiGPT pretraining code. We will use MPT with LoRA for continued pretraining in task #179
**To Do:**
1. MPT Weight + MP…
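The combination described above could be wired up roughly as follows. This is a sketch under stated assumptions: the model ID is MosaicML's public MPT-7B checkpoint, and `Wqkv` is the fused attention projection name used in MPT's reference implementation — both should be verified against the actual checkpoint before use:

```python
# Hedged sketch: MPT + gradient checkpointing + LoRA (names are assumptions).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-7b", trust_remote_code=True  # MPT ships custom modeling code
)
model.gradient_checkpointing_enable()  # trade recompute for activation memory
model.config.use_cache = False         # KV cache conflicts with checkpointing

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["Wqkv"],           # assumed MPT attention projection name
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```

Disabling `use_cache` matters because the generation cache and gradient checkpointing are mutually incompatible during training.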
-
(migrated from https://forum.image.sc/t/error-with-gui-fine-tuning/105350)
After
```
(base) alobo2@alobo-ws:/media/alobo2/SP PHD U3/Islandia/Alteration/microsam/finetuning$ mamba activate sam
(sam…
```
-
Hi! Thank you for sharing your great repository!
I've encountered some errors because of a CUDA version difference and because the alpaca-lora weights on Hugging Face are no longer available. I will share my environment a…
-
I'm running into this when attempting to run the Docker install. It specifically mentions that Triton was not found, but I know the Dockerfile includes Triton. It seems like maybe there is a version conflict som…
-
### Issues Policy acknowledgement
- [X] I have read and agree to submit bug reports in accordance with the [issues policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md)
### W…