-
**Describe the bug**
When graphing the LoRa batch data stored in MongoDB, we sometimes get weird noise or spikes in the data that do not appear on the SD card.
For example, here I graphed the same exa…
-
I installed PEFT from source and use the latest versions of Transformers and TRL.
I passed the XLoRA model to TRL, but training doesn't seem to work (the training loss doesn't decrease and the validatio…
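As a first sanity check for a flat training loss (a generic sketch, not specific to XLoRA internals — the toy modules below are illustrative stand-ins, not the real model), it can help to confirm that the adapter parameters are actually trainable before handing the model to TRL; if every parameter is frozen, no gradients flow and the loss cannot decrease:

```python
import torch.nn as nn

def count_trainable(model: nn.Module) -> tuple[int, int]:
    """Return (trainable, total) parameter counts."""
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable, total

# Toy stand-in: a frozen "base" layer plus a trainable "adapter" layer.
base = nn.Linear(16, 16)
adapter = nn.Linear(16, 16)

# Freeze the base weights, as PEFT does for the backbone.
for p in base.parameters():
    p.requires_grad = False

model = nn.Sequential(base, adapter)
trainable, total = count_trainable(model)
print(f"trainable: {trainable} / {total}")
# If trainable == 0, the optimizer has nothing to update.
```

With a real PEFT-wrapped model, `model.print_trainable_parameters()` reports the same information directly.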
-
Hi,
I just discovered your project while looking into how to interface my SIL (Services Industriels de Lausanne) electricity meter with Home Assistant. It's a Landis E450 S5 with an M-Bus port with power sup…
-
Hi all,
I recently updated my version of Forge, which I'd been avoiding because updates had previously broken my installation.
I was on: f0.0.17v1.8.0rc-latest-276-g29be1da7
and have updated to: f2.0.1v1.…
-
- [x] I am running the latest version of this repository (dependencies too)
- [x] I checked the documentation and found no answer
- [x] I checked to make sure that this issue has not already been cr…
-
Error CODE 1:
```
[2024-09-18 00:12:07] [INFO] 2024-09-18 00:12:07 WARNING cache_latents_to_disk is train_util.py:3936
[2024-09-18 00:12:07] [INFO] enabled, so cache_latents is
[2024-…
-
### Describe the issue
Issue: LoRA finetuning with Zero2.json and also Zero3.json. During finetuning, the train and validation losses decrease, but when I inspect the weights in the saved model checkpoint, it h…
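One way to narrow this down (a hedged sketch, using a dummy in-memory checkpoint — the tensor names are illustrative): under ZeRO-3 the parameters are partitioned across ranks, and a state dict saved without gathering them can contain empty or all-zero placeholder tensors. Scanning the checkpoint for such tensors shows whether the full weights were actually written:

```python
import torch

def find_suspect_tensors(state_dict: dict) -> list[str]:
    """Return names of tensors that are empty or entirely zero --
    a common symptom of saving under ZeRO-3 without gathering weights.
    (Note: a freshly initialized lora_B is legitimately all zeros, so
    this check is only meaningful on a checkpoint saved after training.)"""
    suspects = []
    for name, tensor in state_dict.items():
        if tensor.numel() == 0 or not tensor.any():
            suspects.append(name)
    return suspects

# Illustrative checkpoint: one healthy weight, one ZeRO-3-style placeholder.
ckpt = {
    "lora_A.weight": torch.ones(8, 4),
    "lora_B.weight": torch.empty(0),  # partitioned param saved as a stub
}
print(find_suspect_tensors(ckpt))
```

If most tensors show up as suspects, the usual remedy (an assumption about this setup) is to have DeepSpeed gather full weights at save time, e.g. via the `stage3_gather_16bit_weights_on_model_save` option in the ZeRO-3 config.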
-
I really appreciate the YouTube video on finetuning with the Mac M1,
and I can run the finetune successfully on my Mac M1.
```
python scripts/lora.py --model mlx-community/Mistral-7B-Instruct-v0.2-4b…
-
Hello!
First of all, I'd like to thank you for making the program.
It works very well for me and it is great; the speed is fast, and ControlNet is built in.
However, I am having a few issues a…
-
It might be an upstream issue.
I'm using Forge with default settings, except for the resolution. However, I’ve tried most of the samplers and schedulers to fix the problem, but without success. Wha…