Open DEC1MU5 opened 10 months ago
Lol, seems like you forgot to remove the --medvram argument
I've found that I sometimes have to restart my PC to get textual inversions to train faster. After a while (especially if I'm doing several in a row) it starts to slow down and needs to be reset.
Got same problem here, did you manage to find out what it was?
Same problem, same settings: 4070 Ti, 5 vectors, 15 pics, batch size 8, gradient steps 2.
3 h in and only ~8% done; that can't be right...
[edit] I went through every tutorial step again and noticed that in Settings -> Training the following was not checked:
After ticking it and saving, the setting was applied this time and my ETA went down to 35-40 minutes.
Is there an existing issue for this?
What happened?
I created a fresh Automatic1111 install tonight with the latest version, with no extensions or anything else I can think of that could mess up my trainings. Updated Torch, xformers, and pip.
It's telling me things like it will take DAYS to train where it used to take 40 minutes to 4 hours.
My Specs are Win 10, AMD Ryzen 5900, Nvidia 3060ti 8GB, 64 GB System Ram.
My command line args are:
set COMMANDLINE_ARGS= --xformers --disable-nan-check --medvram
The TI I created is "FILENAME", with initialization text: photo of a woman with brunette hair
Number of vectors per token 8
and my training parameters are:
New TI created of a woman: "FILENAME"
embedding learning rate: 0.05:10, 0.02:20, 0.01:60, 0.005:200, 0.002:500, 0.001:3000, 0.0005
hypernetwork learning rate: 0.00001
gradient clipping disabled
batch size 8, gradient accumulation steps 3 (my processed dataset dir has 24 512x512 images; 8x3=24)
log directory: "textual_inversion"
prompt template: "Custom_Subject_filewords"
The template I use contains only this: a photo of a [name], [filewords]
width/height: 512
SD model loaded: v1-5-pruned
Do not resize images: unticked
max steps 3000
save image and save an embedding copy both set to 50
Use PNG alpha channel as loss weight UNTICKED
Save images with embedding in PNG chunks TICKED
Read parameters (prompt, etc...) from txt2img tab when making previews UNTICKED
Shuffle tags by ',' when creating prompts. TICKED
Drop out tags when creating prompts. 0.1
Choose latent sampling method (Deterministic)
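For clarity, here's how I understand my own settings: the learning-rate field is a stepped schedule of rate:until_step pairs (a trailing bare rate runs until max steps), and batch size × gradient accumulation steps is the number of images consumed per optimizer step. A quick sketch of that arithmetic (my own illustration, not the webui's actual parser):

```python
# Illustrative sketch only -- not the webui's real schedule parser.

def parse_schedule(spec: str):
    """Parse 'rate:until_step' pairs; a trailing bare rate runs to the end."""
    pairs = []
    for chunk in spec.split(","):
        chunk = chunk.strip()
        if ":" in chunk:
            rate, until = chunk.split(":")
            pairs.append((float(rate), int(until)))
        else:
            pairs.append((float(chunk), None))  # applies until max steps
    return pairs

def rate_at(pairs, step):
    """Return the learning rate in effect at a given global step."""
    for rate, until in pairs:
        if until is None or step <= until:
            return rate
    return pairs[-1][0]

schedule = parse_schedule(
    "0.05:10, 0.02:20, 0.01:60, 0.005:200, 0.002:500, 0.001:3000, 0.0005"
)
print(rate_at(schedule, 1))    # 0.05  (steps 1-10)
print(rate_at(schedule, 100))  # 0.005 (steps 61-200)

# Batch arithmetic from my settings: 8 * 3 = 24 images consumed per
# optimizer step, which exactly covers the 24-image dataset -> 1 step/epoch.
batch_size, grad_accum, dataset_len = 8, 3, 24
effective_batch = batch_size * grad_accum
print(effective_batch)                 # 24
print(dataset_len // effective_batch)  # 1 step per epoch
```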
It takes about 10 minutes to do one step, and the ETA just keeps growing until it reaches hundreds of hours.
The console output looks like this:
Preparing dataset...
100%|██████████████████████████████████████████████████████████████████████████████████| 48/48 [00:02<00:00, 17.51it/s]
Training textual inversion [Epoch 1: 1/1] loss: 0.0945899: 0%| | 1/3000 [01:45<87:48:03, 105.40s/it]
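The ETA itself is internally consistent, so it's really the per-step time that's pathological: at 105.40 s/it, the remaining 2999 steps work out to about 87.8 hours, which matches the 87:48:03 tqdm shows. Quick check:

```python
# Sanity-check the tqdm ETA from the log line above.
seconds_per_it = 105.40
remaining_steps = 3000 - 1
eta_hours = seconds_per_it * remaining_steps / 3600
print(f"{eta_hours:.1f} h")  # ~87.8 h, matching the 87:48:03 shown
```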
It's driving me mad, what happened? It used to be fine.
If you need any more info, I'll do my best to help you help me.
Thanks.
Steps to reproduce the problem
(Same setup and training parameters as listed above.)
What should have happened?
Training usually takes me 40 minutes to 8 hours depending on my dataset. It's now taking DAYS: about 10 minutes per step.
Version or Commit where the problem happens
version: v1.5.1 • python: 3.10.6 • torch: 2.0.1+cu118 • xformers: 0.0.20 • gradio: 3.32.0 • checkpoint: e1441589a6
What Python version are you running on ?
Python 3.10.x
What platforms do you use to access the UI ?
Windows
What device are you running WebUI on?
Nvidia GPUs (RTX 20 above)
Cross attention optimization
xformers
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
List of extensions
None. Fresh install, just for TI training.
Console logs
Additional information
It just takes unbelievably long, when it used to take a normal amount of time for my specs.