VladAndronik opened 1 year ago
Are you using the latest colab?
I have the exact same bug... just tried 5 minutes ago.
confirmed broken, many undefined vars
I'll move the old method to a new colab
which vars exactly ?
every variable passed to train_dreambooth.py
regardless of prior preservation
only one var blocked it but I can't find which one
Now trying with an older rev, 4155e9ccd7f322f70be5314f68222ca6b3f65343; I'll report back in a few minutes if it works.
Confirmed 4155e9ccd7f322f70be5314f68222ca6b3f65343 as working, so it broke between this and the most recent commits. Anyway, splitting the "old method" into a separate notebook makes total sense.
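Until the regression is fixed, one way to pin a local clone to the working revision mentioned above (a sketch; it assumes git is on PATH and the function name is mine, not part of the repo):

```python
import subprocess

# Commit reported as working earlier in this thread
GOOD_REV = "4155e9ccd7f322f70be5314f68222ca6b3f65343"

def pin_repo(repo_dir: str, rev: str = GOOD_REV) -> None:
    """Check out a known-good revision before running the notebook cells."""
    subprocess.run(["git", "-C", repo_dir, "checkout", rev], check=True)
```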
where can I find a notebook that works?
If you want to use the old method: https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/Dreambooth/fast_DreamBooth-Old-Method.ipynb
New method (better one): https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb
OK, so on the NEW METHOD page there is no CLASS images upload section...??? Just asking... I have no problem using the OLD method, since I had GREAT results using prior preservation and Captioned_instance_images. So there is no need to use CLASS pictures with the NEW method?
If you set "contains_faces" to male, female, or both, it will use a method of prior preservation applied only to the text encoder, which yields better results in my experience. But you need to rename your input images correctly, as the example photo shows.
OK, I tried the NEW method and the OLD one, both with everything as instructed. The results on the NEW one are a MORPHED version of me and my friend. On the older colab I did the SAME training and the results were completely accurate. The morphed version needs some PROMPT HELP to get to my actual face, and even then it's not as accurate as the older method; in BOTH I used prior preservation and captioned instances. The morphed version looks NICE but it's not always me; on the old version it's ALWAYS ME. So please never remove the old method page, I appreciate it, and many others will!
What did you name the instance pictures in the new method?
One of them is 4ZEEEddie_man_08.jpg. They work flawlessly on the OLD method, but on the NEW one the output does NOT look like me; it has characteristics of me, like glasses and beard, but it's NOT me... same with my friend. 71ZEEEstefania_woman_04.jpg works great.
You didn't follow the most important rule of the new method: never use a known name or class word. You used eddie and man, so you turned the new method into a bad version of the old method.
I wrote it in giant font so that no one would miss it:
The most important step is to rename the instance pictures to the same unique instance identifier for each subject
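A minimal sketch of that renaming step (the identifier and the exact filename pattern the notebook expects are assumptions; use one made-up word per subject, never a real name or class word):

```python
import os
import glob

def rename_instance_images(folder: str, identifier: str) -> None:
    """Give every image of one subject the same unique identifier,
    numbered: e.g. 'zkrx (1).jpg', 'zkrx (2).jpg', ...
    'identifier' should be a made-up word, not a known name or a
    class word like 'man' or 'woman'."""
    images = sorted(glob.glob(os.path.join(folder, "*")))
    for n, path in enumerate(images, start=1):
        ext = os.path.splitext(path)[1]
        os.rename(path, os.path.join(folder, f"{identifier} ({n}){ext}"))
```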
Those instructions were for the old method; the new method's instructions are written in the new method cell.
OK, don't worry about that, it seems I'm mixing METHODS, sorry. I now have a question about Enable_text_encoder_training: is it necessary for CAPTIONED INSTANCES, or does it have nothing to do with that? What is it for?
Keep it between 10-20% if you want an easy style transfer; if you want quick results at lower steps, push it to 100%.
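That percentage translates into steps like this (a small sketch; the function name is mine, not the notebook's):

```python
def text_encoder_steps(total_steps: int, pct: float = 0.15) -> int:
    """Text-encoder training steps as a fraction of the total steps.
    Use roughly 0.10-0.20 for an easy style transfer; 1.0 for quick
    results at lower step counts."""
    return round(total_steps * pct)

# 3000 total steps at 15% -> 450 text-encoder steps
```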
you skipped the dependencies cell
the second cell is the dependencies cell
OK, it got fixed somehow, but I am running it now with Enable_text_encoder_training activated at 500 steps...
@LIQUIDMIND111 did you end up getting better results after doing the new method correctly?
Not really, I get fat and ugly and NOT my actual face... On the OLD method I get perfect results. But I noticed that my instance names have NUMBERS, and the instance names shown at training time are missing THOSE numbers. What I believe is that you CANNOT mix numbers with letters in INSTANCE NAMES. Not sure why, but I got BETTER results when NOT using numbers in the instance names, like VREWGVEG(1).jpg compared to 345GFGFD43(1).jpg.
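If digits in identifiers really do get dropped at training time as described above, a letters-only identifier sidesteps the problem. A hypothetical sanitizer (my own helper, not part of the notebook):

```python
import re

def letters_only(identifier: str) -> str:
    """Strip digits so the instance name survives training intact."""
    return re.sub(r"\d", "", identifier)

# "345GFGFD43" -> "GFGFD"
```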
Is the old colab gone? =( Any idea where I can find it?
Try with the new one, if you're not getting good results, I'll walk you through it.
That's really sad you removed the old one, since we were getting BETTER results than with the NEW one... ouch, that hurts.
@LIQUIDMIND111 did you end up getting better results after doing the new method correctly?
NOT BETTER... the old method looked better; the new one ONLY worked SOMEWHAT if I used the text encoder at 100%.
@TheLastBen I can't seem to get results as good as with the old version.
set "contains_faces" to "No", set the textencoder to 15% the total steps, and rename the images to one unique non-existing word, and you will get almost perfect results
@TheLastBen, I've tested on men, women, dogs, and art styles across 20+ models, and there's just too much trial and error to get something decent. The old method with class and reg images seems to be a lot more consistent. Would you consider adding class/reg images back into the new method as an option, so we can switch off the text encoder and use that instead?
using the terms man or woman or art style as instance names will ruin the training
Anyway, I just added a concept images feature, a more suitable regularization for SD.
The new colab worked ONCE for me. I have spent the last two days going completely insane trying to troubleshoot it :( :( :(
What issue are you facing?
Hi Ben, thank you so much for your quick reply. Right now I am totally baffled, but I promise I will return in a day or two with a detailed description, possibly for reproduction; maybe it can help someone else. I actually have it working now, but I used so many sessions on Google Colab that I ran out and had to buy some.
Now I have to sleep, heh.
Thanks again!
I kept getting errors before the initial generation of class images: "dreambooth something went wrong returned non-zero exit status 1". I couldn't see much helpful info in the rest of the error report, although there may have been some.
After pulling out some 50 meters of hair and losing 5 kg, I found it was related to the connection to Hugging Face. At least that is my best guess. I'm having problems with my internet and only have about 1.5 Mbit of bandwidth at the moment, so I cannot test.
When I removed the Hugging Face token, it worked. It asked for it once, and I supplied it; the next time, it didn't ask. I generated a new token before trying that, tried a bunch of different SD models, and basically every other suggested solution I could find on the net.
Sorry I can't get more specific right now.
Trying the old method with default settings, I got this error: