Open Linaqruf opened 1 year ago
I can't think of anything right now but will keep it in mind as I'm using it. Mainly came to say thanks for creating this notebook. It's the best out there.
same result here:
FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/lib/python3.9/dist-packages/bitsandbytes/cuda_setup/main.py'
I wanted to try it out, but it had the same issue as the main one a bit earlier,
FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/lib/python3.9/dist-packages/bitsandbytes/cuda_setup/main.py'
- But I glanced over everything, and personally I don't need the 5.5 Sampler; it usually generates crap anyway, and you can edit it regardless. While at it, I have never used anything from the Huggingface section (7).
- Also, I would really like one more source like r34, and some would even like e926 or e621. IIRC, you also need to log in to boorus to use more than 2 tags there, so you could also display that to avoid confusion.
- The 5.4 Optimizer section could use a few informational links.
I really wanted to try out LoKr, so that's nice
I'm a noob, so how do I fix this bc I'm getting the same here....
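For anyone stuck on the same FileNotFoundError: the path suggests the installed bitsandbytes build is missing its `cuda_setup/main.py`. A small diagnostic sketch (a hypothetical helper, not part of the notebook) to check what is actually installed before trying a reinstall or version pin:

```python
import importlib.util
from pathlib import Path

def bitsandbytes_cuda_setup_present() -> bool:
    """Return True if the installed bitsandbytes package ships cuda_setup/main.py."""
    spec = importlib.util.find_spec("bitsandbytes")
    if spec is None or spec.origin is None:
        return False  # package not installed at all
    pkg_dir = Path(spec.origin).parent
    return (pkg_dir / "cuda_setup" / "main.py").is_file()

if __name__ == "__main__":
    if bitsandbytes_cuda_setup_present():
        print("cuda_setup/main.py found")
    else:
        print("cuda_setup/main.py missing -- the installed build may need reinstalling/pinning")
```

If the file is missing, the installed bitsandbytes version simply doesn't match what the notebook's patch step expects.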
Hey! I love the notebook and I have successfully released 20 LoHas using it. But I have a problem: at some point, the resume-training option for LoRAs in the fine-tuner notebook broke, and I have a big dataset of 960 images (a multi-concept theme) that can't be trained in a single run. I need to train it for 20 epochs, but after around 4 hours the Colab limit is reached and it kicks me out at epoch 8. When I try to run with resume training, it "loads" the LoRA but prints to the console output that no weights were loaded and starts training from scratch.
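In case it helps with debugging: in kohya's sd-scripts, resuming the optimizer/scheduler/step state requires `--save_state` on the earlier run and `--resume <state_dir>` on the later one; passing only the saved `.safetensors` via `--network_weights` initializes the weights but restarts the schedule, which would look a lot like "training from scratch". A sketch of the two invocations (all paths hypothetical):

```python
from typing import Optional

# Sketch: building the train_network.py invocation for a first run vs. a
# resumed run. --save_state makes the trainer write a resumable state dir;
# --resume restores optimizer/scheduler/step state from it.
def build_cmd(resume_state: Optional[str] = None) -> list[str]:
    cmd = [
        "accelerate", "launch", "train_network.py",
        "--dataset_config", "dataset_config.toml",
        "--save_state",                      # write resumable state at each save
    ]
    if resume_state is not None:
        cmd += ["--resume", resume_state]    # e.g. "output/my-lora-state"
    return cmd

first_run = build_cmd()
second_run = build_cmd("output/my-lora-state")
```

If the notebook's resume option only ever passes the LoRA file and not a state directory, that could explain the behavior described above.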
edit: The current alpha seems to have received a couple of pretty big changes and some of my comments are not applicable anymore. I still need to figure out how to configure my previous setup again.
I've only just started a training session and will need to evaluate the results later. Some things I noticed already, though:
- `-1` for `min_snr_gamma` and `noise_offset` seems to deactivate the corresponding functions. This isn't very transparent. For example, the description for `min_snr_gamma` tells us that lower numbers have a stronger effect, which might confuse users, as `-1` is pretty low.
- `noise_offset = 0.1` is recommended, but it isn't set as the default?
- "NovelAI and all modern Stable Diffusion model trains at `clip_skip = 2`". I assume this refers to SD2.0 / SD2.1, as I was under the impression that SD1.5 was using `clip_skip = 1`? If that's correct, then the description should maybe include that recommendation?
- On the positive side: `dataset_config.toml` 👍

(Some of these issues are probably not new to the Alpha)
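The "-1 disables it" convention could be made explicit in the notebook cell itself. A minimal sketch (helper name hypothetical) of mapping the sentinel to an unambiguous `None` before the trainer arguments are assembled:

```python
from typing import Optional

def sentinel_to_none(value: float, disabled: float = -1) -> Optional[float]:
    """Map the notebook's 'set -1 to disable' convention to an explicit None."""
    return None if value == disabled else value

min_snr_gamma = sentinel_to_none(-1)   # disabled -> None, never forwarded
noise_offset = sentinel_to_none(0.1)   # stays 0.1, forwarded to the trainer

# Only pass flags for options that are actually enabled.
args = []
if min_snr_gamma is not None:
    args += ["--min_snr_gamma", str(min_snr_gamma)]
if noise_offset is not None:
    args += ["--noise_offset", str(noise_offset)]
```

This way the "lower numbers have a stronger effect" description never has to carry the special-case meaning of `-1`.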
I will add items as I find them.
Hello! The Colab issues a warning about the execution of forbidden code and then disconnects. I chose the option: Linaqruf/kohya-trainer (forked repo, stable, optimized for Colab use). Apparently Google thinks that Automatic1111 code is being executed.
I feel like the quality of my notebook has been getting worse over the past three months, since I declared a hiatus from the Stable Diffusion community in February. So, I want to make sure that I do the right thing before I release this to the `main` branch.

The main points about this update are:

- `class_token` is undefined.
- `["LoRA_LierLa", "LoRA_C3Lier", "DyLoRA_LierLa", "DyLoRA_C3Lier", "LoCon", "LoHa", "IA3", "LoKR"]`.
- I've only tried `LoRA_LierLa` and `LoRA_C3Lier` so far, so I need your help with any recommended parameters.
- Sample Prompt Config is back.
- Dropped clone/push to Github, as Huggingface Hub is already powerful.

You can find the alpha release version at these links:

You can also give suggestions in this Issue. Thank you!
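Since the list of network categories above is long and easy to mistype in a form field, a tiny validation sketch (hypothetical, not part of the notebook) that fails early with the allowed values instead of deep inside training:

```python
# The category list as given in the update notes above.
NETWORK_CATEGORIES = [
    "LoRA_LierLa", "LoRA_C3Lier", "DyLoRA_LierLa", "DyLoRA_C3Lier",
    "LoCon", "LoHa", "IA3", "LoKR",
]

def validate_network_category(choice: str) -> str:
    """Raise immediately on an unknown category, listing the valid options."""
    if choice not in NETWORK_CATEGORIES:
        raise ValueError(
            f"unknown network category {choice!r}; expected one of {NETWORK_CATEGORIES}"
        )
    return choice
```

Failing at the top of the cell would save a Colab user a wasted session when they pick a value the trainer doesn't recognize.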