rockerBOO / lora-inspector

LoRA (Low-Rank Adaptation) inspector for Stable Diffusion
MIT License

parameter question #6

Open robertJene opened 1 year ago

robertJene commented 1 year ago

Since you referenced Zyin055's repo, "Inspect-Embedding-Training":

Which output parameter shows me that the strength is too high (overtrained / inflexible)?

I use his script for embeddings with great success.

Forgive me if I sound like a noob, but which value am I looking for? On what parameter?

For the referenced embedding script, we look for the strength not to exceed 0.2.
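
For context, this is roughly the check I rely on: a minimal sketch that reads an A1111-style embedding and flags vectors whose mean absolute weight goes over 0.2. The formula is my reading of Zyin055's script (verify against his repo), and the file name is just a placeholder.

```python
# Sketch of the embedding "strength" check, assuming strength is the
# mean absolute value of each vector's weights (my reading of
# Inspect-Embedding-Training, not its exact code).
import torch

def report_embedding_strength(path: str) -> None:
    # A1111 embeddings are pickled, so allow full loading for a file you trust.
    data = torch.load(path, map_location="cpu", weights_only=False)
    # Tensors live under "string_to_param", shaped (num_vectors, dim).
    for name, tensor in data["string_to_param"].items():
        for i, vector in enumerate(tensor):
            strength = vector.abs().mean().item()
            flag = "  <-- possibly overtrained" if strength > 0.2 else ""
            print(f"{name}[{i}]: strength={strength:.4f}{flag}")

report_embedding_strength("my_embedding.pt")  # placeholder path
```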

rockerBOO commented 1 year ago

I think it is a little tougher to analyze because there are far more vectors than in an embedding. I can't say which values are best, or whether I'm effectively measuring the right number.

For example, I rarely see the strength go above roughly 0.01. Magnitude seems to be closer to the embedding values (a larger magnitude might make it less flexible). I haven't done enough analysis of these numbers to make any estimates, though.
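
As a rough sketch of the idea (simplified, not the script's exact code): here magnitude is the Frobenius norm of each LoRA weight tensor and strength is its mean absolute value, with a placeholder file name.

```python
# Simplified per-tensor metrics for a LoRA saved as safetensors:
# magnitude = Frobenius norm, strength = mean absolute value.
import torch
from safetensors import safe_open

with safe_open("my_lora.safetensors", framework="pt", device="cpu") as f:
    for key in f.keys():
        if not key.endswith(".weight"):  # skip the scalar .alpha entries
            continue
        w = f.get_tensor(key).float()
        magnitude = torch.linalg.norm(w).item()
        strength = w.abs().mean().item()
        print(f"{key}: magnitude={magnitude:.4f} strength={strength:.6f}")
```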

robertJene commented 1 year ago

OK, what are UNet vs Text Encoder in the output? EDIT: I am working on a YouTube video and will be referencing your script, because it's the only tool like this so far.

robertJene commented 1 year ago

@rockerBOO I'm going to be doing a video about LoRAs and will be using your Python script. Do you have a Twitter/Instagram/Facebook/YouTube for me to plug, or do you just want me to use this repo?

rockerBOO commented 1 year ago

Referencing this repo is fine.

For your previous question about UNet vs text encoder: the model is made up of different neural networks that work together to turn text into images (they should produce compatible embeddings). The UNet handles the pixel/image side of generation, and the text encoder produces the embeddings that guide what to draw.

You can train only the UNet, only the Text Encoder, or both. I separated them so it would be clear, and also to show any cases where the Text Encoder or UNet has much higher magnitudes.
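
In kohya-style files the split is visible right in the key names: UNet tensors are prefixed `lora_unet_` and text encoder tensors `lora_te_` (SDXL files use `lora_te1_`/`lora_te2_`). A sketch of comparing the two halves, again with a placeholder file name:

```python
# Group LoRA tensors by UNet vs Text Encoder prefix and compare
# average magnitudes, so one half being much "hotter" stands out.
from collections import defaultdict

import torch
from safetensors import safe_open

norms = defaultdict(list)
with safe_open("my_lora.safetensors", framework="pt", device="cpu") as f:
    for key in f.keys():
        if not key.endswith(".weight"):
            continue
        group = "unet" if key.startswith("lora_unet_") else "text_encoder"
        norms[group].append(torch.linalg.norm(f.get_tensor(key).float()).item())

for group, values in norms.items():
    print(f"{group}: avg magnitude={sum(values) / len(values):.4f} "
          f"({len(values)} tensors)")
```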

robertJene commented 1 year ago

Thank you for your response. What you say makes sense, because this week I was studying LoHa and LoCon (LyCORIS), which train both the UNet and the text encoder.