[Closed] nikolaiusa closed this issue 6 months ago
In the 'Settings' tab, choose the 'kd30' model and the 'diffusers' pipeline. Apply the settings and try to generate something. If it runs out of memory or runs too slowly, try enabling sequential offloading (https://github.com/seruva19/kubin/wiki/Docs#kandinsky-3). I have 64 GB RAM + an RTX 3090, and cannot run diffusers-K3 without sequential offloading.
Another way is to switch to the 'native' pipeline, which activates some optimizations by default and fits into 12 GB VRAM.
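For intuition, the idea behind sequential offloading can be mocked in a few lines. This is a pure-Python sketch, not the real implementation - in diffusers, `pipe.enable_sequential_cpu_offload()` achieves the same effect via accelerate hooks on real torch modules:

```python
# Pure-Python illustration of sequential CPU offloading: each stage lives
# on the CPU and is moved to the accelerator only for its own forward pass,
# so peak "VRAM" is one stage at a time instead of the whole model.
class Stage:
    def __init__(self, name):
        self.name = name
        self.device = "cpu"

    def forward(self, x):
        return x + 1  # stand-in for the real computation

def run_offloaded(stages, x, trace):
    for stage in stages:
        stage.device = "cuda"                  # move only this stage to GPU
        trace.append((stage.name, stage.device))
        x = stage.forward(x)
        stage.device = "cpu"                   # evict it before the next stage
    return x
```

This is also why generation gets slower with offloading enabled: the constant CPU↔GPU transfers trade speed for memory.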
I get a CUDA OOM error - maybe I need to enable some other options in the diffusers menu?
I downloaded this version - https://huggingface.co/kandinsky-community/kandinsky-3
I turned on 'Enable sequential CPU offload' and get this speed on a 4090 at 768x768:
32%|██████████████████████████▏ | 16/50 [00:58<01:49, 3.23s/it]
Is this normal generation speed?
With sequential offloading, yes, this is normal. You can also try running a 'native' pipeline, it might be faster.
Can I use multiple GPUs?
Unfortunately, this is not implemented.
With sequential offloading, yes, this is normal. You can also try running a 'native' pipeline, it might be faster.
Yes, I tried it; the native one works faster.
To start generating, I need the Internet - is it possible to use it offline? (It probably checks versions or something like that.)
Traceback (most recent call last):
  File "I:\ANACONDA3\envs\KANDINSKIY\lib\site-packages\urllib3\connection.py", line 198, in _new_conn
    sock = connection.create_connection(
  File "I:\ANACONDA3\envs\KANDINSKIY\lib\site-packages\urllib3\util\connection.py", line 60, in create_connection
    for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
  File "I:\ANACONDA3\envs\KANDINSKIY\lib\socket.py", line 955, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed
urllib3.exceptions.NameResolutionError: <urllib3.connection.HTTPSConnection object at 0x000001DA595C3D00>: Failed to resolve 'huggingface.co' ([Errno 11001] getaddrinfo failed)
native vs. diffusers: 1 min vs. 3 min per generation (1280x1024, 50 steps) on my PC
You need Internet on the first run to download all the models, but on subsequent runs it should work offline as well.
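If version checks still hit the network after the first run, one possible workaround (an assumption on my side, not a documented kubin feature) is to force the Hugging Face libraries into offline mode before launching the app:

```python
# Force the Hugging Face stack to use only the local cache; these must be
# set before huggingface_hub / transformers / diffusers are imported.
import os

os.environ["HF_HUB_OFFLINE"] = "1"        # huggingface_hub: never hit the network
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # transformers: resolve from cache only
```

With these set, any attempt to download a missing file raises an error immediately instead of trying to resolve huggingface.co.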
Thanks for the help. I also plan to try training a LoRA on K2.2.
I trained a LoRA but I don't see it - do I need to move the files?
I found this in the file config.default.yaml:
lora_path: networks/lora;train/lora
lora_prior_pattern: 'lora_prior.bin;lora_prior.safetensors'
lora_decoder_pattern: 'lora_decoder.bin;lora_decoder.safetensors'
autopair_lora_models: false
If I set 'Convert to safetensors', I get this error:
File "I:\ANACONDA3\envs\KANDINSKIY\lib\site-packages\torch\serialization.py", line 416, in __init__
    super().__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'train/lora\lora_prior1-50\pytorch_model.bin'
I'm a little confused, could you help? What format should the LoRA be in, and where should it be located?
1 - I prepared a dataset (dataset.csv)
2 - these are my parameters (50 steps, for testing)
3 - after training I see this:
lora prior training progress: 100%|██████████████████████████████████████████████████████████████████████████████████| 50/50 [00:09<00:00, 5.45it/s, lr=1e-5, step_loss=0.0418]
lora prior training progress: 100%|██████████████████████████████████████████████████████████████████████████████████| 50/50 [00:09<00:00, 5.46it/s, lr=1e-5, step_loss=0.0418]
training of LoRA prior model completed
4 - and these folders
5 - but I don't see the LoRA here
The default folders for LoRA files are "networks/lora" and "train/lora". Prior LoRA weights must contain "lora_prior" in the filename, and decoder weights must contain "lora_decoder" (this can be changed in the 'kd-training' settings). So rename "model.safetensors" to something like "lora_prior_model.safetensors" and it should appear in the list.
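Illustratively, the resolution logic described here boils down to scanning the ';'-separated folders and keeping files whose names contain one of the pattern stems. This is a sketch of the idea, not kubin's actual code:

```python
from pathlib import Path

def find_lora_files(lora_dirs: str, patterns: str) -> list:
    """Collect files under the ';'-separated dirs whose names contain any
    of the ';'-separated pattern stems (e.g. 'lora_prior')."""
    stems = {Path(p).stem for p in patterns.split(";")}  # -> {'lora_prior'}
    found = []
    for d in lora_dirs.split(";"):
        root = Path(d)
        if not root.is_dir():
            continue  # missing folders are simply skipped
        for f in sorted(root.rglob("*")):
            if f.is_file() and any(s in f.name for s in stems):
                found.append(f)
    return found
```

Under this scheme, "model.safetensors" matches nothing, which is why it stays invisible until the keyword is in the filename.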
Thanks, it worked.
I wanted to ask a few questions, if you don't mind answering:
1 - Is it possible to adjust the strength of a LoRA?
2 - Is it possible to mix more than two images in mix mode?
3 - Kandinsky 3 works without the Internet, but my Kandinsky 2.2 requires an Internet connection to start generation; ControlNet 2.2 does too (the first time kubin is started).
1 - Is it possible to adjust the strength of a LoRA?
I used the diffusers implementation of the Kandinsky 2.2 pipeline, and it did not expose parameters for adjusting the LoRA scale (at least at the time I implemented it). I don't think it would be difficult to add with newer versions of diffusers, but I haven't experimented much in this direction, and it would likely require rewriting the code responsible for using LoRA models.
2 - Is it possible to mix more than two images in mix mode?
Currently not possible. Maybe I'll implement it later; it would be pretty easy.
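For context on why it would be easy: mixing in Kandinsky amounts to interpolating the prior's image/text embeddings, so generalizing from two inputs to N is just a normalized weighted sum. A numpy sketch of the idea (not kubin's actual code; assumes all embeddings share one shape):

```python
import numpy as np

def mix_embeddings(embeds, weights):
    """Blend N embeddings with weights normalized to sum to 1."""
    w = np.asarray(weights, dtype=np.float32)
    w = w / w.sum()                              # e.g. [3, 1] -> [0.75, 0.25]
    return sum(wi * e for wi, e in zip(w, embeds))
```

The blended embedding is then fed to the decoder exactly as a single image's embedding would be.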
3 - Kandinsky 3 works without the Internet, but my Kandinsky 2.2 requires an Internet connection to start generation; ControlNet 2.2 does too (the first time kubin is started).
That's strange, I'll check it later.
Mixing 4 images is my dream :)
OK, I'll put this feature at the top of my TODO list :)
I have 64 GB RAM and two cards: a 3090 (24 GB) + a 4090 (24 GB). I installed kubin and it works fine on K2.2.
Can you tell me which version of K3 I should download and what settings I should use to launch it?