seruva19 / kubin

Web-GUI for Kandinsky text-to-image diffusion models.
175 stars · 18 forks

Please tell me the instructions for starting K3. #172

Closed nikolaiusa closed 6 months ago

nikolaiusa commented 7 months ago

I have 64 GB RAM and two cards: a 3090 (24 GB) and a 4090 (24 GB). I installed kubin and it works fine with K2.2.

Can you tell me which version of K3 I should download and which settings I should use to launch it?

seruva19 commented 7 months ago

In the 'Settings' tab, choose the 'kd30' model and the 'diffusers' pipeline. Apply the settings and try to generate something. If it runs out of memory or runs too slowly, try enabling sequential offloading (https://github.com/seruva19/kubin/wiki/Docs#kandinsky-3). I have 64 GB RAM + an RTX 3090, and cannot run diffusers-K3 without sequential offloading.

Another way is to switch to the 'native' pipeline, which activates some optimizations by default and fits into 12 GB VRAM.
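The offloading setup described above can be sketched with diffusers directly. This is a hedged sketch, not kubin's internals: the model id is the one linked later in this thread, and the VRAM-threshold helper is my own heuristic.

```python
# Sketch: loading Kandinsky 3 via diffusers with sequential CPU offload.
# The 24 GB default below is an assumption about the fp16 model footprint,
# not a figure from kubin.

def needs_sequential_offload(free_vram_gb: float, model_vram_gb: float = 24.0) -> bool:
    """Heuristic: offload when the model would not fit in free VRAM."""
    return free_vram_gb < model_vram_gb

def load_kandinsky3(offload: bool = True):
    # heavy imports kept local so the helper above works without torch installed
    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "kandinsky-community/kandinsky-3",
        variant="fp16",
        torch_dtype=torch.float16,
    )
    if offload:
        # moves each submodule to the GPU only while it runs; slow but low-VRAM
        pipe.enable_sequential_cpu_offload()
    else:
        pipe = pipe.to("cuda")
    return pipe
```

`enable_sequential_cpu_offload()` trades speed for memory, which matches the slowdown reported below.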

nikolaiusa commented 7 months ago

I get a CUDA OOM error - maybe I need to enable some other options in the diffusers menu?

nikolaiusa commented 7 months ago

I downloaded this version - https://huggingface.co/kandinsky-community/kandinsky-3

nikolaiusa commented 7 months ago

I turned on 'Enable sequential CPU offload' and I get this speed on the 4090 at 768x768:

32%|██████████████████████████▏ | 16/50 [00:58<01:49, 3.23s/it]

nikolaiusa commented 7 months ago

Is this normal generation speed?

seruva19 commented 7 months ago

With sequential offloading, yes, this is normal. You can also try running a 'native' pipeline, it might be faster.
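As a quick sanity check on the rate reported above, the total sampling time implied by a tqdm line is just steps times seconds per iteration:

```python
def estimate_total_seconds(total_steps: int, seconds_per_iter: float) -> float:
    """Total sampling time implied by a tqdm rate like '3.23s/it'."""
    return total_steps * seconds_per_iter

# 50 steps at 3.23 s/it -> about 161.5 s, i.e. roughly 2.7 min per 768x768 image
```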

nikolaiusa commented 7 months ago

Can I use multiple GPUs?

seruva19 commented 7 months ago

Unfortunately, this is not implemented.

nikolaiusa commented 7 months ago

> With sequential offloading, yes, this is normal. You can also try running a 'native' pipeline, it might be faster.

Yes, I tried it; the native one works faster.

To start generating I need the Internet - tell me, is it possible to use it offline? (It probably checks versions or something like that.)

Traceback (most recent call last):
  File "I:\ANACONDA3\envs\KANDINSKIY\lib\site-packages\urllib3\connection.py", line 198, in _new_conn
    sock = connection.create_connection(
  File "I:\ANACONDA3\envs\KANDINSKIY\lib\site-packages\urllib3\util\connection.py", line 60, in create_connection
    for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
  File "I:\ANACONDA3\envs\KANDINSKIY\lib\socket.py", line 955, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed
urllib3.exceptions.NameResolutionError: <urllib3.connection.HTTPSConnection object at 0x000001DA595C3D00>: Failed to resolve 'huggingface.co' ([Errno 11001] getaddrinfo failed)

nikolaiusa commented 7 months ago

native vs diffusers: 1 min vs 3 min for one 1280x1024 generation at 50 steps on my PC.

seruva19 commented 7 months ago

> yes I tried it, the native one works faster.
>
> To start generating, i need the Internet - tell me, is it possible to use it offline? (probably to check versions or something like that)

You need Internet on the first run to download all the models, but on subsequent runs it should work offline as well.
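For reference, huggingface_hub and diffusers expose two mechanisms for offline use once models are cached: the `HF_HUB_OFFLINE` environment variable and the `local_files_only` argument to `from_pretrained()`. A small sketch (the helper name is mine, not kubin's):

```python
import os

def offline_load_kwargs(offline: bool) -> dict:
    """kwargs to pass to from_pretrained() so only cached models are used."""
    if offline:
        # both mechanisms are honored by huggingface_hub / diffusers
        os.environ["HF_HUB_OFFLINE"] = "1"
        return {"local_files_only": True}
    return {}
```

With either mechanism active, a missing cache entry raises an error instead of triggering the DNS lookup seen in the traceback above.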

nikolaiusa commented 7 months ago

[image: 20240305133842_A_kangaroo_holding_a_beer] Thanks for the help. I also plan to try making a LoRA on K2.2.

nikolaiusa commented 7 months ago

I trained a LoRA but I don't see it - do I need to move the files?

nikolaiusa commented 7 months ago

I found this in the file config.default.yaml:

lora_path: networks/lora;train/lora
lora_prior_pattern: 'lora_prior.bin;lora_prior.safetensors'
lora_decoder_pattern: 'lora_decoder.bin;lora_decoder.safetensors'
autopair_lora_models: false
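Those config values are semicolon-separated lists. A sketch of how such a config could be scanned; the "filename contains the pattern stem" matching is my assumption based on the maintainer's explanation, not kubin's exact code:

```python
from pathlib import Path

def parse_multi(value: str) -> list:
    """Split kubin-style semicolon-separated config values into a list."""
    return [part.strip() for part in value.split(";") if part.strip()]

def find_lora_files(lora_path: str, pattern: str) -> list:
    """List files under every configured folder whose name contains a pattern stem.

    Assumption: matching is by substring of the filename (e.g. 'lora_prior'),
    as described by the maintainer, rather than an exact-name match.
    """
    stems = [Path(p).stem for p in parse_multi(pattern)]  # e.g. ['lora_prior', ...]
    found = []
    for folder in parse_multi(lora_path):
        root = Path(folder)
        if root.is_dir():
            found.extend(
                f for f in root.rglob("*")
                if f.is_file() and any(stem in f.name for stem in stems)
            )
    return found
```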

nikolaiusa commented 7 months ago

If I set 'Convert to safetensors' I get an error:

File "I:\ANACONDA3\envs\KANDINSKIY\lib\site-packages\torch\serialization.py", line 416, in __init__
    super().__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'train/lora\lora_prior1-50\pytorch_model.bin'
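The path in that error mixes '/' and '\' separators and points at a checkpoint that does not exist. A hedged sketch of a pre-check plus the usual bin-to-safetensors recipe; the function names are mine, and the conversion assumes the checkpoint is a plain state dict of tensors:

```python
from pathlib import Path

def checkpoint_to_convert(folder: str) -> Path:
    """Locate pytorch_model.bin under a training output folder,
    normalizing the mixed '/' and '\\' separators seen in the error above."""
    root = Path(folder.replace("\\", "/"))
    candidate = root / "pytorch_model.bin"
    if not candidate.is_file():
        raise FileNotFoundError(
            f"{candidate} not found - run training first or point at the right folder"
        )
    return candidate

def convert_to_safetensors(bin_path: Path) -> Path:
    # heavy deps imported lazily; this mirrors the common bin -> safetensors recipe
    import torch
    from safetensors.torch import save_file

    state = torch.load(bin_path, map_location="cpu")
    out = bin_path.with_suffix(".safetensors")
    save_file(state, str(out))
    return out
```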

nikolaiusa commented 7 months ago

I'm a little confused, could you help? What format should the LoRA be in and where should it be located?

nikolaiusa commented 7 months ago

1 - I prepared a dataset (dataset.csv)
2 - these are my parameters (50 steps, just for a test)


3 - after training I see this:

lora prior training progress: 100%|██████████████████████████████████████████████████████████████████████████████████| 50/50 [00:09<00:00, 5.46it/s, lr=1e-5, step_loss=0.0418]
training of LoRA prior model completed

4 - and these folders were created [Screenshot (259)] [Screenshot (258)]

5 - but I don't see the LoRA here

seruva19 commented 7 months ago

The default folders for LoRA files are "networks/lora" and "train/lora". Prior LoRA weights must contain "lora_prior" in the filename, and decoder weights must contain "lora_decoder" (this can be changed in the 'kd-training' settings). So rename "model.safetensors" to something like "lora_prior_model.safetensors" and it should appear in the list.
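A trivial helper for that rename step; the naming scheme follows the patterns quoted from config.default.yaml, and the function is illustrative, not part of kubin:

```python
from pathlib import Path

def rename_for_kubin(path: Path, kind: str = "prior") -> Path:
    """Build a filename kubin's scanner will pick up,
    e.g. model.safetensors -> lora_prior_model.safetensors."""
    if kind not in ("prior", "decoder"):
        raise ValueError("kind must be 'prior' or 'decoder'")
    return path.with_name(f"lora_{kind}_{path.name}")

# usage sketch: once satisfied, actually rename the file on disk
# target = rename_for_kubin(Path("train/lora/model.safetensors"))
# Path("train/lora/model.safetensors").rename(target)
```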

nikolaiusa commented 7 months ago

> The default folders for LoRA files are "networks/lora" or "train/lora". Prior LoRA weights must contain "lora_prior" in filename, and decoder weights must contain "lora_decoder" (it can be changed in 'kd-training' settings). So, rename "model.safetensors" to something like "lora_prior_model.safetensors" and it should appear in the list.

Thanks, it worked.

nikolaiusa commented 7 months ago

I wanted to ask a few questions - can you answer them if it's not too much trouble?

1 - Is it possible to adjust the strength of a LoRA?

2 - Is it possible to mix more than two images in mix mode?

3 - Kandinsky 3 works without the Internet, but my Kandinsky 2.2 requires an Internet connection to start generation, and so does ControlNet 2.2 (the first time kubin is launched).

seruva19 commented 7 months ago

> 1 - Is it possible to regulate the power of lora?

I used the diffusers implementation of the Kandinsky 2.2 pipeline, and it did not expose parameters for adjusting the LoRA scale (at least at the time I wrote it). I don't think it would be difficult to add to the pipeline (with newer versions of diffusers), but I haven't experimented much in this direction, and it would likely require rewriting the code responsible for using LoRA models.
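For intuition, LoRA "power" is just a scalar on the low-rank weight update, W' = W + scale * dW; later diffusers releases expose this for some pipelines via `cross_attention_kwargs={"scale": ...}`. A toy illustration of the math with plain lists, not the real pipeline:

```python
def apply_lora(weight: list, delta: list, scale: float = 1.0) -> list:
    """Merge a LoRA update into base weights: W' = W + scale * dW.

    scale=0.0 disables the LoRA entirely; scale=1.0 applies it at full strength.
    Real implementations do this per weight matrix with the low-rank product A@B.
    """
    return [w + scale * d for w, d in zip(weight, delta)]
```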

> 2 - Is it possible to mix more than two images in mix mode?

Currently, it's not possible. Maybe I'll implement it later; it would be pretty easy.
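Under the hood, Kandinsky's mix mode is a weighted combination of prior image embeddings (the diffusers Kandinsky prior pipelines expose an `interpolate()` method that, if I recall correctly, accepts a list of images/prompts with weights), so N-way mixing is mostly a UI question. A toy sketch of the embedding math, pure Python and illustrative only:

```python
def mix_embeddings(embeddings: list, weights: list) -> list:
    """Weighted mix of N embedding vectors; weights are normalized to sum to 1."""
    if len(embeddings) != len(weights) or not embeddings:
        raise ValueError("need one weight per embedding")
    total = sum(weights)
    norm = [w / total for w in weights]
    dim = len(embeddings[0])
    # component-wise weighted average across all N embeddings
    return [sum(n * e[i] for n, e in zip(norm, embeddings)) for i in range(dim)]
```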

> 3 - Kandinsky3 works without the Internet. but My Kandinsky 2.2 requires an Internet connection to start generation. controlnet 2.2 also (the first time when kubin is turned on)

That's strange, I'll check it later.

nikolaiusa commented 7 months ago

mix for 4 images is my dream)

seruva19 commented 7 months ago

> mix for 4 images is my dream)

OK, I'll put this feature at the top of my TODO list :)