Acly / krita-ai-diffusion

Streamlined interface for generating images with AI in Krita. Inpaint and outpaint with optional text prompt, no tweaking required.
https://www.interstice.cloud
GNU General Public License v3.0

server execution error: could not allocate tensor with 52428800 bytes. There is not enough GPU video memory available. Graphics card: Asus RX 6600 8GB VRAM. Why is this happening? Any solution? #655

Closed · ishanjaiswal2610 closed this issue 3 months ago

ishanjaiswal2610 commented 3 months ago

[Screenshot 2024-04-24 180717]

Acly commented 3 months ago

When I switch to DirectML, a 640x512 image uses almost 12GB VRAM with the latest version. I also think this was better at some point and fit into 8GB...

I'm not sure what I can do about it, though; it doesn't look like anybody cares enough about AMD on Windows to improve the situation in ComfyUI :\

ishanjaiswal2610 commented 3 months ago

> When I switch to DirectML, a 640x512 image uses almost 12GB VRAM with the latest version. I also think this was better at some point and fit into 8GB...
>
> I'm not sure what I can do about it, though; it doesn't look like anybody cares enough about AMD on Windows to improve the situation in ComfyUI :\

Can I use my friend's GPU to run Krita? He has an RTX 3060, but he doesn't live with me. Can I use his GPU? If yes, please help me figure out how I can use his GPU for my work @Acly

Sil3ntKn1ght commented 3 months ago

> When I switch to DirectML, a 640x512 image uses almost 12GB VRAM with the latest version. I also think this was better at some point and fit into 8GB...
>
> I'm not sure what I can do about it, though; it doesn't look like anybody cares enough about AMD on Windows to improve the situation in ComfyUI :\

Would this work for him, or anything like it? And can we get a toggle in the configuration to turn these on/off, for those that get confused? I imagine it would need a note that it requires restarting.

To work around this we can edit the settings.json found in C:\Users\PC\AppData\Roaming\krita\ai_diffusion.

Open settings.json with Notepad and add "--force-fp16" or "--force-fp32" to "server_arguments". I'd recommend testing each to see what's faster for you. Please comment below with your card and how these work for you.

It should look something like this (note: I'm testing --normalvram, feel free to test and give feedback in the comments):

{ "server_mode": "managed", "server_path": "C:/Users/PC/AppData/Roaming/krita/pykrita/ai_diffusion/.server", "server_url": "127.0.0.1:8188", "server_backend": "cuda", "server_arguments": "--force-fp16 --normalvram", "selection_grow": 5, "selection_feather": 5, "selection_padding": 7, "new_seed_after_apply": false, "prompt_line_count": 2, "show_negative_prompt": true, "auto_preview": true, "show_control_end": false, "history_size": 1500, "history_storage": 100, "performance_preset": "low", "batch_size": 2, "resolution_multiplier": 1.0, "max_pixel_count": 2, "debug_dump_workflow": false }

Acly commented 3 months ago

add too "server_arguments": "--force-fp16" or "--force-fp32"

I don't think those do anything with DirectML. It always uses FP32 and doesn't support anything else.

Acly commented 3 months ago

> Can I use my friend's GPU to run Krita? He has an RTX 3060, but he doesn't live with me. Can I use his GPU?

Yes, you can, but a bit of networking knowledge is required; I can't give you a complete walkthrough here. Maybe you can find guides on the internet.

Steps are roughly:

  1. Set up the server on your friend's PC. Either install Krita + the plugin there and go through the installer, or copying the server folder from your PC might work. Or install ComfyUI and its dependencies manually.
  2. Run the server from the command line with the `--listen` argument.
  3. If you want to connect to your friend's PC over the internet, you either need to set up port forwarding (security risk, not recommended), use a reverse proxy like nginx, or use a VPN (rough commands are sketched below).
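A rough sketch of steps 2 and 3 for the simplest case, assuming a manual ComfyUI install; the addresses below are placeholders, adjust them to your setup:

```
# On the PC with the GPU: start ComfyUI so it accepts connections from other
# machines instead of only 127.0.0.1 (8188 is the default port).
python main.py --listen 0.0.0.0 --port 8188

# On the PC running Krita: in the plugin's connection settings, point it at
# that machine's address, e.g. 192.168.1.50:8188 on a home LAN, or the VPN
# address when connecting over the internet.
```
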
Sil3ntKn1ght commented 3 months ago

> > Can I use my friend's GPU to run Krita? He has an RTX 3060, but he doesn't live with me. Can I use his GPU?
>
> Yes, you can, but a bit of networking knowledge is required; I can't give you a complete walkthrough here. Maybe you can find guides on the internet.
>
> Steps are roughly:
>
> 1. Set up the server on your friend's PC. Either install Krita + the plugin there and go through the installer, or copying the server folder from your PC might work. Or install ComfyUI and its dependencies manually.
>
> 2. Run the server from the command line with the `--listen` argument.
>
> 3. If you want to connect to your friend's PC over the internet, you either need to do port forwarding (security risk, _not_ recommended), use a reverse proxy like [nginx](https://www.nginx.com/), or use a VPN.

OMG, I need a video or guide so I can use this on my local network and use my lounge PC from my desk PC.

Acly commented 3 months ago

It might also be possible to use AMD on Windows with ZLUDA. It should give much better performance and memory efficiency. There is a ComfyUI fork. But you will have to set it all up yourself, and I can't say if everything is supported. It's not possible for me to test it without actually having AMD hardware.