Open Spirit-Catt opened 1 year ago
This
where
Hey, parallel support has not (yet) been implemented. All issues with label Enhancement are ready for (re-)evaluation and prioritisation of the backlog.
Hey, could you let me know how to enable multi GPU support. Does it require codebase improvement or some system configuration? Looking forward to hearing from you. Thanks.
@oldhand7
- advanced params refactoring + prevent users from skipping/stopping other users' tasks in queue #981 has to be merged to make each generation independent from the global variables in advanced parameters, so they are truly separated.
- a flag --multi-gpu has to be implemented, which then spawns an async worker process for each GPU
- the GPU has to be persistently assigned to each worker process and its processing, which requires basically all ldm_patched pipeline calls to be refactored, incl. model management and VRAM improvements.
I assume that this comes down to an estimated 3-5 days of development effort.

Hi, @mashb1t, I would like to contact you on skype. Could you share your whatsapp number or skype id?
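For anyone following along, a minimal sketch of the "one async worker process per GPU" idea from the list above. This is hypothetical, not Fooocus code: `worker_loop`, `spawn_workers`, and the queue layout are illustrative assumptions; the key detail is pinning the device via `CUDA_VISIBLE_DEVICES` before CUDA is initialised in each child.

```python
import os
import multiprocessing as mp

def worker_loop(gpu_index, task_queue):
    # Pin this process to a single GPU. Must happen before torch/CUDA
    # is initialised in the child, so the child only ever sees one device.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_index)
    # (import torch here, after pinning; "cuda:0" then refers to
    #  physical GPU `gpu_index` inside this process)
    while True:
        task = task_queue.get()
        if task is None:  # poison pill: shut the worker down
            break
        # ... run the generation pipeline for `task` here ...

def spawn_workers(num_gpus):
    # "spawn" gives each worker a clean interpreter; "fork" could inherit
    # an already-initialised CUDA context from the parent process.
    ctx = mp.get_context("spawn")
    task_queue = ctx.Queue()
    workers = [
        ctx.Process(target=worker_loop, args=(i, task_queue), daemon=True)
        for i in range(num_gpus)
    ]
    for w in workers:
        w.start()
    return task_queue, workers
```

Yielding results back to gradio correctly would still need a separate result queue per request, which this sketch leaves out.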
@oldhand7 No. All communication regarding Fooocus should happen in this repository. You can open a discussion in the category "ideas" or "q&a" to exchange.
Sorry, I am afraid I did something wrong. I would like to know whether it is possible and, if so, how to implement it. You mentioned 3-5 days of work. Is that work already underway, or just an estimate? Can I have a look at the current work status?
@oldhand7 the last update of Fooocus introduced a --multi-users flag, which currently has no effect. I assume that either ldm_patched is being worked on or this has been added as general preparation for the future. AFAIK there currently is no progress on the feature, at least from what I can see in PRs/branches. The estimate is just a gut feeling, not planned yet.
Actually, I've just tried to do it myself with multiprocessing, but I don't think I've got the right approach. I've just changed webui.py for multi-threading, but it didn't work. May I ask which parts should be improved, or what the key part to be implemented for this function is? Do I need to use other Python libs? Do I have to change the whole structure? I would like to contribute. Thanks.
@oldhand7 Key part is to make the model management incl. all caches and memory optimisations work for both one and multiple GPUs, as well as handling multiple async_worker processes + yielding correctly to gradio. ldm_patched may also have to be changed.
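To make the "caches must work per worker" point concrete, here is a hedged sketch of a per-process model cache with simple oldest-first eviction to bound VRAM per worker. `get_model`, `loader`, and the eviction policy are illustrative assumptions, not how Fooocus's model management actually works.

```python
# Each worker process gets its own copy of this cache, so cached models
# never cross GPU/process boundaries.
_model_cache = {}

def get_model(name, loader, max_cached=2):
    """Return a cached model, loading it via `loader` on a miss and
    evicting the oldest entry when the per-worker cache is full."""
    if name in _model_cache:
        return _model_cache[name]
    if len(_model_cache) >= max_cached:
        oldest_name = next(iter(_model_cache))  # dicts keep insertion order
        del _model_cache[oldest_name]
    _model_cache[name] = loader(name)
    return _model_cache[name]
```

The real difficulty mashb1t points at is that the existing optimisations assume one global cache, whereas this only works if every worker keeps its state strictly to itself.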
AFAIK, you mentioned here that it needs less than 5 days of work. So do you have an exact plan or idea for implementing it the right way? If so, could you share it? I would like to collaborate with you on this. Thanks for your understanding.
Can you help me with this implementation?
Is it possible? If so, can I contribute to this implementation? Of course, I may need your help.
hey @oldhand7, I dig your enthusiasm but I find your netiquette quite lacking -- please stop spamming the multitude of users subscribed to this issue and open a new discussion about this topic instead, as mashb1t suggested earlier.
last comment for me on this matter: continued in https://github.com/lllyasviel/Fooocus/discussions/2292 for anybody who wants to follow along
This
This
what does it mean?
That people want it to happen.
The software works fine with 1 GPU, but it completely ignores the others. It would be nice if it could automatically generate a few images at the same time depending on the number of GPUs the computer has.
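A hedged sketch of what that auto-detect-and-fan-out behaviour could look like: count the GPUs (falling back to 0 when nvidia-smi is unavailable) and split pending prompts across devices round-robin. All names here are illustrative assumptions, not Fooocus code.

```python
import shutil
import subprocess

def detect_gpu_count():
    # Count GPUs via `nvidia-smi -L`; returns 0 when the tool is missing
    # (e.g. CPU-only machines), so callers can fall back gracefully.
    if shutil.which("nvidia-smi") is None:
        return 0
    result = subprocess.run(["nvidia-smi", "-L"],
                            capture_output=True, text=True)
    return sum(1 for line in result.stdout.splitlines()
               if line.startswith("GPU "))

def assign_round_robin(prompts, num_gpus):
    # Distribute queued prompts across GPUs; with 0 GPUs everything
    # stays in a single (default-device) bucket.
    if num_gpus <= 0:
        return {0: list(prompts)}
    buckets = {i: [] for i in range(num_gpus)}
    for idx, prompt in enumerate(prompts):
        buckets[idx % num_gpus].append(prompt)
    return buckets
```

Each bucket would then be handed to the worker pinned to that GPU; the hard part, as discussed above, is everything downstream of this dispatch.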