Closed christianarrer closed 5 months ago
How can I enable Thin Client Mode in the config JSON?
Thanks
Did changing it from the UI not work? There is a toggle checkbox on the settings page once the plugin is loaded. If that doesn't work, the setting field should be generated in the config file by default, and you can manually swap it to true.
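For reference, a minimal sketch of what the manual edit might look like, assuming the setting key is named `thin_client_mode` (the real key name is whatever the extension generates in its config file, so check the generated field first and edit that):

```json
{
  "thin_client_mode": true
}
```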
It won't save it permanently, even if I restart the UI from the Extensions tab. Another problem I encountered is that it will not find the master worker if I launch Stable Diffusion with the --subpath argument. Therefore I want to set up a Linux thin client and have my Windows machines as workers.
Have you tried launching with the debug argument and checking the logs? It could be a file permissions problem if the extension can't save to config.
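As a quick way to rule out the file-permission theory, here is a minimal Python sketch that checks whether a config file exists, is writable, and parses as JSON. The `config.json` path and the `thin_client_mode` key used in the demo are placeholders; point it at wherever the extension actually writes its config:

```python
import json
import os
import tempfile

def check_config(path):
    """Return a list of problems that would stop an extension saving settings."""
    problems = []
    if not os.path.exists(path):
        problems.append("config file does not exist")
        return problems
    if not os.access(path, os.W_OK):
        problems.append("config file is not writable by this user")
    try:
        with open(path) as f:
            json.load(f)
    except json.JSONDecodeError as exc:
        problems.append(f"config file is not valid JSON: {exc}")
    return problems

# Demo on a throwaway file (placeholder key name).
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "config.json")
    with open(path, "w") as f:
        json.dump({"thin_client_mode": True}, f)
    print(check_config(path))  # → []
```

An empty list means saving should work from a permissions standpoint, which would point the investigation elsewhere (e.g. the extension overwriting the value on startup).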
Okay, I will check that.
I set the permissions on config.json to 0777 and added the nginx www-data user to my Stable Diffusion user's group, called ai (but nginx is only proxying the Stable Diffusion UI, so that shouldn't matter). Not working ...
1) Can I somehow stop it from creating the "master" entry manually?
2) I noticed that both workers will report out of memory after some time:
ERROR - 'RM036' response: Code <500> {"error":"OutOfMemoryError","detail":"","body":"","errors":"CUDA out of memory. Tried to allocate 18.00 GiB. GPU 0 has a total capacty of 12.00 GiB of which 0 bytes is free. Of the allocated memory 31.97 GiB is allocated by PyTorch, and 126.05 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF"}
ERROR - 'RM104' response: Code <500> {"error":"OutOfMemoryError","detail":"","body":"","errors":"CUDA out of memory. Tried to allocate 2.94 GiB. GPU 0 has a total capacty of 8.00 GiB of which 0 bytes is free. Of the allocated memory 6.93 GiB is allocated by PyTorch, and 122.04 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF"}
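As the traceback itself suggests, allocator fragmentation can sometimes be reduced by setting `max_split_size_mb` via the `PYTORCH_CUDA_ALLOC_CONF` environment variable. It has to be in the environment before PyTorch initializes CUDA (e.g. set it in `webui-user.bat`/`webui-user.sh`, or very early in the launch script); the 512 here is only a starting point to tune, not a recommendation:

```python
import os

# Must be set before torch initializes CUDA. max_split_size_mb caps the size
# of cached blocks the allocator will split, which can reduce fragmentation
# at some cost in allocation flexibility.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])  # → max_split_size_mb:512
```

Note the 18 GiB / 2.94 GiB allocation attempts against 12 GiB / 8 GiB cards also suggest the requested batch is simply too large for those GPUs, which the replies below address.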
3) Seems to work now ..., but it still won't save the images to the output folder on the Linux machine, though it displays them. It will save them if it runs CPU-only on the Linux machine.
4) If I use one Windows machine as master and the other one as a worker, the worker seems to be idle all the time, and it takes very long at "Distributed - receiving results 100%". It seems to repeat the whole batch process on the worker instead of splitting it up ... It seems to me it needs a batch size of at least 2.
I noticed that the workers both will report out of memory after some time
If you set the batch size high enough you will eventually run out of memory, even if your driver supports swapping.
It seems to me it needs a batch size of at least 2.
Setting batch size to 1 is very often a worst case for this extension. In setups where remote generation is significantly faster than local generation (and complement creation is enabled), though, you will still be saturating the faster remote card.
Irrespective of setup, though, you'll have a better experience if you follow two main guidelines:
1) If possible, make your local/master instance your fastest one. This lets you see image previews more quickly and fall back to normal webui behavior with less loss of throughput.
2) Increase batch size before batch count. This extension parallelizes batches rather than meshing multiple accelerators to generate each single image.
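To illustrate why the second guideline matters, here is a toy even-split model (this is illustrative only, not the extension's actual scheduling code): a batch of size 1 cannot be divided, so every extra worker sits idle, while a larger batch size gives each machine a share:

```python
def split_batch(batch_size, workers):
    """Evenly divide one batch of images across machines (illustrative only)."""
    base, rem = divmod(batch_size, len(workers))
    return {w: base + (1 if i < rem else 0) for i, w in enumerate(workers)}

machines = ["master", "worker1"]
print(split_batch(1, machines))  # → {'master': 1, 'worker1': 0}  (worker idle)
print(split_batch(8, machines))  # → {'master': 4, 'worker1': 4}
```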
It seems to work really well now. I have one Windows machine (the stronger one) as master and the second Windows machine as a worker. A Linux GUI with the thin client option does not work very well; the first solution is the better one.
I did a time measurement and got nearly half the time with the additional worker.