lllyasviel / Fooocus

Focus on prompting and generating
GNU General Public License v3.0

Queue prompts #1664

Open LordMilutin opened 6 months ago

LordMilutin commented 6 months ago

Hello! I would like to know if it's possible to implement a prompt queue. For example, I have about 20 prompts that need to generate 30 images. Instead of waiting for the queue to finish for each prompt one by one and retyping another one, it would be awesome if we had a queue prompt option so that we could leave as many prompts as we want and leave the PC overnight to generate them, without user input in-between.

Please let me know if this is feasible, as I think it would tremendously improve this app.

Thanks!

mashb1t commented 6 months ago

Hey, this is currently not possible in the UI but only via API. You can find examples and further information here: https://github.com/lllyasviel/Fooocus/issues/1259 & https://github.com/lllyasviel/Fooocus/issues/1496

LordMilutin commented 6 months ago

That's unfortunate, but thank you for the response. I'll try to deal with the API somehow, but I'm not the best when it comes to that hahaha.

docppp commented 6 months ago

I have created a simple prompt queue (I'm thinking of creating a PR for this, though not in this exact form, as it's kinda dumb right now, but it works); you can give it a try. Remember to disable auto-update on startup, as it will overwrite these changes.

diff --git a/webui.py b/webui.py
index a5138abf..581fda95 100644
--- a/webui.py
+++ b/webui.py
@@ -23,6 +23,27 @@ from modules.ui_gradio_extensions import reload_javascript
 from modules.auth import auth_enabled, check_auth

+QUEUE = []
+
+
+def queue_add(*args):
+    QUEUE.append(args)
+
+
+def queue_start(*args):
+    if not QUEUE:
+        yield from generate_clicked(*args)
+        return
+    for arg in QUEUE:
+        yield from generate_clicked(*arg)
+    QUEUE.clear()
+    # To use every style in single prompt:
+    # for style in legal_style_names:
+    #     argss = list(args)
+    #     argss[2] = [style]
+    #     yield from generate_clicked(*argss)
+
+
 def generate_clicked(*args):
     import ldm_patched.modules.model_management as model_management

@@ -110,7 +131,8 @@ with shared.gradio_root:
                         shared.gradio_root.load(lambda: default_prompt, outputs=prompt)

                 with gr.Column(scale=3, min_width=0):
-                    generate_button = gr.Button(label="Generate", value="Generate", elem_classes='type_row', elem_id='generate_button', visible=True)
+                    generate_button = gr.Button(label="Generate", value="Generate", elem_classes='type_row_half', elem_id='generate_button', visible=True)
+                    add_to_queue = gr.Button(label="Add to queue", value="Add to queue (0)", elem_classes='type_row_half', elem_id='add_to_queue', visible=True)
                     load_parameter_button = gr.Button(label="Load Parameters", value="Load Parameters", elem_classes='type_row', elem_id='load_parameter_button', visible=False)
                     skip_button = gr.Button(label="Skip", value="Skip", elem_classes='type_row_half', visible=False)
                     stop_button = gr.Button(label="Stop", value="Stop", elem_classes='type_row_half', elem_id='stop_button', visible=False)
@@ -560,9 +582,13 @@ with shared.gradio_root:
         generate_button.click(lambda: (gr.update(visible=True, interactive=True), gr.update(visible=True, interactive=True), gr.update(visible=False), []), outputs=[stop_button, skip_button, generate_button, gallery]) \
             .then(fn=refresh_seed, inputs=[seed_random, image_seed], outputs=image_seed) \
             .then(advanced_parameters.set_all_advanced_parameters, inputs=adps) \
-            .then(fn=generate_clicked, inputs=ctrls, outputs=[progress_html, progress_window, progress_gallery, gallery]) \
+            .then(fn=queue_start, inputs=ctrls, outputs=[progress_html, progress_window, progress_gallery, gallery]) \
             .then(lambda: (gr.update(visible=True), gr.update(visible=False), gr.update(visible=False)), outputs=[generate_button, stop_button, skip_button]) \
-            .then(fn=lambda: None, _js='playNotification').then(fn=lambda: None, _js='refresh_grid_delayed')
+            .then(fn=lambda: None, _js='playNotification').then(fn=lambda: None, _js='refresh_grid_delayed') \
+            .then(lambda: (gr.update(value=f"Add to queue ({len(QUEUE)})")), outputs=[add_to_queue])
+
+        add_to_queue.click(fn=queue_add, inputs=ctrls) \
+            .then(lambda: (gr.update(value=f"Add to queue ({len(QUEUE)})")), outputs=[add_to_queue])

         for notification_file in ['notification.ogg', 'notification.mp3']:
             if os.path.exists(notification_file):
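For readers who just want the mechanics: the patch boils down to a module-level list of parameter tuples drained by a generator. A minimal standalone sketch of that pattern (here `generate_clicked` is stubbed out as a hypothetical stand-in for Fooocus' real generator, which yields UI updates):

```python
QUEUE = []

def queue_add(*args):
    # Store one complete set of generation parameters.
    QUEUE.append(args)

def generate_clicked(*args):
    # Stub standing in for Fooocus' real generator function.
    yield f"generated with prompt={args[0]!r}"

def queue_start(*args):
    # Drain the queue; fall back to a single run if nothing was queued.
    if not QUEUE:
        yield from generate_clicked(*args)
        return
    for queued_args in QUEUE:
        yield from generate_clicked(*queued_args)
    QUEUE.clear()

queue_add("a red fox", "negative prompt", ["Fooocus V2"])
queue_add("a blue heron", "negative prompt", ["Fooocus V2"])
results = list(queue_start("ignored prompt"))
# results holds one entry per queued parameter set; QUEUE is empty again.
```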
LordMilutin commented 6 months ago

Interesting. You shouldn't have copied the git ref and the rest of the metadata, though, just the plain webui.py changes. 😆

blablablazhik commented 6 months ago

Hello! Sorry for being a noob, but can you explain where exactly I have to add this code? I've disabled auto-updates per #1751 but didn't figure out where to add the queue.

docppp commented 6 months ago

You can use git apply, but since you asked, I suppose you'll do it by hand, so: a + at the start of a line means add this line, a - means remove it. You can tell where to look by checking the lines without either sign. For example, from

 from modules.auth import auth_enabled, check_auth

+QUEUE = []

you can tell that you must add QUEUE = [] three lines below the existing line from modules.auth import auth_enabled, check_auth. The file you must modify is webui.py.

Ignore all the metadata parts and any lines starting with @@, like these:

diff --git a/webui.py b/webui.py
index a5138abf..581fda95 100644
--- a/webui.py
+++ b/webui.py
@@ -23,6 +23,27 @@ from modules.ui_gradio_extensions import reload_javascript
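If you'd rather not edit by hand, the diff can be applied with git apply. Below is a self-contained demonstration in a throwaway repo; the same two apply commands work from the root of a Fooocus checkout with the real diff saved as queue.patch (the file name is just an example):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
printf 'hello\n' > webui.py
git add webui.py
git -c user.email=demo@example.com -c user.name=demo commit -qm init
# A tiny stand-in patch in the same format as the one above:
cat > queue.patch <<'EOF'
diff --git a/webui.py b/webui.py
--- a/webui.py
+++ b/webui.py
@@ -1 +1,2 @@
 hello
+QUEUE = []
EOF
git apply --check queue.patch   # dry run: reports problems, changes nothing
git apply queue.patch           # apply for real
cat webui.py                    # now contains the added QUEUE = [] line
```

If the line numbers in the real patch have drifted because newer Fooocus versions moved the code around, `git apply --3way queue.patch` can often still place the hunks.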
xhoxye commented 6 months ago

This can be done, but you need to modify the original code: use wildcard files to store the prompt lists, modify the code to switch between in-order and random wildcard reading, and set the number of images generated in one run.

xhoxye commented 6 months ago

Here's another way. https://github.com/lllyasviel/Fooocus/pull/1503

xhoxye commented 6 months ago

That's the way I said. https://github.com/lllyasviel/Fooocus/pull/1761

simartem commented 6 months ago

you can use git apply, but since you ask...

Can you tell me if there is an easier way to implement this code to generate all styles in order, using git apply?

docppp commented 6 months ago

See https://github.com/lllyasviel/Fooocus/discussions/1751

blablablazhik commented 6 months ago

@docppp with git apply I somehow got "corrupt patch at line 7" (I used VS Code and Git Bash), so I wrote it in by hand, and it works! Note that my webui.py has different line numbers (yours says @@ -560,9; in mine that section starts at 582). Maybe it moved with the last update. (I could easily be wrong about any of this, since I only started learning for this task.)

Anyway, I also want to ask: is there any way to add a "Prompts from file or textbox" script to the Fooocus UI? I have a .txt file with 50 prompts, one per line.

docppp commented 6 months ago

I'm writing this from memory, so it may need some tweaks, but if you replace the body of the queue_start function with the following, it should work:

with open(text_file_path, 'r') as text_file:
    lines = text_file.readlines()
    for prompt in lines:
        argss = list(args)
        argss[0] = prompt
        yield from generate_clicked(*argss)

@mashb1t (long shot, but @lllyasviel as well) I don't want to open another issue, but I would like to bring this to your attention once again (since you are kind of active lately ;)). I wrote earlier that I'm thinking of creating a PR for this, but not in this exact form as it's kinda dumb right now; however, as I investigated the options a bit more, the dumbest solutions are sometimes the best ones. This queue at the webui level gives the best flexibility, as you can select any prompt, any style, even any model each time, and it remembers them all. The downside of this solution is that the models need to be loaded from scratch every time. From what I checked, a queue could be introduced in async_worker as well, so the model would stay loaded, but you'd lose the ability to change models between generations. I'm just not sure which solution suits best here. Any thoughts?

blablablazhik commented 6 months ago

@docppp it works, thank you! I only added the path to the file:

def queue_start(*args):
    text_file_path = 'C:/Users/blablablazhik/Desktop/Test.txt'

    with open(text_file_path, 'r') as text_file:
        lines = text_file.readlines()
    for prompt in lines:
        argss = list(args)
        argss[0] = prompt.strip()
        yield from generate_clicked(*argss)
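One small hardening worth considering for the snippet above (a sketch on my part, not tested against Fooocus itself): strip whitespace and skip blank lines, so a stray empty line in the text file doesn't trigger an empty-prompt generation.

```python
import os
import tempfile

def read_prompts(text_file_path):
    # Return non-empty, stripped prompt lines from a text file.
    with open(text_file_path, "r", encoding="utf-8") as text_file:
        return [line.strip() for line in text_file if line.strip()]

# Quick demonstration with a throwaway file:
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("a red fox\n\n  a blue heron  \n")
prompts = read_prompts(f.name)
os.unlink(f.name)
# prompts == ["a red fox", "a blue heron"]
```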

But after testing I found one issue: "Fooocus V2" turns off after the first prompt, so the "Fooocus V2 Expansion" doesn't run on the 2nd, 3rd, and later prompts. Do you know why?

mashb1t commented 6 months ago

@docppp your queue proposal does indeed provide flexibility, but for queueing a few more things have to be considered:

AFAIK Gradio was originally implemented on the assumption that it runs on a private machine for personal use, to make it as easy as possible for users to generate images (135#comment, 501 and 713 supporting this claim), for which it works great, with multi-user capabilities as an afterthought (see the points above). The community now patches the code more and more to make it work better for other scenarios.

It's hard to evaluate the full picture here without knowing the plans for the future of Fooocus. To be specific, I'd propose implementing this feature the "right" way: not with a global, but with a state (like state_is_generating, which might also have other issues, btw), so it is separated per user, plus some maximum queue size per user, as this feature has the potential to hold an instance hostage by queueing max_image_number basically infinitely often (maybe by adding an argument for a maximum of parallel queue_tasks_per_user or similar).

I really like the approach you took and would like to offer help optimising the code to fulfil the points mentioned above. This will most likely be an advanced feature, so we might also hide it at first and only show the buttons after activating a checkbox in Developer Debug Mode.

Let's also hear the opinion of other users. Your suggestions are welcome!

LordMilutin commented 6 months ago

Thanks for the thorough comment, mashb1t! It excellently highlights some concerns, like an infinite queue clogging the GPU. However, I don't think I have seen Fooocus used by multiple users anywhere. Right now, I believe people download it and use it locally on their machines, just like I do.

Also, docppp should provide some examples of how to use this feature, as I have tried it but couldn't make it work. It isn't documented very well, but I am willing to write step-by-step tutorials for other users on how to use the queue if someone shows me the essentials to make it work.

This all looks very promising, so I am more than willing to help in areas I can.

docppp commented 6 months ago

To be honest, I didn't even consider the multi-user scenario. As you said, the assumption is "that it is used on a private machine for personal use", but if Fooocus is headed in a multi-user direction, then indeed the queue system should be well thought through.

I don't quite understand the "default queue size" and "Gradio output" points. This type of queue basically simulates the user setting the options, typing the prompt, and clicking Generate, one set at a time. If you're referring to the gallery shown after generation, adding a limit is a very simple solution.

I have prepared a cleaner version of my idea here: https://github.com/lllyasviel/Fooocus/pull/1773

@LordMilutin The main idea is to create an object that remembers everything you set up to the moment you click the Queue button. It is stored, so you can modify the prompt or options, click again, and now you have 2 sets of parameters stored. Clicking Generate then runs as normal, but several times, once with each stored set of parameters.

mashb1t commented 6 months ago

@LordMilutin quick reference for multi user scenarios: https://github.com/lllyasviel/Fooocus/discussions/1639, https://github.com/lllyasviel/Fooocus/issues/1771, https://github.com/lllyasviel/Fooocus/issues/1607, all API issues like https://github.com/lllyasviel/Fooocus/issues/1224 or https://github.com/lllyasviel/Fooocus/issues/1259 etc. ^^

blablablazhik commented 6 months ago

@docppp just tested #1773, and it works exactly like I need for my pipeline! Can I ask you for help adding the ability to read a txt file with multiple prompts, one per line? I did what you wrote, but the style settings got mixed up after the first prompt.

LordMilutin commented 3 months ago

This isn't in 2.3.0...

E2GO commented 3 months ago

Didn't find it either. :(

mashb1t commented 3 months ago

My bad, accidentally referred to in milestone and automatically closed.

LordMilutin commented 3 months ago

My bad, accidentally referred to in milestone and automatically closed.

Ah, bummer, I was looking forward to it in this release. Any ETA for when it will be implemented?

mashb1t commented 3 months ago

@LordMilutin no, no ETA. I'll also be out for the next 2-3 weeks, feel free to check out the PR and make improvements based on it.

pkdvalis commented 1 month ago

Hello! I would like to know if it's possible to implement a prompt queue. For example, I have about 20 prompts that need to generate 30 images. Instead of waiting for the queue to finish for each prompt one by one and retyping another one, it would be awesome if we had a queue prompt option so that we could leave as many prompts as we want and leave the PC overnight to generate them, without user input in-between.

Not technically a queue system but you can achieve something similar by putting the 20 prompts onto 20 lines of a wildcard file and triggering that using the __wildcards__ syntax.
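A hedged sketch of what that looks like in practice; the wildcards directory and the double-underscore naming below follow my understanding of Fooocus and may differ between versions, so check your installation's wildcards folder for the exact convention:

```
# wildcards/overnight.txt  (one prompt per line)
a castle at dawn, volumetric light
a cyberpunk alley in the rain, neon reflections

# In the Fooocus prompt box:
__overnight__
```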

vytaux commented 1 month ago

Is this being worked on?

vytaux commented 1 month ago

Hello! I would like to know if it's possible to implement a prompt queue. For example, I have about 20 prompts that need to generate 30 images. Instead of waiting for the queue to finish for each prompt one by one and retyping another one, it would be awesome if we had a queue prompt option so that we could leave as many prompts as we want and leave the PC overnight to generate them, without user input in-between.

Not technically a queue system but you can achieve something similar by putting the 20 prompts onto 20 lines of a wildcard file and triggering that using the __wildcards__ syntax.

Is there tutorial on this somewhere?

pkdvalis commented 1 month ago

Is there tutorial on this somewhere?

https://youtu.be/E_R7tnfXKCM?t=56

vytaux commented 1 month ago

Is there tutorial on this somewhere?

https://youtu.be/E_R7tnfXKCM?t=56

Thank you very much bro 😁

LordMilutin commented 1 week ago

Hello! I would like to know if it's possible to implement a prompt queue. For example, I have about 20 prompts that need to generate 30 images. Instead of waiting for the queue to finish for each prompt one by one and retyping another one, it would be awesome if we had a queue prompt option so that we could leave as many prompts as we want and leave the PC overnight to generate them, without user input in-between.

Not technically a queue system but you can achieve something similar by putting the 20 prompts onto 20 lines of a wildcard file and triggering that using the __wildcards__ syntax.

Indeed, but I do not have control over it. If I put in 20 prompts and run it 20 times, the same prompt can repeat multiple times while some prompts never trigger. That is why I am interested in the queue mode that docppp made; it worked perfectly before, but it is not compatible with newer versions of Fooocus. 😞

pkdvalis commented 1 week ago

Indeed, but I do not have control over it. If I put in 20 prompts and run it 20 times, the same prompt can repeat multiple times while some prompts never trigger. That is why I am interested in the queue mode that docppp made; it worked perfectly before, but it is not compatible with newer versions of Fooocus. 😞

There is an option to control this: Advanced/Advanced/Dev Debug Mode/Debug Tools/"Read wildcards in order"

LordMilutin commented 1 week ago

Indeed, but I do not have control over it. If I put in 20 prompts and run it 20 times, the same prompt can repeat multiple times while some prompts never trigger. That is why I am interested in the queue mode that docppp made; it worked perfectly before, but it is not compatible with newer versions of Fooocus. 😞

There is an option to control this: Advanced/Advanced/Dev Debug Mode/Debug Tools/"Read wildcards in order"

I will try it. So if I have a wildcard file with 20 prompts and I set the image number to 20, will I get 20*20=400 photos when it's done, or just 20 (one image per wildcard line)?

For the queue, it worked like the first option.

mashb1t commented 1 week ago

@LordMilutin you'll get 20 images, one for each line of the wildcard file.

LordMilutin commented 1 week ago

@LordMilutin you'll get 20 images, one for each line of the wildcard file.

I guessed so; that's why I prefer the queue option, as I would leave it running overnight and get 400 images.

You may argue that I could put 400 lines in a wildcard file, but that is tedious, and I could still end up with the same results, as seeds can repeat that way. 😢

mashb1t commented 1 week ago

@LordMilutin you may also set the image number to 400; it applies modulo over the wildcard file, so it repeats each prompt every 20th iteration with a different seed if necessary. Just make sure not to check disable_seed_increment.
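In other words (my reading of the described behaviour, sketched in plain Python rather than Fooocus' actual scheduling code):

```python
# Stand-in for a 20-line wildcard file read in order.
prompts = [f"prompt {n}" for n in range(20)]
base_seed = 1000

# Hypothetical model of "image number = 400" with seed increment enabled:
# image i uses wildcard line i % len(prompts) and seed base_seed + i.
jobs = [(prompts[i % len(prompts)], base_seed + i) for i in range(400)]
# jobs[0] and jobs[20] share a prompt but differ in seed, so no
# (prompt, seed) pair repeats across the 400 images.
```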

LordMilutin commented 1 week ago

@LordMilutin you may also set the image number to 400; it applies modulo over the wildcard file, so it repeats each prompt every 20th iteration with a different seed if necessary. Just make sure not to check disable_seed_increment.

Any help on how to do that? I can set max 32 images in the UI.

mashb1t commented 1 week ago

@LordMilutin you can increase this value in config.txt; please check out the config modification example file, which lists all possible options. Make sure the last line before the closing } doesn't end with a comma.
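For reference, a minimal config.txt along those lines; the key name default_max_image_number is my assumption based on Fooocus' config naming, so verify it against the config modification example file before relying on it. Note the last entry has no trailing comma:

```json
{
    "default_max_image_number": 400
}
```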