imrayya / stable-diffusion-webui-Prompt_Generator

An extension to AUTOMATIC1111 WebUI for stable diffusion which adds a prompt generator

Latest A1111 broken it? #5

Closed · futureengine-io closed this 1 year ago

futureengine-io commented 1 year ago

The latest A1111 pull seems to cause issues for me with the extension:

Exception encountered while attempting to generate prompt: local variable 'tokenizer' referenced before assignment
Traceback (most recent call last):
  File "E:__workspace\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 321, in run_predict
    output = await app.blocks.process_api(
  File "E:__workspace\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1016, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "E:__workspace\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 945, in postprocess_data
    if predictions[i] is components._Keywords.FINISHED_ITERATING:
IndexError: tuple index out of range
Exception encountered while attempting to install tokenizer
Generate new prompt from: "Basketball, nba, landscape, rim, 35mm, 8k"
Exception encountered while attempting to generate prompt: local variable 'tokenizer' referenced before assignment
Traceback (most recent call last):
  File "E:__workspace\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 321, in run_predict
    output = await app.blocks.process_api(
  File "E:__workspace\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1016, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "E:__workspace\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 945, in postprocess_data
    if predictions[i] is components._Keywords.FINISHED_ITERATING:
IndexError: tuple index out of range

MalpoDeMalpis commented 1 year ago

Same problem here:

Exception encountered while attempting to install tokenizer
Generate new prompt from: "a cat sitting on a chair"
Exception encountered while attempting to generate prompt: local variable 'tokenizer' referenced before assignment
Traceback (most recent call last):
  File "D:\Ai\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 321, in run_predict
    output = await app.blocks.process_api(
  File "D:\Ai\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1016, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "D:\Ai\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 945, in postprocess_data
    if predictions[i] is components._Keywords.FINISHED_ITERATING:
IndexError: tuple index out of range

imrayya commented 1 year ago

When first starting up Stable-Diffusion-Webui, do you happen to see "Installing Requirment of Prompt-Maker" in the command-line output (after which you can see the UI boot up)?

Like this: [screenshot of the console output]

I've slightly tweaked the error handling in the extension, so if someone could pull the latest version of the extension (update it) and post the new error here, that would be a great help, since I can't seem to recreate the issue on my machine.
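
(For reference, one way to update is the Extensions tab in the WebUI: Check for updates, then Apply and restart UI. Assuming the extension is in the default location, running git pull inside extensions/stable-diffusion-webui-Prompt_Generator does the same thing.)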

dnl13 commented 1 year ago

Sadly, it still looks the same:

Exception encountered while attempting to install tokenizer
Traceback (most recent call last):
  File "E:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 321, in run_predict
    output = await app.blocks.process_api(
  File "E:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1016, in process_api    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 945, in postprocess_data
    if predictions[i] is components._Keywords.FINISHED_ITERATING:
IndexError: tuple index out of range

imrayya commented 1 year ago

Right now, I don't really have a clue what's going on. I don't think it has anything to do with the A1111 update to the project; the extension only interacts with it in a very minor way (UI elements). The extension also relies on the WebUI to install Torch, but that's still in the requirements, so it should be fine.
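
The "referenced before assignment" part is at least mechanical: tokenizer gets assigned inside a try block that raises, the except clause doesn't return, and the next use hits the unbound name. A minimal sketch of that failure pattern (hypothetical names, not the extension's exact code):

# Minimal sketch of the failure pattern; load_tokenizer is a stand-in
# for whatever raises during GPT2Tokenizer.from_pretrained(...)
def load_tokenizer():
    raise RuntimeError("simulated download failure")

def generate(prompt):
    try:
        tokenizer = load_tokenizer()  # raises, so 'tokenizer' is never bound
    except Exception:
        print("Exception encountered while attempting to install tokenizer")
        # no return here, so execution falls through
    print(f'Generate new prompt from: "{prompt}"')
    try:
        input_ids = tokenizer(prompt)  # UnboundLocalError: 'tokenizer' referenced before assignment
    except Exception as e:
        print(f"Exception encountered while attempting to generate prompt: {e}")

generate("Basketball, nba, landscape, rim, 35mm, 8k")

That reproduces the reported console messages, but it doesn't explain why the tokenizer load fails in the first place.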

I'll do a fresh install of the WebUI and the extension to see if I can replicate the bug on my end. But that would have to wait till tomorrow.

Can someone try commenting out (put a # before the line) line 86 in ./scripts/prompt_generator.py and see if that resolves the issue? I don't see how that could be the issue, but you never know.

If that doesn't work, the next thing is to wrap the import on line 4 in a try/except block. Again, I don't see how that's the issue, but it is another possibility.
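
Something like this (an untested sketch; it assumes line 4 is the transformers import that the rest of the file uses):

# sketch, untested: wrap the import at the top of ./scripts/prompt_generator.py
try:
    from transformers import GPT2LMHeadModel, GPT2Tokenizer
except Exception as e:
    print(f"Prompt Generator: transformers import failed: {e}")
    GPT2LMHeadModel = GPT2Tokenizer = None  # surfaces a clearer error at call time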

dnl13 commented 1 year ago

Hey, I tried your recommendations of disabling those lines => didn't work out for me.

But after moving the generate_longer_prompt method to the top of the on_ui_tabs method, it now works...

Trying to move it back into place "breaks" the extension again (for me).

Also, the error may be related to the WebUI itself, because other extensions come up with similar errors (after a quick Google search):

site-packages\gradio\routes.py", line 321, in run_predict
output = await app.blocks.process_api(
etc

so on_ui_tabs looks something like this right now:

def on_ui_tabs():
    # Method to create the extended prompt
    def generate_longer_prompt(prompt, temperature, top_k, max_length, repetition_penalty, num_return_sequences, use_blacklist=False):
        try:
            tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
            tokenizer.add_special_tokens({'pad_token': '[PAD]'})
            model = GPT2LMHeadModel.from_pretrained('FredZhang7/distilgpt2-stable-diffusion-v2')
        except Exception as e:
            print(f"Exception encountered while attempting to install tokenizer")
            return gr.update(), f"Error: {e}"
        try:
            print(f"Generate new prompt from: \"{prompt}\"")
            input_ids = tokenizer(prompt, return_tensors='pt').input_ids
            output = model.generate(input_ids, do_sample=True, temperature=temperature,
                                    top_k=top_k, max_length=max_length,
                                    num_return_sequences=num_return_sequences,
                                    repetition_penalty=repetition_penalty,
                                    penalty_alpha=0.6, no_repeat_ngram_size=1,
                                    early_stopping=True)
            print("Generation complete!")
            tempString = ""
            if (use_blacklist):
                blacklist = get_list_blacklist()
            for i in range(len(output)):

                tempString += str(i+1)+": "+tokenizer.decode(
                    output[i], skip_special_tokens=True) + "\n"

                if (use_blacklist):
                    for to_check in blacklist:
                        tempString = re.sub(
                            to_check, "", tempString, flags=re.IGNORECASE)
                if (i == 0):
                    global result_prompt

            result_prompt = tempString
            print(result_prompt)

            return {results: tempString,
                    send_to_img2img: gr.update(visible=True),
                    send_to_txt2img: gr.update(visible=True),
                    results_col: gr.update(visible=True),
                    warning: gr.update(visible=True),
                    promptNum_col: gr.update(visible=True)
                    }
        except Exception as e:
            print(
                f"Exception encountered while attempting to generate prompt: {e}")
            return gr.update(), f"Error: {e}"

    # structure
    txt2img_prompt = modules.ui.txt2img_paste_fields[0][0]
    img2img_prompt = modules.ui.img2img_paste_fields[0][0]

    with gr.Blocks(analytics_enabled=False) as prompt_generator:
        with gr.Column():
            with gr.Row():
                promptTxt = gr.Textbox(
                    lines=2, elem_id="promptTxt", label="Start of the prompt")
        with gr.Column():
            with gr.Row():
                temp_slider = gr.Slider(
                    elem_id="temp_slider", label="Temperature", interactive=True, minimum=0, maximum=1, value=0.9)
                max_length_slider = gr.Slider(
                    elem_id="max_length_slider", label="Max Length", interactive=True, minimum=1, maximum=200, step=1, value=80)
                top_k_slider = gr.Slider(
                    elem_id="top_k_slider", label="Top K", value=8, minimum=1, maximum=20, interactive=True)
        with gr.Column():
            with gr.Row():
                repetition_penalty_slider = gr.Slider(
                    elem_id="repetition_penalty_slider", label="Repetition Penalty", value=1.2, minimum=0, maximum=10, interactive=True)
                num_return_sequences_slider = gr.Slider(
                    elem_id="num_return_sequences_slider", label="How Many To Generate", value=5, minimum=1, maximum=20, interactive=True, step=1)
        with gr.Column():
            with gr.Row():
                use_blacklist_checkbox = gr.Checkbox(label="Use blacklist?")
                gr.HTML(value="<center>Using <code>\".\extensions\stable-diffusion-webui-Prompt_Generator\\blacklist.txt</code>\".<br>It will delete any matches to the generated result (case insensitive).</center>")
        with gr.Column():
            with gr.Row():
                generateButton = gr.Button(
                    value="Generate", elem_id="generate_button")
        with gr.Column(visible=False) as results_col:
            results = gr.Text(
                label="Results", elem_id="Results_textBox", interactive=False)
        with gr.Column(visible=False) as promptNum_col:
            with gr.Row():
                promptNum = gr.Textbox(
                    lines=1, elem_id="promptNum", label="Send which prompt")
        with gr.Column():
            warning = gr.HTML(
                value="Select one number and send that prompt to txt2img or img2img", visible=False)
            with gr.Row():
                send_to_txt2img = gr.Button('Send to txt2img', visible=False)
                send_to_img2img = gr.Button('Send to img2img', visible=False)

        # events
        generateButton.click(fn=generate_longer_prompt, inputs=[promptTxt, temp_slider, top_k_slider, max_length_slider, repetition_penalty_slider, num_return_sequences_slider, use_blacklist_checkbox], outputs=[results, send_to_img2img, send_to_txt2img, results_col, warning, promptNum_col])

        send_to_img2img.click(add_to_prompt, inputs=[promptNum], outputs=[img2img_prompt])
        send_to_txt2img.click(add_to_prompt, inputs=[promptNum], outputs=[txt2img_prompt])
        send_to_txt2img.click(None, _js='switch_to_txt2img', inputs=None, outputs=None)
        send_to_img2img.click(None, _js="switch_to_img2img", inputs=None, outputs=None)
    return (prompt_generator, "Prompt Generator", "Prompt Generator"),

imrayya commented 1 year ago

Alright, I pushed those changes. Thanks for the help. I hope it's been resolved for everyone else too. If the issue persists after updating the extension, please reopen the issue.

Thank you @dnl13