gradio-app / gradio


TypeError: 'EventListenerMethod' object is not iterable #6084

Open SingL3 opened 12 months ago

SingL3 commented 12 months ago

Describe the bug

I am trying to add a Textbox to the ChatInterface for a LangChain application. I added a self.context_textbox attribute to the class and added it to the outputs of the submit chain:

https://github.com/gradio-app/gradio/blob/8241f9a7bd034256aabb9efa9acb9e36216557ac/gradio/chat_interface.py#L255-L277

       submit_event = (
            self.textbox.submit(
                self._clear_and_save_textbox,
                [self.textbox],
                [self.textbox, self.saved_input],
                api_name=False,
                queue=False,
            )
            .then(
                self._display_input,
                [self.saved_input, self.chatbot_state],
                [self.chatbot, self.chatbot_state],
                api_name=False,
                queue=False,
            )
            .then(
                submit_fn,
                [self.saved_input, self.chatbot_state] + self.additional_inputs,
  +              [self.chatbot, self.chatbot_state, self.context_textbox],
                api_name=False,
            )
        )
+        self._setup_stop_events(self.textbox.submit, submit_event, submit=True)

And I have modified the submit_fn to yield 3 outputs: https://github.com/gradio-app/gradio/blob/8241f9a7bd034256aabb9efa9acb9e36216557ac/gradio/chat_interface.py#L395-L445

    async def _submit_fn(
        self,
        message: str,
        history_with_input: list[list[str | None]],
        *args,
    ) -> tuple[list[list[str | None]], list[list[str | None]], str]:
        history = history_with_input[:-1]
        if self.is_async:
            response = await self.fn(message, history, *args)
        else:
            response = await anyio.to_thread.run_sync(
                self.fn, message, history, *args, limiter=self.limiter
            )
        response, context = self._split_context(response)
        history.append([message, response])
        return history, history, context

    async def _stream_fn(
        self,
        message: str,
        history_with_input: list[list[str | None]],
        *args,
    ) -> AsyncGenerator:
        history = history_with_input[:-1]
        if self.is_async:
            generator = self.fn(message, history, *args)
        else:
            generator = await anyio.to_thread.run_sync(
                self.fn, message, history, *args, limiter=self.limiter
            )
            generator = SyncToAsyncIterator(generator, self.limiter)
        try:
            context = ""
            first_response = await async_iteration(generator)
            first_response, context = self._split_context(first_response)
            update = history + [[message, first_response]]
            yield update, update, context
        except StopIteration:
            message, _ = self._split_context(message)
            update = history + [[message, None]]
            yield update, update, context
        async for response in generator:
            response, _ = self._split_context(response)
            update = history + [[message, response]]
            yield update, update, context

And I got the following error on launch:

Traceback (most recent call last):
  File "/mnt/home/xxxxxx/llm/llama-2-7b-chat/langchain_gradio.py", line 681, in <module>
    chat_interface = LangChainChatInterface(  # gr.ChatInterface(
  File "/mnt/home/xxxxxx/llm/llama-2-7b-chat/langchain_gradio.py", line 236, in __init__
    self._setup_events()
  File "/mnt/home/xxxxxx/llm/llama-2-7b-chat/langchain_gradio.py", line 311, in _setup_events
    self._setup_stop_events(self.textbox.submit, submit_event)
  File "/mnt/data/conda/envs/xxxxxx/langchain/lib/python3.10/site-packages/gradio/chat_interface.py", line 332, in _setup_stop_events
    for event_trigger in event_triggers:
TypeError: 'EventListenerMethod' object is not iterable

However, when I modified the _setup_stop_events function to:

def _setup_stop_events(
+        self, event_trigger: EventListenerMethod, event_to_cancel: Dependency, submit: bool = False
    ) -> None:
        if self.stop_btn and self.is_generator:
            if self.submit_btn:
                event_trigger(
                    lambda: (Button.update(visible=False), Button.update(visible=True)),
                    None,
                    [self.submit_btn, self.stop_btn],
                    api_name=False,
                    queue=False,
                )
                event_to_cancel.then(
                    lambda: (Button.update(visible=True), Button.update(visible=False)),
                    None,
                    [self.submit_btn, self.stop_btn],
                    api_name=False,
                    queue=False,
                )
            else:
                event_trigger(
                    lambda: Button.update(visible=True),
                    None,
                    [self.stop_btn],# if not submit else [self.stop_btn, self.context_textbox],
                    api_name=False,
                    queue=False,
                )
                event_to_cancel.then(
                    lambda: Button.update(visible=False),
                    None,
                    [self.stop_btn],# if not submit else [self.stop_btn, self.context_textbox],
                    api_name=False,
                    queue=False,
                )
            self.stop_btn.click(
                None,
                None,
                None,
                cancels=event_to_cancel,
                api_name=False,
            )

it runs successfully, even though the only difference from the original function is the added submit argument. Why is that?
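For reference, the _setup_stop_events in the installed gradio 3.47.1 (the file the traceback points to) apparently expects a list of triggers rather than a single one. Here is a rough sketch of that signature, reconstructed from the traceback rather than copied from the source, so the exact parameter names are assumptions:

    # Sketch of ChatInterface._setup_stop_events as shipped in gradio 3.47.x,
    # reconstructed from the traceback above (parameter names are assumptions):
    def _setup_stop_events(
        self,
        event_triggers: list[EventListenerMethod],  # a list of triggers, not a single one
        event_to_cancel: Dependency,
    ) -> None:
        if self.stop_btn and self.is_generator:
            for event_trigger in event_triggers:  # line 332 in the traceback
                ...

If that is what the installed version looks like, passing a single EventListenerMethod to it would fail at the for loop, whereas an overridden single-trigger version defined in the subclass never iterates, which could account for the difference.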

Have you searched existing issues? 🔎

Reproduction

class LangChainChatInterface(gr.ChatInterface):
    def __init__(
        self,
        context_textbox,
        fn: Callable,
        *,
        chatbot: Chatbot | None = None,
        textbox: Textbox | None = None,
        additional_inputs: str | IOComponent | list[str | IOComponent] | None = None,
        additional_inputs_accordion_name: str = "Additional Inputs",
        examples: list[str] | None = None,
        cache_examples: bool | None = None,
        title: str | None = None,
        description: str | None = None,
        theme: Theme | str | None = None,
        css: str | None = None,
        analytics_enabled: bool | None = None,
        submit_btn: str | None | Button = "Submit",
        stop_btn: str | None | Button = "Stop",
        retry_btn: str | None | Button = "🔄  Retry",
        undo_btn: str | None | Button = "↩ī¸ Undo",
        clear_btn: str | None | Button = "🗑ī¸  Clear",
        autofocus: bool = True,
    ):
        """
        Parameters:
            fn: the function to wrap the chat interface around. Should accept two parameters: a string input message and list of two-element lists of the form [[user_message, bot_message], ...] representing the chat history, and return a string response. See the Chatbot documentation for more information on the chat history format.
            chatbot: an instance of the gr.Chatbot component to use for the chat interface, if you would like to customize the chatbot properties. If not provided, a default gr.Chatbot component will be created.
            textbox: an instance of the gr.Textbox component to use for the chat interface, if you would like to customize the textbox properties. If not provided, a default gr.Textbox component will be created.
            additional_inputs: an instance or list of instances of gradio components (or their string shortcuts) to use as additional inputs to the chatbot. If components are not already rendered in a surrounding Blocks, then the components will be displayed under the chatbot, in an accordion.
            additional_inputs_accordion_name: the label of the accordion to use for additional inputs, only used if additional_inputs is provided.
            examples: sample inputs for the function; if provided, appear below the chatbot and can be clicked to populate the chatbot input.
            cache_examples: If True, caches examples in the server for fast runtime in examples. The default option in HuggingFace Spaces is True. The default option elsewhere is False.
            title: a title for the interface; if provided, appears above chatbot in large font. Also used as the tab title when opened in a browser window.
            description: a description for the interface; if provided, appears above the chatbot and beneath the title in regular font. Accepts Markdown and HTML content.
            theme: Theme to use, loaded from gradio.themes.
            css: custom css or path to custom css file to use with interface.
            analytics_enabled: Whether to allow basic telemetry. If None, will use GRADIO_ANALYTICS_ENABLED environment variable if defined, or default to True.
            submit_btn: Text to display on the submit button. If None, no button will be displayed. If a Button object, that button will be used.
            stop_btn: Text to display on the stop button, which replaces the submit_btn when the submit_btn or retry_btn is clicked and response is streaming. Clicking on the stop_btn will halt the chatbot response. If set to None, stop button functionality does not appear in the chatbot. If a Button object, that button will be used as the stop button.
            retry_btn: Text to display on the retry button. If None, no button will be displayed. If a Button object, that button will be used.
            undo_btn: Text to display on the delete last button. If None, no button will be displayed. If a Button object, that button will be used.
            clear_btn: Text to display on the clear button. If None, no button will be displayed. If a Button object, that button will be used.
            autofocus: If True, autofocuses to the textbox when the page loads.
        """
        super(gr.ChatInterface, self).__init__(
            analytics_enabled=analytics_enabled,
            mode="chat_interface",
            css=css,
            title=title or "Gradio",
            theme=theme,
        )
        self.fn = fn
        self.is_async = inspect.iscoroutinefunction(
            self.fn
        ) or inspect.isasyncgenfunction(self.fn)
        self.is_generator = inspect.isgeneratorfunction(
            self.fn
        ) or inspect.isasyncgenfunction(self.fn)
        self.examples = examples
        if self.space_id and cache_examples is None:
            self.cache_examples = True
        else:
            self.cache_examples = cache_examples or False
        self.buttons: list[Button] = []

        if additional_inputs:
            if not isinstance(additional_inputs, list):
                additional_inputs = [additional_inputs]
            self.additional_inputs = [
                get_component_instance(i, render=False) for i in additional_inputs  # type: ignore
            ]
        else:
            self.additional_inputs = []
        self.additional_inputs_accordion_name = additional_inputs_accordion_name

        with self:
            if title:
                Markdown(
                    f"<h1 style='text-align: center; margin-bottom: 1rem'>{self.title}</h1>"
                )
            if description:
                Markdown(description)

            with Column(variant="panel"):
                if chatbot:
                    self.chatbot = chatbot.render()
                else:
                    self.chatbot = Chatbot(label="Chatbot")

                with Group():
                    with Row():
                        if textbox:
                            textbox.container = False
                            textbox.show_label = False
                            self.textbox = textbox.render()
                        else:
                            self.textbox = Textbox(
                                container=False,
                                show_label=False,
                                label="Message",
                                placeholder="Type a message...",
                                scale=7,
                                autofocus=autofocus,
                            )
                        if submit_btn:
                            if isinstance(submit_btn, Button):
                                submit_btn.render()
                            elif isinstance(submit_btn, str):
                                submit_btn = Button(
                                    submit_btn,
                                    variant="primary",
                                    scale=1,
                                    min_width=150,
                                )
                            else:
                                raise ValueError(
                                    f"The submit_btn parameter must be a gr.Button, string, or None, not {type(submit_btn)}"
                                )
                        if stop_btn:
                            if isinstance(stop_btn, Button):
                                stop_btn.visible = False
                                stop_btn.render()
                            elif isinstance(stop_btn, str):
                                stop_btn = Button(
                                    stop_btn,
                                    variant="stop",
                                    visible=False,
                                    scale=1,
                                    min_width=150,
                                )
                            else:
                                raise ValueError(
                                    f"The stop_btn parameter must be a gr.Button, string, or None, not {type(stop_btn)}"
                                )
                        self.buttons.extend([submit_btn, stop_btn])

                with Row():
                    for btn in [retry_btn, undo_btn, clear_btn]:
                        if btn:
                            if isinstance(btn, Button):
                                btn.render()
                            elif isinstance(btn, str):
                                btn = Button(btn, variant="secondary")
                            else:
                                raise ValueError(
                                    f"All the _btn parameters must be a gr.Button, string, or None, not {type(btn)}"
                                )
                        self.buttons.append(btn)

                    self.fake_api_btn = Button("Fake API", visible=False)
                    self.fake_response_textbox = Textbox(
                        label="Response", visible=False
                    )
                    (
                        self.submit_btn,
                        self.stop_btn,
                        self.retry_btn,
                        self.undo_btn,
                        self.clear_btn,
                    ) = self.buttons

            if examples:
                if self.is_generator:
                    examples_fn = self._examples_stream_fn
                else:
                    examples_fn = self._examples_fn

                self.examples_handler = Examples(
                    examples=examples,
                    inputs=[self.textbox] + self.additional_inputs,
                    outputs=self.chatbot,
                    fn=examples_fn,
                )

            if context_textbox:
                self.context_textbox = context_textbox.render()
            else:
                self.context_textbox = gr.TextArea(interactive=False, show_copy_button=True)

            any_unrendered_inputs = any(
                not inp.is_rendered for inp in self.additional_inputs
            )
            if self.additional_inputs and any_unrendered_inputs:
                with Accordion(self.additional_inputs_accordion_name, open=False):
                    for input_component in self.additional_inputs:
                        if not input_component.is_rendered:
                            input_component.render()

            # The example caching must happen after the input components have rendered
            if cache_examples:
                client_utils.synchronize_async(self.examples_handler.cache)

            self.saved_input = State()
            self.chatbot_state = State([])

            self._setup_events()
            self._setup_api()

    def _split_context(self, response):
        t = response.split("<|Bot|>")
        generation, context = t[1], t[0]
        return generation, context

    def _setup_stop_events(
        self, event_trigger: EventListenerMethod, event_to_cancel: Dependency, submit: bool = False
    ) -> None:
        if self.stop_btn and self.is_generator:
            if self.submit_btn:
                event_trigger(
                    lambda: (Button.update(visible=False), Button.update(visible=True)),
                    None,
                    [self.submit_btn, self.stop_btn],
                    api_name=False,
                    queue=False,
                )
                event_to_cancel.then(
                    lambda: (Button.update(visible=True), Button.update(visible=False)),
                    None,
                    [self.submit_btn, self.stop_btn],
                    api_name=False,
                    queue=False,
                )
            else:
                event_trigger(
                    lambda: Button.update(visible=True),
                    None,
                    [self.stop_btn],
                    api_name=False,
                    queue=False,
                )
                event_to_cancel.then(
                    lambda: Button.update(visible=False),
                    None,
                    [self.stop_btn],
                    api_name=False,
                    queue=False,
                )
            self.stop_btn.click(
                None,
                None,
                None,
                cancels=event_to_cancel,
                api_name=False,
            )

    def _setup_events(self) -> None:
        submit_fn = self._stream_fn if self.is_generator else self._submit_fn
        submit_event = (
            self.textbox.submit(
                self._clear_and_save_textbox,
                [self.textbox],
                [self.textbox, self.saved_input],
                api_name=False,
                queue=False,
            )
            .then(
                self._display_input,
                [self.saved_input, self.chatbot_state],
                [self.chatbot, self.chatbot_state],
                api_name=False,
                queue=False,
            )
            .then(
                submit_fn,
                [self.saved_input, self.chatbot_state] + self.additional_inputs,
                [self.chatbot, self.chatbot_state, self.context_textbox],
                api_name=False,
            )
        )
        self._setup_stop_events(self.textbox.submit, submit_event, submit=True)

        if self.submit_btn:
            click_event = (
                self.submit_btn.click(
                    self._clear_and_save_textbox,
                    [self.textbox],
                    [self.textbox, self.saved_input],
                    api_name=False,
                    queue=False,
                )
                .then(
                    self._display_input,
                    [self.saved_input, self.chatbot_state],
                    [self.chatbot, self.chatbot_state],
                    api_name=False,
                    queue=False,
                )
                .then(
                    submit_fn,
                    [self.saved_input, self.chatbot_state] + self.additional_inputs,
                    [self.chatbot, self.chatbot_state, self.context_textbox],
                    api_name=False,
                )
            )
            self._setup_stop_events(self.submit_btn.click, click_event)

        if self.retry_btn:
            retry_event = (
                self.retry_btn.click(
                    self._delete_prev_fn,
                    [self.chatbot_state],
                    [self.chatbot, self.saved_input, self.chatbot_state, self.context_textbox],
                    api_name=False,
                    queue=False,
                )
                .then(
                    self._display_input,
                    [self.saved_input, self.chatbot_state],
                    [self.chatbot, self.chatbot_state],
                    api_name=False,
                    queue=False,
                )
                .then(
                    submit_fn,
                    [self.saved_input, self.chatbot_state] + self.additional_inputs,
                    [self.chatbot, self.chatbot_state],
                    api_name=False,
                )
            )
            self._setup_stop_events(self.retry_btn.click, retry_event)

        if self.undo_btn:
            self.undo_btn.click(
                self._delete_prev_fn,
                [self.chatbot_state],
                [self.chatbot, self.saved_input, self.chatbot_state, self.context_textbox],
                api_name=False,
                queue=False,
            ).then(
                lambda x: x,
                [self.saved_input],
                [self.textbox],
                api_name=False,
                queue=False,
            )

        if self.clear_btn:
            self.clear_btn.click(
                lambda: ([], [], None, ""),
                None,
                [self.chatbot, self.chatbot_state, self.saved_input, self.context_textbox],
                queue=False,
                api_name=False,
            )

    async def _submit_fn(
        self,
        message: str,
        history_with_input: list[list[str | None]],
        *args,
    ) -> tuple[list[list[str | None]], list[list[str | None]], str]:
        history = history_with_input[:-1]
        if self.is_async:
            response = await self.fn(message, history, *args)
        else:
            response = await anyio.to_thread.run_sync(
                self.fn, message, history, *args, limiter=self.limiter
            )
        response, context = self._split_context(response)
        history.append([message, response])
        return history, history, context

    async def _stream_fn(
        self,
        message: str,
        history_with_input: list[list[str | None]],
        *args,
    ) -> AsyncGenerator:
        history = history_with_input[:-1]
        if self.is_async:
            generator = self.fn(message, history, *args)
        else:
            generator = await anyio.to_thread.run_sync(
                self.fn, message, history, *args, limiter=self.limiter
            )
            generator = SyncToAsyncIterator(generator, self.limiter)
        try:
            context = ""
            first_response, context = await async_iteration(generator)
            # first_response, context = self._split_context(first_response)
            update = history + [[message, first_response]]
            yield update, update, context
        except StopIteration:
            # message, _ = self._split_context(message)
            update = history + [[message, None]]
            yield update, update, context
        async for response in generator:
            # response, _ = self._split_context(response)
            update = history + [[message, response]]
            yield update, update, context

    def _delete_prev_fn(
        self, history: list[list[str | None]]
    ) -> tuple[list[list[str | None]], str, list[list[str | None]], str]:
        try:
            message, _ = history.pop()
        except IndexError:
            message = ""
        return history, message or "", history, ""

Screenshot

No response

Logs

Traceback (most recent call last):
  File "/mnt/home/xxxxxx/llm/llama-2-7b-chat/langchain_gradio.py", line 681, in <module>
    chat_interface = LangChainChatInterface(  # gr.ChatInterface(
  File "/mnt/home/xxxxxx/llm/llama-2-7b-chat/langchain_gradio.py", line 236, in __init__
    self._setup_events()
  File "/mnt/home/xxxxxx/llm/llama-2-7b-chat/langchain_gradio.py", line 311, in _setup_events
    self._setup_stop_events(self.textbox.submit, submit_event)
  File "/mnt/data/conda/envs/xxxxxx/langchain/lib/python3.10/site-packages/gradio/chat_interface.py", line 332, in _setup_stop_events
    for event_trigger in event_triggers:
TypeError: 'EventListenerMethod' object is not iterable

System Info

Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 3.47.1
gradio_client version: 0.6.0

------------------------------------------------
gradio dependencies in your environment:

aiofiles: 23.1.0
altair: 5.0.1
fastapi: 0.95.2
ffmpy: 0.3.1
gradio-client==0.6.0 is not installed.
httpx: 0.24.1
huggingface-hub: 0.17.3
importlib-resources: 6.0.1
jinja2: 3.1.2
markupsafe: 2.1.3
matplotlib: 3.7.2
numpy: 1.24.0
orjson: 3.9.5
packaging: 23.1
pandas: 2.0.0
pillow: 9.5.0
pydantic: 1.10.9
pydub: 0.25.1
python-multipart: 0.0.6
pyyaml: 6.0
requests: 2.31.0
semantic-version: 2.10.0
typing-extensions: 4.6.3
uvicorn: 0.23.2
websockets: 11.0.3
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.

gradio_client dependencies in your environment:

fsspec: 2023.6.0
httpx: 0.24.1
huggingface-hub: 0.17.3
packaging: 23.1
requests: 2.31.0
typing-extensions: 4.6.3
websockets: 11.0.3

Severity

I can work around it

abidlabs commented 3 days ago

Hi, apologies for the late follow-up. We haven't had a chance to look into this issue, but the Gradio codebase has changed quite significantly since this issue was created. Could you let us know if this is still an issue in the latest version of Gradio (pip install --upgrade gradio)? Thanks!