crewAIInc / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com
MIT License

Does CrewAI allow using tools while streaming? #798

Open tyler-suard-parker opened 2 months ago

tyler-suard-parker commented 2 months ago

Hello. Can I use tools while streaming? Autogen supports it, I was just wondering if CrewAI can do it too.

fatmind commented 2 months ago

> Hello. Can I use tools while streaming? Autogen supports it, I was just wondering if CrewAI can do it too.

I didn't catch your meaning. If this is about executing a sequential task, where the corresponding agent completes the task by interacting with a large language model, then streaming isn't necessary, since nothing needs to be displayed to the user along the way.

tyler-suard-parker commented 2 months ago

@fatmind thank you for your response. Here is my meaning:

1. We set up a connection to the OpenAI API and set `streaming = True`.
2. We register a function in CrewAI called `get_sales_figures()`.
3. The user sends a question: "How many pickle jars did our company sell last week?"
4. CrewAI sends its prompt and the question to OpenAI, and the response is streamed back.
5. CrewAI catches the function call in the stream: `get_sales_figures("last week")`.
6. CrewAI runs that function and receives a result.
7. CrewAI sends that result and a prompt back to OpenAI for the final answer.
8. OpenAI streams the final answer back to CrewAI, while CrewAI streams that same answer to a frontend.

I wanted to know if this is possible with CrewAI's current architecture.
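For step 5, the tricky part is that when the OpenAI chat-completions API streams a tool call, the function's arguments arrive as small JSON fragments spread across many chunks, so the client has to buffer them before it can execute anything. A minimal sketch of that reassembly, outside CrewAI (the `get_sales_figures` name and its `period` parameter are hypothetical, and the `(name, fragment)` pairs stand in for the `delta.tool_calls` pieces the API emits):

```python
import json

def accumulate_tool_call(deltas):
    """Assemble one function call from streamed argument fragments.

    Each element of `deltas` is a (name, arguments_fragment) pair;
    in the real streaming API the function name is only present on
    the first fragment and the JSON arguments trickle in piecewise.
    """
    name = None
    arg_parts = []
    for frag_name, frag_args in deltas:
        if frag_name is not None:
            name = frag_name          # name arrives once, on the first chunk
        if frag_args:
            arg_parts.append(frag_args)
    # Only once the stream is done is the argument JSON parseable.
    return name, json.loads("".join(arg_parts))

# Fragments as they might arrive for get_sales_figures("last week"):
deltas = [
    ("get_sales_figures", ""),
    (None, '{"period": '),
    (None, '"last week"}'),
]
name, args = accumulate_tool_call(deltas)
```

Note that even with `stream=True`, the tool can only run after its argument fragments have all arrived, so a streaming client still pauses token output around each tool call.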

fatmind commented 1 month ago

I get it now. From the current code, it appears that CrewAI sends the question to OpenAI and waits to receive the complete `final_output` before proceeding to call the tool:

        final_output: Any = None
        if self.stream_runnable:
            # Use streaming to make sure that the underlying LLM is invoked in a
            # streaming
            # fashion to make it possible to get access to the individual LLM tokens
            # when using stream_log with the Agent Executor.
            # Because the response from the plan is not a generator, we need to
            # accumulate the output into final output and return that.
            for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
                if final_output is None:
                    final_output = chunk
                else:
                    final_output += chunk
        else:
            final_output = self.runnable.invoke(inputs, config={"callbacks": callbacks})

        return final_output

fatmind commented 1 month ago

> @fatmind thank you for your response. Here is my meaning:
>
> We set up a connection to the OpenAI API, and set streaming = True.
>
> We set up a function in CrewAI called "get_sales_figures()"
>
> The user sends a question, "How many pickle jars did our company sell last week?"
>
> CrewAI sends its prompt and the question to OpenAI and the response is streamed back.
>
> CrewAI catches the function call in the stream, get_sales_figures("last week")
>
> CrewAI runs that function and receives a result
>
> CrewAI sends that result and a prompt back to OpenAI for the final answer
>
> OpenAI streams back the final answer to CrewAI, as CrewAI is streaming the same answer to a frontend.
>
> I wanted to know if this is possible using CrewAI's current architecture.

Therefore, it seems that this implementation is not possible.
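To see why the loop in that snippet blocks the frontend, note that it consumes the stream internally and only returns the accumulated result; individual tokens never leave the function unless a per-token callback (in the LangChain stack, something like `on_llm_new_token`) is attached. A toy illustration of that shape (`run_streamed` and `on_token` are hypothetical names, not CrewAI APIs):

```python
def run_streamed(chunks, on_token=None):
    """Mimic the accumulation loop above: chunks are folded into one
    final output. A per-token callback is the only way tokens escape
    before the stream is exhausted."""
    final_output = None
    for chunk in chunks:
        if on_token:
            on_token(chunk)  # a frontend could forward tokens here
        final_output = chunk if final_output is None else final_output + chunk
    return final_output

seen = []
result = run_streamed(["How ", "many?"], on_token=seen.append)
```

So the underlying LLM call is streamed, but without wiring a callback through to the frontend, the caller only ever sees the complete answer, which matches the conclusion above.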

github-actions[bot] commented 3 days ago

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.