tyler-suard-parker opened 2 months ago
Hello. Can I use tools while streaming? Autogen supports it, I was just wondering if CrewAI can do it too.
I didn't catch your meaning. If you mean executing a sequential task, where the corresponding agent completes the task by interacting with a large language model, then a streaming approach isn't necessary, since there is nothing to display to the user mid-task.
@fatmind thank you for your response. Here is my meaning:
1. We set up a connection to the OpenAI API, and set streaming = True.
2. We set up a function in CrewAI called "get_sales_figures()".
3. The user sends a question, "How many pickle jars did our company sell last week?"
4. CrewAI sends its prompt and the question to OpenAI, and the response is streamed back.
5. CrewAI catches the function call in the stream, get_sales_figures("last week").
6. CrewAI runs that function and receives a result.
7. CrewAI sends that result and a prompt back to OpenAI for the final answer.
8. OpenAI streams back the final answer to CrewAI, while CrewAI streams the same answer to a frontend.
I wanted to know if this is possible using CrewAI's current architecture.
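For context, the tool-call-during-streaming flow described above can be sketched independently of CrewAI. This is a minimal illustration, not CrewAI or OpenAI client code: the chunk dictionaries mimic the shape of streamed tool-call deltas (where the function name and its JSON arguments arrive in fragments), and `get_sales_figures` is the hypothetical tool from the example.

```python
import json

# Hypothetical tool from the example above; not part of CrewAI.
def get_sales_figures(period: str) -> int:
    return 1200  # stand-in for a real sales lookup

# Simulated streaming deltas, shaped like the fragments a streaming
# chat-completions API emits for a function/tool call.
stream = [
    {"tool_call": {"name": "get_sales_figures", "arguments": ""}},
    {"tool_call": {"arguments": '{"period": '}},
    {"tool_call": {"arguments": '"last week"}'}},
]

def assemble_tool_call(chunks):
    """Accumulate name/argument fragments until the call is complete."""
    name, args = "", ""
    for delta in chunks:
        call = delta["tool_call"]
        if call.get("name"):
            name = call["name"]
        args += call.get("arguments", "")
    return name, json.loads(args)

# Once the call is fully assembled, dispatch it to the matching tool.
name, kwargs = assemble_tool_call(stream)
result = {"get_sales_figures": get_sales_figures}[name](**kwargs)
print(name, kwargs, result)
```

The key point is step 5: the consumer has to buffer the tool-call fragments until the arguments parse as complete JSON, run the tool, then resume forwarding tokens to the frontend.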
I see. From the current code, it appears that CrewAI sends the question to OpenAI and waits to receive the complete final_output before proceeding to call the tool.
```python
final_output: Any = None
if self.stream_runnable:
    # Use streaming to make sure that the underlying LLM is invoked in a
    # streaming fashion to make it possible to get access to the individual
    # LLM tokens when using stream_log with the Agent Executor.
    # Because the response from the plan is not a generator, we need to
    # accumulate the output into final_output and return that.
    for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
        if final_output is None:
            final_output = chunk
        else:
            final_output += chunk
else:
    final_output = self.runnable.invoke(inputs, config={"callbacks": callbacks})
return final_output
```
Therefore, it seems that this is not possible with the current implementation.