@jlowin here is the colab notebook link: https://colab.research.google.com/drive/1dXPRoyw7a5KiHoKIMjQm4eVttycwo3FA?usp=sharing
I am facing the same issue on Windows too.
This one is super weird, I get the error in the notebook too but not locally. It has something to do with LangChain being stuck on Pydantic v1 -- I will look into it.
@jlowin I am getting it in Windows too.
Resolved the subprocess issue on Windows by setting PREFECT_API_URL="http://127.0.0.1:4200/api".
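For anyone else hitting this, a minimal sketch of the workaround (assuming a local Prefect server is already running via prefect server start, and that setting the variable before importing controlflow is enough for your setup):

import os

# Point Prefect at an explicitly started local server instead of the
# ephemeral subprocess server (assumes `prefect server start` is running).
os.environ["PREFECT_API_URL"] = "http://127.0.0.1:4200/api"

import controlflow as cf  # import after the environment variable is set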
But I got another issue:
File "C:\Users\abida\anaconda3\envs\py310\lib\site-packages\controlflow\agents\agent.py", line 308, in _run_model
{response.json(indent=2)}
File "C:\Users\abida\anaconda3\envs\py310\lib\site-packages\pydantic\main.py", line 1119, in json
raise TypeError('`dumps_kwargs` keyword arguments are no longer supported.')
TypeError: `dumps_kwargs` keyword arguments are no longer supported.
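For context on that TypeError: Pydantic v2 removed the dumps_kwargs passthrough on .json(), so keyword arguments like indent=2 have to go through model_dump_json instead. A minimal illustrative sketch (not CF's actual code):

from pydantic import BaseModel

class Response(BaseModel):
    content: str

r = Response(content="hi")

# Pydantic v1 style -- raises the TypeError above on Pydantic v2:
# r.json(indent=2)

# Pydantic v2 equivalent:
print(r.model_dump_json(indent=2))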
ControlFlow version: 0.9.3
Prefect version: 3.0.2
LangChain Core version: 0.3.1
Python version: 3.10.14
Platform: Windows-10-10.0.22631-SP0
Path: C:\Users\abida\anaconda3\envs\py310\Lib
@kingabzpro thanks for bringing this up - it looks like the langchain core release yesterday (?) completely changed their Pydantic support, so all the hoops we were jumping through to convert from Pydantic v1 to Pydantic v2 are now breaking; all CF installs from the last day or so are probably broken. I've pinned langchain core and will release this ASAP while I work on upgrading langchain support...
@jlowin thank you. I guess I have to postpone the tutorial now.
CF 0.9.4 is out and pins langchain, and I've confirmed the Colab notebook runs properly with GPT-4o.
Ok, and I understand why mini was erroring. You provided four tweets, but asked for the result to be either hate or love as a constrained choice (i.e. choose a single option from the provided labels). Note that this means you want the output of the entire task to be either the single string "hate" or "love". GPT-4o was doing what you asked, and returning a single string answer. GPT-4o mini was attempting to run the task 4 times, once for each input, which isn't allowed (once a task is done it's done!).
Arguably neither was correct because in all cases you were providing 4 tweets but only asking for a single string label. To let the agent provide one label per tweet, you need to indicate that you expect a list as your result type. Here is how to do that:
import controlflow as cf
from typing import Literal

# `classifier` (the agent) and `tweets` (the list of tweet strings) are
# defined earlier in the notebook.

# Set up a ControlFlow task to classify tweets
classifications = cf.run(
    'Classify the tweets',
    result_type=list[Literal['hate', 'love']],
    agents=[classifier],
    context=dict(tweets=tweets),
)

print(classifications)
This indicates "the result is a list of values selected from the literals 'hate' and 'love'" but doesn't say how many, allowing you to pass as many tweets as you like (4 in your case).
Since the output is now a list, and the task isn't forced to choose just one string overall, both GPT-4o and GPT-4o mini will correctly classify all four tweets and return a list of four labels.
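For contrast, a minimal sketch of the single-label form, classifying one tweet per task (classifier and tweets are assumed from the notebook above):

import controlflow as cf
from typing import Literal

# Single-label version: one tweet in, exactly one label out.
single = cf.run(
    'Classify the tweet',
    result_type=Literal['hate', 'love'],   # constrained choice, no list
    agents=[classifier],
    context=dict(tweet=tweets[0]),
)

print(single)  # either "hate" or "love"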
I'm going to close the issue since the Colab MRE is resolved. https://github.com/PrefectHQ/ControlFlow/issues/319 is open as a tracking issue for addressing the breaking changes that LangChain 0.3 introduced, thanks again for bringing that to my attention!
I've updated the documentation to (hopefully) be more clear on the difference between requesting a single label or multiple labels. In the future we can try to detect and optimize the multi-label case the same way we do the single-label classifier (which uses only a single token when possible).
PR: https://github.com/PrefectHQ/ControlFlow/pull/320/files
Live docs: https://controlflow.ai/patterns/task-results#a-list-of-labels
Thank you. I will be working all night to finish the tutorial. Currently busy in a meeting.
@jlowin I think we have another issue.
here is the link: https://colab.research.google.com/drive/1dXPRoyw7a5KiHoKIMjQm4eVttycwo3FA?usp=sharing#scrollTo=xlCP4aHroOaU
RuntimeError: Timed out while attempting to connect to ephemeral Prefect API server.
That's a known issue with the ephemeral server that the Prefect team is working on (cc @aaazzam). It's challenging to replicate because it's stochastic -- essentially, for some reason the server just takes a little too long to start, so the first request fails. Running a separate server with prefect server start will solve it for now (or retrying with an ephemeral server, which isn't as satisfying a response, I know).
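If you'd rather stay on the ephemeral server, a crude retry sketch (purely illustrative; run_with_retry and my_flow are hypothetical names, and this simply re-calls the flow if startup times out):

import time

def run_with_retry(flow_fn, *args, attempts=3, delay=5):
    # Re-call the flow a few times in case the ephemeral Prefect API
    # server is slow to start on the first attempt.
    for attempt in range(attempts):
        try:
            return flow_fn(*args)
        except RuntimeError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)

# e.g. result = run_with_retry(my_flow, my_input)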
Got it. I guess I won't be using Colab then.
@jlowin I am just sorry you have to go through so many of my issues in a day. That's why companies get scared of me. When I touch the product, I break it in so many ways.
At least with CF you're hitting onboarding issues I want to solve
@jlowin can you help me resolve this issue:
link: https://colab.research.google.com/drive/1dXPRoyw7a5KiHoKIMjQm4eVttycwo3FA?usp=sharing#scrollTo=3XLz5wVbw2u7 the code:
import controlflow as cf

@cf.flow
def analyze_text(text: str):
    # Create a parent task to represent the entire analysis
    with cf.Task(
        "Analyze the given text",
        instructions="Include each subtask result in your result",
        result_type=dict,
        context={"text": text}
    ) as parent_task:

        # Child task 1: Identify key terms
        key_terms = cf.Task(
            "Identify up to 10 key terms in the text",
            result_type=list[str]
        )

        # Child task 2: Summarize (depends on key_terms)
        summary = cf.Task(
            "Summarize the text in one sentence",
            result_type=str,
            depends_on=[key_terms]
        )

    # Run the parent task, which will automatically run all child tasks
    result = parent_task.run()
    return result
# Execute the flow
text = """
Agentic workflow orchestration refers to the coordination of autonomous
agents within a structured workflow, allowing them to operate independently
while achieving a common objective. Unlike traditional workflows that rigidly
define tasks and dependencies, agentic workflows empower agents—typically
AI-driven—to make decisions, prioritize tasks, and collaborate dynamically.
Each agent in this system operates with a degree of autonomy, enabling it to
adapt to changing conditions, handle uncertainties, and optimize its own
actions within the broader workflow. This approach enhances flexibility and
scalability, making it particularly effective for complex, multi-step
processes where real-time adjustments and intelligent decision-making are
crucial. By leveraging agents with defined roles and responsibilities, agentic
workflows maintain structure while enabling innovation and responsiveness in
task execution.
"""
result = analyze_text(text)
print(result)
16:49:48.656 | ERROR | Task run 'Tool call: mark_task_0a14d134_successful' - Task run failed with exception: TypeError("Task.create_success_tool.<locals>.succeed() missing 1 required positional argument: 'result'") - Retries are exhausted
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/prefect/task_engine.py", line 752, in run_context
yield self
File "/usr/local/lib/python3.10/dist-packages/prefect/task_engine.py", line 1302, in run_task_sync
engine.call_task_fn(txn)
File "/usr/local/lib/python3.10/dist-packages/prefect/task_engine.py", line 775, in call_task_fn
result = call_with_parameters(self.task.fn, parameters)
File "/usr/local/lib/python3.10/dist-packages/prefect/utilities/callables.py", line 206, in call_with_parameters
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/controlflow/tools/tools.py", line 62, in run
result = self.fn(**input)
TypeError: Task.create_success_tool.<locals>.succeed() missing 1 required positional argument: 'result'
16:49:48.702 | ERROR | Task run 'Tool call: mark_task_0a14d134_successful' - Finished in state Failed("Task run encountered an exception TypeError: Task.create_success_tool.<locals>.succeed() missing 1 required positional argument: 'result'"
So this is actually an OpenAI bug that we noticed last week:
Now I think that your flow actually ran correctly, because what happens is the agent messes up the first call because it passes no args, we tell it that it messed up, and it tries again. So your code block ran properly and result has a value even though GPT-4 messes this up.
BUT nonetheless the tool call failed the first time, and that's the error that's being logged. That's probably a little too aggressive, and we want to tone that back - let me think through the way to control verbosity for transient errors like that. There's a debug setting cf.settings.tools_raise_on_error that would ACTUALLY raise an error there, and an existing one cf.settings.tools_verbose that doesn't attempt to catch this. I'll think on it.
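If you do want the failure to be loud while debugging, a minimal sketch of flipping those settings (names taken from the comment above; defaults may vary by version):

import controlflow as cf

# Raise immediately on failed tool calls instead of logging and letting
# the agent retry.
cf.settings.tools_raise_on_error = True

# Log tool-call details verbosely.
cf.settings.tools_verbose = True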
If instead your flow actually raised an error here, instead of just logging an error, that would surprise me -- agents should re-attempt tool calls. The agent does have the ability to declare that the task failed but that would be a different error.
btw if that's the case this is actually one of my favorite examples for why a workflow framework matters for agents :) they don't always interact with the real world correctly, even within their own API!
@jlowin am I creating a case study for you? 😅😅🤫🤫
All I wanted was to test a few things and write a detailed review/tutorial about CF. But this is life. Nothing good comes easy.
Description
My head hurts. I am just trying to run the getting started tutorial; first I was facing a JSON handler error and now this:
Example Code
Version Information
Additional Context
I just want to write a simple tutorial for Machine Learning Mastery and it is getting tough for me to even start.