PrefectHQ / ControlFlow

🦾 Take control of your AI agents
https://controlflow.ai
Apache License 2.0

`dumps_kwargs` keyword arguments are no longer supported. #314

Closed kingabzpro closed 1 month ago

kingabzpro commented 1 month ago

Description

My head hurts. I am just trying to run the getting started tutorial; first I was facing a JSON handler error, and now this:

╭─ Agent: Tweet Classifier ────────────────────────────────────────────────────────────────────────╮
│                                                                                                  │
│  ⠙ Tool call: "mark_task_1cbddcef_successful"                                                    │
│                                                                                                  │
│    Tool args: {'result': 0}                                                                      │
│  ⠙ Tool call: "mark_task_1cbddcef_successful"                                                    │
│                                                                                                  │
│    Tool args: {'result': 1}                                                                      │
│  ⠙ Tool call: "mark_task_1cbddcef_successful"                                                    │
│                                                                                                  │
│    Tool args: {'result': 1}                                                                      │
│  ⠙ Tool call: "mark_task_1cbddcef_successful"                                                    │
│                                                                                                  │
│    Tool args: {'result': 1}                                                                      │
│                                                                                                  │
╰──────────────────────────────────────────────────────────────────────────────────── 10:29:08 AM ─╯
10:29:08.785 | ERROR   | Task run 'Call LLM' - Task run failed with exception: TypeError('`dumps_kwargs` keyword arguments are no longer supported.') - Retries are exhausted
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/prefect/task_engine.py", line 752, in run_context
    yield self
  File "/usr/local/lib/python3.10/dist-packages/prefect/task_engine.py", line 1371, in run_generator_task_sync
    gen_result = next(gen)
  File "/usr/local/lib/python3.10/dist-packages/controlflow/agents/agent.py", line 308, in _run_model
    {response.json(indent=2)}
  File "/usr/local/lib/python3.10/dist-packages/pydantic/main.py", line 1143, in json
    raise TypeError('`dumps_kwargs` keyword arguments are no longer supported.')
TypeError: `dumps_kwargs` keyword arguments are no longer supported.
10:29:08.796 | ERROR   | Task run 'Call LLM' - Finished in state Failed('Task run encountered an exception TypeError: `dumps_kwargs` keyword arguments a

Example Code

import controlflow as cf

tweets = [
    "Negativity spreads too easily here. #sigh",
    "Sometimes venting is necessary. #HateTherapy",
    "Love fills the air today! 💖 #Blessed",
    "Thankful for all my Twitter friends! 🌟"
]

# Create a specialized agent 
classifier = cf.Agent(
    name="Tweet Classifier",
    model="openai/gpt-4o-mini",
    instructions="You are an expert at quickly classifying tweets.",
)
# Set up a ControlFlow task to classify tweets
classifications = cf.run(
    'Classify the tweets',
    result_type=['hate', 'love'],
    agents=[classifier],
    context=dict(tweets=tweets),
)

print(classifications)

Version Information

   ControlFlow version: 0.9.3
       Prefect version: 3.0.2                              
LangChain Core version: 0.3.1                              
        Python version: 3.10.12                            
              Platform: Linux-6.1.85+-x86_64-with-glibc2.35
                  Path: /usr/local/lib/python3.10

Additional Context

I just want to write a simple tutorial for Machine Learning Mastery, and it is getting tough for me to even start.

kingabzpro commented 1 month ago

@jlowin here is the colab notebook link: https://colab.research.google.com/drive/1dXPRoyw7a5KiHoKIMjQm4eVttycwo3FA?usp=sharing

I am facing the same issue on Windows too.

jlowin commented 1 month ago

This one is super weird: I get the error in the notebook too, but not locally. It has something to do with LangChain being stuck on Pydantic v1; I will look into it.
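
For context, the traceback points at `response.json(indent=2)`: in Pydantic v2 the deprecated `.json()` no longer forwards keyword arguments like `indent` (that's exactly this TypeError), and the replacement is `model_dump_json(indent=2)`. A standalone illustration, not the ControlFlow code itself:

from pydantic import BaseModel

class Message(BaseModel):
    content: str

msg = Message(content="hello")

# Pydantic v2: passing dumps_kwargs such as indent to the deprecated .json()
# raises TypeError('`dumps_kwargs` keyword arguments are no longer supported.')
# msg.json(indent=2)

# Pydantic v2 replacement for pretty-printed JSON output:
print(msg.model_dump_json(indent=2))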

kingabzpro commented 1 month ago

@jlowin I am getting it in Windows too.

Resolved the subprocess issue on Windows by setting PREFECT_API_URL="http://127.0.0.1:4200/api".
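
(For the tutorial, a sketch of one way to apply this from Python, assuming the variable just needs to be in the environment before ControlFlow and Prefect are imported:)

import os

# Assumption: set this before importing controlflow / prefect so the Prefect
# client picks it up.
os.environ["PREFECT_API_URL"] = "http://127.0.0.1:4200/api"

import controlflow as cf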

But I got another issue:

  File "C:\Users\abida\anaconda3\envs\py310\lib\site-packages\controlflow\agents\agent.py", line 308, in _run_model
    {response.json(indent=2)}
  File "C:\Users\abida\anaconda3\envs\py310\lib\site-packages\pydantic\main.py", line 1119, in json       
    raise TypeError('`dumps_kwargs` keyword arguments are no longer supported.')
TypeError: `dumps_kwargs` keyword arguments are no longer supported.
   ControlFlow version: 0.9.3
       Prefect version: 3.0.2
LangChain Core version: 0.3.1
        Python version: 3.10.14
              Platform: Windows-10-10.0.22631-SP0
                  Path: C:\Users\abida\anaconda3\envs\py310\Lib

jlowin commented 1 month ago

@kingabzpro thanks for bringing this up. It looks like the langchain-core release yesterday (?) completely changed their Pydantic support, so all the hoops we were jumping through to convert from Pydantic v1 to Pydantic v2 are breaking, and all CF installs from the last day or so are probably broken. I've pinned langchain-core and will release the fix ASAP while I work on upgrading LangChain support...

kingabzpro commented 1 month ago

@jlowin thank you. I guess I have to postpone the tutorial now.

jlowin commented 1 month ago

CF 0.9.4 is out and pins langchain, and I've confirmed the Colab notebook runs properly with GPT-4o.

jlowin commented 1 month ago

Ok, and I understand why mini was erroring. You provided four tweets but asked for the result to be either hate or love as a constrained choice (i.e. choose a single option from the provided labels). Note that this means you want the output of the entire task to be either the single string "hate" or the single string "love". GPT-4o was doing what you asked and returning a single string answer. GPT-4o mini was attempting to run the task 4 times, once for each input, which isn't allowed (once a task is done, it's done!).

Arguably neither model was correct, because in both cases you were providing 4 tweets but only asking for a single string label. To let the agent provide one label per tweet, you need to indicate that you expect a list as your result type. Here is how to do that:

from typing import Literal

# Set up a ControlFlow task to classify tweets
classifications = cf.run(
    'Classify the tweets',
    result_type=list[Literal['hate', 'love']],
    agents=[classifier],
    context=dict(tweets=tweets),
)

print(classifications)

This indicates "the result is a list of values selected from the literals hate and love" but doesn't say how many, allowing you to pass as many tweets as you like (4 in your case).

Since the result is now a list, rather than forcing the agent to choose just one of the strings, both GPT-4o and GPT-4o mini will correctly classify all four tweets with a list of four responses.
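
If you want to show per-tweet results in the tutorial, something like this works, assuming the labels come back in the same order as the input tweets:

# Assumes `tweets` and `classifications` from the snippets above, with one
# label per tweet in input order.
for tweet, label in zip(tweets, classifications):
    print(f"{label:>4}: {tweet}")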

jlowin commented 1 month ago

I'm going to close the issue since the Colab MRE is resolved. https://github.com/PrefectHQ/ControlFlow/issues/319 is open as a tracking issue for addressing the breaking changes that LangChain 0.3 introduced. Thanks again for bringing that to my attention!

jlowin commented 1 month ago

I've updated the documentation to (hopefully) be clearer on the difference between requesting a single label and multiple labels. In the future we can try to detect and optimize the multi-label case the same way we do the single-label classifier (which uses only a single token when possible).

PR: https://github.com/PrefectHQ/ControlFlow/pull/320/files

Live docs: https://controlflow.ai/patterns/task-results#a-list-of-labels

kingabzpro commented 1 month ago

Thank you. I will be working all night to finish the tutorial. Currently busy in a meeting.

kingabzpro commented 1 month ago

@jlowin I think we have another issue.

here is the link: https://colab.research.google.com/drive/1dXPRoyw7a5KiHoKIMjQm4eVttycwo3FA?usp=sharing#scrollTo=xlCP4aHroOaU

RuntimeError: Timed out while attempting to connect to ephemeral Prefect API server.

jlowin commented 1 month ago

That's a known issue with the ephemeral server that the Prefect team is working on (cc @aaazzam). It's challenging to replicate because it's stochastic: essentially, for some reason the server just takes a little too long to start, so the first request fails. Running a separate server with `prefect server start` will solve it for now (or retrying with an ephemeral server, which isn't as satisfying a response, I know).
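
If you want to stay on Colab in the meantime, a rough retry wrapper along these lines (run_with_retries is just a made-up helper, not a ControlFlow API) can paper over the slow startup:

import time
import controlflow as cf

def run_with_retries(objective, attempts=3, delay=5, **kwargs):
    # Hypothetical helper: retry cf.run() when the ephemeral Prefect API
    # server is slow to start, and re-raise any other error immediately.
    for attempt in range(attempts):
        try:
            return cf.run(objective, **kwargs)
        except RuntimeError as exc:
            if "ephemeral Prefect API server" not in str(exc) or attempt == attempts - 1:
                raise
            time.sleep(delay)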

kingabzpro commented 1 month ago

Got it. I guess I won't be using Colab then.

kingabzpro commented 1 month ago

@jlowin I am just sorry you have to go through so many of my issues in a day. That's why companies get scared of me: when I touch a product, I break it in so many ways.

jlowin commented 1 month ago

At least with CF you're hitting onboarding issues I want to solve.

kingabzpro commented 1 month ago

@jlowin can you help me resolve this issue?

Link: https://colab.research.google.com/drive/1dXPRoyw7a5KiHoKIMjQm4eVttycwo3FA?usp=sharing#scrollTo=3XLz5wVbw2u7

The code:

import controlflow as cf

@cf.flow
def analyze_text(text: str):

    # Create a parent task to represent the entire analysis
    with cf.Task(
        "Analyze the given text",
        instructions="Include each subtask result in your result",
        result_type=dict,
        context={"text": text}
    ) as parent_task:

        # Child task 1: Identify key terms
        key_terms = cf.Task(
            "Identify up to 10 key terms in the text",
            result_type=list[str]
        )

        # Child task 2: Summarize (depends on key_terms)
        summary = cf.Task(
            "Summarize the text in one sentence",
            result_type=str,
            depends_on=[key_terms]
        )

    # Run the parent task, which will automatically run all child tasks
    result = parent_task.run()
    return result

# Execute the flow
text = """
    Agentic workflow orchestration refers to the coordination of autonomous
    agents within a structured workflow, allowing them to operate independently
    while achieving a common objective. Unlike traditional workflows that rigidly
    define tasks and dependencies, agentic workflows empower agents—typically
    AI-driven—to make decisions, prioritize tasks, and collaborate dynamically.
    Each agent in this system operates with a degree of autonomy, enabling it to
    adapt to changing conditions, handle uncertainties, and optimize its own
    actions within the broader workflow. This approach enhances flexibility and
    scalability, making it particularly effective for complex, multi-step
    processes where real-time adjustments and intelligent decision-making are
    crucial. By leveraging agents with defined roles and responsibilities, agentic
    workflows maintain structure while enabling innovation and responsiveness in
    task execution.
    """

result = analyze_text(text)
print(result)

16:49:48.656 | ERROR   | Task run 'Tool call: mark_task_0a14d134_successful' - Task run failed with exception: TypeError("Task.create_success_tool.<locals>.succeed() missing 1 required positional argument: 'result'") - Retries are exhausted
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/prefect/task_engine.py", line 752, in run_context
    yield self
  File "/usr/local/lib/python3.10/dist-packages/prefect/task_engine.py", line 1302, in run_task_sync
    engine.call_task_fn(txn)
  File "/usr/local/lib/python3.10/dist-packages/prefect/task_engine.py", line 775, in call_task_fn
    result = call_with_parameters(self.task.fn, parameters)
  File "/usr/local/lib/python3.10/dist-packages/prefect/utilities/callables.py", line 206, in call_with_parameters
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/controlflow/tools/tools.py", line 62, in run
    result = self.fn(**input)
TypeError: Task.create_success_tool.<locals>.succeed() missing 1 required positional argument: 'result'
16:49:48.702 | ERROR   | Task run 'Tool call: mark_task_0a14d134_successful' - Finished in state Failed("Task run encountered an exception TypeError: Task.create_success_tool.<locals>.succeed() missing 1 required positional argument: 'result'"

jlowin commented 1 month ago

So this is actually an OpenAI bug that we noticed last week.

Now, I think your flow actually ran correctly: what happens is that the agent messes up the first call by passing no args, we tell it that it messed up, and it tries again. So your code block ran properly and result has a value, even though GPT-4 messes this up.

BUT the tool call nonetheless failed the first time, and that's the error being logged. That's probably a little too aggressive, and we want to tone it back; let me think through the right way to control verbosity for transient errors like that. There's a debug setting `cf.settings.tools_raise_on_error` that would ACTUALLY raise an error there, and an existing one `cf.settings.tools_verbose` that doesn't attempt to catch this. I'll think on it.
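
For reference, toggling those settings looks roughly like this; take the exact behavior from the description above rather than as documented guarantees:

import controlflow as cf

# Settings named above; their exact behavior is as described in this comment,
# not verified documentation.
cf.settings.tools_raise_on_error = True  # actually raise when a tool call fails
cf.settings.tools_verbose = True         # log tool-call activity in more detail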

If your flow actually raised an error here, instead of just logging one, that would surprise me; agents should re-attempt tool calls. The agent does have the ability to declare that the task failed, but that would be a different error.

jlowin commented 1 month ago

btw, if that's the case, this is actually one of my favorite examples of why a workflow framework matters for agents :) they don't always interact with the real world correctly, even within their own API!

kingabzpro commented 1 month ago

@jlowin am I creating a case study for you? 😅😅🤫🤫

kingabzpro commented 1 month ago

All I wanted was to test a few things and write a detailed review/tutorial about CF. But this is life. Nothing good comes easy.