frdel / agent-zero

Agent Zero AI framework

Stacktrace in memorize_solutions.py #204

Open jonny7737 opened 4 days ago

jonny7737 commented 4 days ago

I asked for a transcript of a 10-minute YouTube video. After figuring out how to ask for it, I got a partial response (it stops at about 1:56) and this stack trace in the launch terminal:

[01:59
No relevant docs were retrieved using the relevance score threshold 0.9
Task exception was never retrieved
future: <Task finished name='Task-41380' coro=<MemorizeSolutions.memorize() done, defined at /Users/xxx/Desktop/AI Dev/agent-zero/python/extensions/monologue_end/_51_memorize_solutions.py:30> exception=TypeError("'int' object is not subscriptable")>
Traceback (most recent call last):
  File "/Users/xxx/Desktop/AI Dev/agent-zero/python/extensions/monologue_end/_51_memorize_solutions.py", line 63, in memorize
    txt = f"# Problem\n {solution['problem']}\n# Solution\n {solution['solution']}"
                         ~~~~~~~~^^^^^^^^^^^
TypeError: 'int' object is not subscriptable
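
For context on the exception: line 63 indexes each entry of `solutions` with string keys, so the error means at least one entry came back as a plain int instead of the expected dict with "problem" and "solution" keys (presumably malformed output from the utility model). A minimal, hypothetical reproduction of the same failure (not agent-zero code):

    # Hypothetical reproduction: subscripting an int with a string key
    # raises exactly the TypeError shown in the traceback above.
    solutions = [{"problem": "p", "solution": "s"}, 42]  # 42 stands in for a malformed entry
    for solution in solutions:
        txt = f"# Problem\n {solution['problem']}\n# Solution\n {solution['solution']}"
        # -> TypeError: 'int' object is not subscriptable on the second entry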
FarVision2 commented 4 days ago

Oof, why bother diagnosing it yourself?

Get Cursor.sh.

Load the Cline (claude-dev) extension: https://marketplace.visualstudio.com/items?itemName=saoudrizwan.claude-dev

Drop a few bucks into the OpenAI API, OpenRouter, or Google Gemini.

I like to put it on the right tab. I don't use any of the native Cursor items any longer.

gpt-4o-mini is wonderful for diagnostics. OpenAI has native prompt caching. OpenRouter does not. But it's so cheap it doesn't really matter.

For ANY error, just hit the red dot and click Copy Command And Output, paste it into the Cline chat window, and hit enter. Let it figure it out. It works great for changing anything else you want, too. In another agentic product that used only OpenAI (a personal pet peeve), I had it integrate the entire Google API ecosystem: Vertex API, Generative API, and Embedding. Works great.

Also... take a look at https://artificialanalysis.ai/models

Jan figured it out based on screenshots :) the new Gemini from Sept '24:

    chat_llm = models.get_google_chat(model_name="gemini-1.5-flash-002", temperature=0)
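
If anyone wants to try the same swap, that line goes wherever chat_llm is configured in your agent-zero checkout (initialize.py in recent versions; the exact file and surrounding setup are assumptions here), roughly:

    # Sketch: pointing agent-zero's chat model at Gemini 1.5 Flash 002.
    # Assumes the repo-local models helper; placement may differ per version.
    import models

    chat_llm = models.get_google_chat(
        model_name="gemini-1.5-flash-002",  # the Sept '24 Flash release
        temperature=0,
    )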

It is amazingly performant. It's been running nonstop yesterday and today and I've spent something like .25c.

It does not work well for Cline diagnostics (the tooling prompts don't match), but it does work well for agentic processing. It is the best deal running right now, hands down.

jonny7737 commented 4 days ago

Thanks, but I have spent decades fixing other people's code. No time for it now.

Fix it. Don't fix it. I don't care. At least the issue has been raised.

Good luck my friend.

FarVision2 commented 4 days ago

        solutions_txt = ""
        rem = []
        for solution in solutions:
            # solution to plain text:
            txt = f"# Problem\n {solution['problem']}\n# Solution\n {solution['solution']}"
            solutions_txt += txt + "\n\n"

Well... it looks like a memorized solution that it holds internally didn't come back in the right shape. I'm not sure how this is a codebase issue; I would just try something different.
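
For what it's worth, a more defensive version of that loop (a sketch only, untested against the repo) would skip malformed entries instead of letting the background task die:

    # Sketch of a defensive loop for _51_memorize_solutions.py:
    # ignore entries that are not dicts with the expected keys rather than
    # crashing the whole memorize task with a TypeError.
    solutions_txt = ""
    rem = []
    for solution in solutions:
        if not isinstance(solution, dict) or not {"problem", "solution"} <= solution.keys():
            # malformed entry from the utility model; skip it
            continue
        # solution to plain text:
        txt = f"# Problem\n {solution['problem']}\n# Solution\n {solution['solution']}"
        solutions_txt += txt + "\n\n"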