EmergenceAI / Agent-E

Agent driven automation starting with the web. Discord: https://discord.gg/wgNfmFuqJF
MIT License

[BUG] Too Many Requests #30

Open nasko94 opened 1 month ago

nasko94 commented 1 month ago

Title

Too Many Requests

Description

Whenever I ask it to do almost anything on a site that requires fetching data from the page, it returns [2024-05-12 20:02:43,316] INFO {_client.py:1026} - HTTP Request: POST https://api.groq.com/openai/v1/chat/completions "HTTP/1.1 429 Too Many Requests". This is with llama3-70b from Groq.

Steps to Reproduce

  1. Go to https://www.rainbowshops.com/products/blue-linen-sleeveless-romper-1408077211482
  2. Click a size variant

Expected Behavior

The selected size variant is clicked on the page.

Actual Behavior

The 429 error line above is printed in the console instead.

Environment

Additional Context

I have no idea.

teaxio commented 1 month ago

@nasko94 so you are saying a command like: go to https://www.rainbowshops.com/products/blue-linen-sleeveless-romper-1408077211482 and click on the small size, right?

teaxio commented 1 month ago

I tried the example above and here is what I see: llama3 does not do well with DOM analysis. It literally makes up a query selector, [mmid='size-small'] (you can see this in the chat logs, ae/log_files/chat_messages.json). This results in multiple calls back to the Groq API. My guess is that because of how quickly we retry, Groq raises a rate-limiting alarm (too many requests per minute). For the free tier, Groq's limit is 6k tokens per minute. Because llama3 does not figure out the right HTML element to interact with from the smaller DOM, Agent-E reverts to using a larger version of the DOM. I am pretty sure that, between all of that, more than 6k tokens are sent per minute :)
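A common client-side mitigation for 429 responses like this is to retry with exponential backoff instead of immediately re-issuing the call. Below is a minimal sketch of that pattern; `call_with_backoff`, `RateLimitError`, and `flaky_completion` are hypothetical names for illustration, not part of Agent-E or the Groq SDK:

```python
import time


class RateLimitError(Exception):
    """Stand-in for an HTTP 429 'Too Many Requests' response."""


def call_with_backoff(fn, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying with exponential backoff on RateLimitError.

    Waits base_delay, 2*base_delay, 4*base_delay, ... between attempts
    and re-raises if the last attempt still fails.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            sleep(base_delay * (2 ** attempt))


# Demo: a fake completion call that fails twice with a simulated 429,
# then succeeds on the third attempt.
calls = {"n": 0}

def flaky_completion():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("429 Too Many Requests")
    return "ok"

result = call_with_backoff(flaky_completion, sleep=lambda s: None)
```

This would not fix llama3 inventing selectors, but it would space out the repeated calls enough to stay under a per-minute rate limit.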