-
I am trying to run the code below, but I am getting `Error code: 401 - {'error': {'message': 'Incorrect API key provided`.
Could anyone help me with this?
from crewai import A…
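A 401 "Incorrect API key provided" from an OpenAI-backed client usually means the key the process actually sees is empty or malformed. A minimal pre-flight check, assuming (as with most OpenAI-backed stacks, crewai included) that the key is read from the `OPENAI_API_KEY` environment variable, could look like this; `check_openai_key` is a hypothetical helper, not part of crewai:

```python
import os

def check_openai_key() -> bool:
    """Return True if OPENAI_API_KEY looks like a plausible OpenAI key.

    Assumption: real OpenAI keys start with "sk-". An empty, unset,
    or whitespace-padded value commonly produces the 401
    "Incorrect API key provided" error seen above.
    """
    key = os.environ.get("OPENAI_API_KEY", "").strip()
    return key.startswith("sk-")
```

Running this before constructing any agents can distinguish a missing/mistyped key from a revoked one.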
-
### System Info
**OS version**: MacOS Sequoia 15.0
My *pyproject.toml*
```
[project]
name = "pandasai-benchmark"
version = "0.1.0"
description = "Add your description here"
readme = "READM…
```
-
[GIN] 2024/09/16 - 13:05:29 | 403 | 39.122µs | 127.0.0.1 | POST "/api/embeddings"
9/16/2024, 12:54:28 PM [CONVEX A(aiTown/agentOperations:agentGenerateMessage)] [LOG] 'Sending data for…
-
I admire your contributions, @ggerganov, and hopefully this project will be a hit as well.
What about a Pair/Mob programming mode, where an LLM agent takes over as the driver?
-
### System Info
Windows 11
Python 3.11.4
pandasai 2.0.24
### 🐛 Describe the bug
I was using PandasAI, and it was working perfectly. But out of the blue it stopped working, and it's not working anym…
-
{"message": "Error in _stream_synthesis_task\nTraceback (most recent call last):\n File \"/root/pythonenv/enve/lib/python3.10/site-packages/livekit/agents/utils/log.py\", line 16, in async_fn_logs\n …
-
### Describe the bug
What in the world is this? My code was working just fine until today. I tried to register a function with an assistant agent using .register_for_llm().
Full error:
Erro…
-
### Describe the bug
When the max_tokens parameter is None, the agent sends a request to /v1/chat/completions with max_tokens: null.
In this case the LLM doesn't understand it and stops after the second tok…
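A common workaround for this class of bug, sketched here under the assumption that the request body is built as a plain dict before serialization, is to drop `max_tokens` (and any other `None`-valued field) entirely rather than send a JSON `null`; `build_completion_payload` is a hypothetical helper, not an API of the project above:

```python
def build_completion_payload(model: str, messages: list, max_tokens=None) -> dict:
    """Build a /v1/chat/completions request body, omitting unset fields.

    Including "max_tokens": null in the JSON body confuses some servers;
    leaving the key out lets the server apply its own default instead.
    """
    payload = {
        "model": model,
        "messages": messages,
        "max_tokens": max_tokens,
    }
    # Strip every None-valued field so it never serializes to null.
    return {k: v for k, v in payload.items() if v is not None}
```

With this approach the key is present only when the caller explicitly sets a limit.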
-
It would be great to have an example of how to integrate this with llama. It's mentioned in the docs, but there is no example of how to use it with llama (or ollama).
-
From the sample code, I can trace the following execution logic. From the process perspective, it obtains the current result and the next action through LLM execution, and continues to execute the…
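The loop described above (LLM proposes the next action from the current result, the action is executed, and its output feeds the next LLM call) can be sketched roughly as follows; `call_llm` and `execute` are hypothetical stand-ins for the project's actual LLM call and action executor, not its real API:

```python
def run_agent(task: str, call_llm, execute, max_steps: int = 10) -> str:
    """Minimal sketch of the observe-decide-act loop described above.

    call_llm(observation) -> dict: either {"final": True, "answer": ...}
    or {"final": False, ...} describing the next action to run.
    execute(action) -> str: the result of running that action, which
    becomes the observation for the next LLM call.
    """
    observation = task
    for _ in range(max_steps):
        # The LLM decides the next action from the current result.
        action = call_llm(observation)
        if action.get("final"):
            return action["answer"]
        # Execute the chosen action and feed its result back in.
        observation = execute(action)
    # Step budget exhausted; return whatever we have.
    return observation
```

The `max_steps` bound is a common safeguard so a looping LLM cannot run forever.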