-
Description:
I'm using CrewAI with LangChain and have noticed that the LangChain caching mechanism doesn't seem to be working as expected when used with CrewAI. I have the following questions:
1. …
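For context, this is roughly how I'd expect LangChain's cache to be enabled before the crew runs. It's a minimal sketch, not my exact setup: the model name and in-memory backend are placeholders, and the import path for `InMemoryCache` may differ across LangChain versions.

```python
from langchain.globals import set_llm_cache
from langchain.cache import InMemoryCache  # import path may differ in newer LangChain releases
from langchain_openai import ChatOpenAI

# Enable LangChain's global LLM cache before the crew makes any model calls,
# so repeated identical prompts should be answered from the cache.
set_llm_cache(InMemoryCache())

llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model, later passed to the CrewAI agents
```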
-
Hi, the script is becoming a separate, valuable system now. Great!! I just found an issue. I have an Ollama server running as a separate instance. It's accessible on the default port, but during crew creation it turns out it's l…
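A minimal sketch of what I'd expect to be able to do, assuming crewAI still accepts a LangChain Ollama wrapper with an explicit `base_url` (the host and model names are placeholders, not my actual setup):

```python
from langchain_community.llms import Ollama
from crewai import Agent

# Point the Ollama wrapper at the separate instance explicitly,
# instead of relying on the default localhost:11434 assumption.
ollama_llm = Ollama(
    base_url="http://my-ollama-host:11434",  # placeholder host
    model="llama2",                          # placeholder model name
)

agent = Agent(
    role="Researcher",
    goal="Demonstrate using a remote Ollama server",
    backstory="Minimal example agent.",
    llm=ollama_llm,
)
```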
-
I'm running crewAI in a Docker container on an AWS Lambda Function, where the only writable directory path is `/tmp`.
Since updating crewAI, I'm now getting the following error.
`OSError: [E…
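This is a sketch of the kind of workaround I'd expect to need, redirecting writable locations to `/tmp` before crewAI is imported. The `CREWAI_STORAGE_DIR` variable is an assumption based on newer releases; the other variables just point common home/data-directory conventions at the only writable path in a Lambda container.

```python
import os

# Lambda containers only allow writes under /tmp, so point anything that
# resolves a home or data directory there before crewAI (and its deps) load.
os.environ["HOME"] = "/tmp"
os.environ["XDG_DATA_HOME"] = "/tmp"
# Assumption: newer crewAI versions honor this variable for memory/storage files.
os.environ["CREWAI_STORAGE_DIR"] = "/tmp"

from crewai import Agent, Task, Crew  # import only after the environment is adjusted
```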
-
28.38 INFO: pip is looking at multiple versions of chromadb to determine which version is compatible with other requirements. This could take a while.
28.38 Collecting chromadb=0.4.22 (from crewai-to…
-
from crewai_tools import LlamaIndexTool  # import needed for the call below

# Attempt to initialize query_tool
query_tool = LlamaIndexTool.from_query_engine(
    query_engine,
    name="Knowledge Graph Tool",
    description="Use th…
-
I suspect this is a _pydantic_ bug and not primarily a _crewai_ issue; however, since _crewai_ is tightly bound to _pydantic_, it is worth discussing here.
My bug report to _Pydantic_ comes from code …
-
### Description
I'm currently working on a project where I'm using Crew AI agents with chunking and context retention. As part of this process, I'm attempting to implement long-term memory for the ag…
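For reference, a minimal sketch of how I understand memory is enabled at the crew level, assuming the built-in `memory=True` flag covers the long-term memory store (the agent and task definitions are placeholders):

```python
from crewai import Agent, Task, Crew, Process

analyst = Agent(
    role="Analyst",
    goal="Retain context across runs",
    backstory="Placeholder agent used to exercise long-term memory.",
)
task = Task(
    description="Summarize the previous findings and add new ones.",
    expected_output="A running summary.",
    agent=analyst,
)

crew = Crew(
    agents=[analyst],
    tasks=[task],
    process=Process.sequential,
    memory=True,  # assumption: enables crewAI's built-in short/long-term memory stores
)
result = crew.kickoff()
```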
-
I understand that the crewai code makes several calls using the OpenAI key, but I didn't expect to generate so many tokens running this 4-5 times (623k). Is this expected? Or did I do something wro…
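For anyone else measuring this, a minimal sketch of what I understand should report token usage and cap agent iterations, assuming `usage_metrics`, `max_iter`, and `max_rpm` behave as documented (agent and task contents are placeholders):

```python
from crewai import Agent, Task, Crew

writer = Agent(
    role="Writer",
    goal="Produce a short summary",
    backstory="Placeholder agent for measuring token usage.",
    max_iter=5,   # assumption: caps the agent's reasoning-loop iterations
    max_rpm=10,   # assumption: throttles requests per minute
)
task = Task(
    description="Summarize a paragraph.",
    expected_output="Two sentences.",
    agent=writer,
)
crew = Crew(agents=[writer], tasks=[task])

crew.kickoff()
print(crew.usage_metrics)  # assumption: reports prompt/completion token totals after a run
```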
-
File "/usr/local/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/crewai/crew.py", line 192, in kickoff
return self._run_sequential_process()
File "/usr/loca…
-
Even though code generation looks like the best way to keep the tool useful and independent of runtimes other than AutoGen, I've seen problems such as efficiency and observability issues. There…