Description:
I'm using CrewAI with LangChain and have noticed that LangChain's caching mechanism does not take effect when LLM calls are made through CrewAI. I have the following questions:
1. Is CrewAI designed to be compatible with LangChain's caching mechanism?
2. Are there any known issues or limitations when using LangChain caching with CrewAI?
3. Are there any specific configurations or settings required to enable effective caching when using CrewAI?
4. If caching is not currently supported, are there plans to implement this feature in future versions?
Environment:
Steps to reproduce:
1. Enable LangChain's global in-memory cache before running the crew:

```python
from langchain.globals import set_llm_cache
from langchain_community.cache import InMemoryCache

set_llm_cache(InMemoryCache())
```

2. Run the same crew a second time with identical inputs (a fuller sketch of the whole flow is below).
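For reference, here is a minimal end-to-end sketch of the reproduction. The agent, task, and model details below are illustrative stand-ins, not the exact code from my project:

```python
import time

from langchain.globals import set_llm_cache
from langchain_community.cache import InMemoryCache
from langchain_openai import ChatOpenAI
from crewai import Agent, Task, Crew

# Enable LangChain's global in-memory cache before any LLM calls.
set_llm_cache(InMemoryCache())

# temperature=0 keeps outputs deterministic, which caching relies on.
llm = ChatOpenAI(model="gpt-4", temperature=0)

# Illustrative single-agent crew.
researcher = Agent(
    role="Researcher",
    goal="Summarize a topic in one paragraph",
    backstory="An analyst who writes concise summaries.",
    llm=llm,
)
task = Task(
    description="Summarize the benefits of LLM response caching.",
    expected_output="A one-paragraph summary.",
    agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[task])

# Run the identical crew twice and compare wall-clock time.
for run in (1, 2):
    start = time.time()
    crew.kickoff()
    print(f"Run {run}: {time.time() - start:.1f}s")
# Expected: run 2 returns almost instantly from the cache.
# Observed: both runs take a similar amount of time.
```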
Expected behavior: The second run should use the cached results from the first run, significantly reducing execution time and avoiding a new LLM call.
Actual behavior: Both runs make separate LLM calls and take similar amounts of time to execute.
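For comparison, LangChain's documented behavior is that the in-memory cache serves a repeated identical prompt without a second API call when the model is invoked directly, which would suggest the problem is specific to how CrewAI routes its LLM calls. A sketch of that baseline (model name is illustrative):

```python
from langchain.globals import set_llm_cache
from langchain_community.cache import InMemoryCache
from langchain_openai import ChatOpenAI

set_llm_cache(InMemoryCache())
llm = ChatOpenAI(model="gpt-4", temperature=0)

llm.invoke("Explain LLM caching in one sentence.")  # first call hits the API
llm.invoke("Explain LLM caching in one sentence.")  # identical prompt: served from the cache
```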
Any insights or guidance on this matter would be greatly appreciated. Thank you for your time and assistance!