crewAIInc / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com
MIT License
21.8k stars 3.02k forks

Question about CrewAI compatibility with LangChain caching #886

Open ItouTerukazu opened 4 months ago

ItouTerukazu commented 4 months ago

Description: I'm using CrewAI together with LangChain and have noticed that LangChain's caching mechanism doesn't seem to work as expected when used with CrewAI. I have the following questions:

  1. Is CrewAI designed to be compatible with LangChain's caching mechanism?
  2. Are there any known issues or limitations when using LangChain caching with CrewAI?
  3. Are there any specific configurations or settings required to enable effective caching when using CrewAI?
  4. If caching is not currently supported, are there plans to implement this feature in future versions?

Environment:

Steps to reproduce:

  1. Set up LangChain caching using set_llm_cache(InMemoryCache())
  2. Create a CrewAI instance with agents and tasks
  3. Run the same crew operation twice
  4. Observe that the second run does not use the cache and makes a new LLM call

Expected behavior: The second run should use the cached results from the first run, significantly reducing execution time and avoiding a new LLM call.

Actual behavior: Both runs make separate LLM calls and take similar amounts of time to execute.

Any insights or guidance on this matter would be greatly appreciated. Thank you for your time and assistance!

github-actions[bot] commented 3 months ago

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.