konveyor / kai

Konveyor AI - static code analysis driven migration to new targets via Generative AI
Apache License 2.0

Google Gemini: LLM requests are not being cached #315

Open jwmatthews opened 3 weeks ago

jwmatthews commented 3 weeks ago

I am testing #307 and confirmed that while I see successful responses coming back to run_demo.py, no data is being written to disk for cached responses.

I ran with DEMO_MODE set to True, and also with its default value of False.
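For reference, the expected behavior is that each LLM response gets persisted to disk so repeat requests can be served from the cache. A minimal sketch of that pattern is below; the function and file-layout names (`cached_completion`, `llm_cache/`) are hypothetical and do not reflect Konveyor AI's actual implementation.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical cache directory; kai's real layout may differ.
CACHE_DIR = Path("llm_cache")

def cache_key(prompt: str, model: str) -> str:
    """Derive a stable filename from the request parameters."""
    digest = hashlib.sha256(f"{model}\n{prompt}".encode("utf-8")).hexdigest()
    return f"{digest}.json"

def cached_completion(prompt: str, model: str, call_llm) -> str:
    """Return a cached response if one exists; otherwise call the LLM and persist the result."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    path = CACHE_DIR / cache_key(prompt, model)
    if path.exists():
        return json.loads(path.read_text())["response"]
    response = call_llm(prompt)
    path.write_text(json.dumps({"model": model, "prompt": prompt, "response": response}))
    return response
```

With this pattern, the symptom described above (successful responses but nothing written to disk) would point at the write step never being reached, or the cache lookup being bypassed entirely.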

Note: I also saw an odd gRPC warning in the logs (#313); I am unsure whether it is related.