codefuse-ai / ModelCache

An LLM semantic caching system that aims to enhance user experience by reducing response time via cached query-result pairs.
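The idea behind a semantic cache is that a new query is served from the cache when it is *similar enough* to a previously answered one, not only when it matches exactly. A minimal sketch of that lookup, using a toy bag-of-words embedding and cosine similarity (the `SemanticCache` class, the `threshold` parameter, and the embedding are illustrative assumptions, not ModelCache's actual API, which uses neural embeddings and a vector index):

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real cache would use a neural model.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class SemanticCache:
    """Illustrative semantic cache: returns a stored result for similar queries."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, query, result)

    def put(self, query: str, result: str) -> None:
        self.entries.append((embed(query), query, result))

    def get(self, query: str):
        q = embed(query)
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best and cosine(q, best[0]) >= self.threshold:
            return best[2]  # cache hit: the LLM call is skipped
        return None  # cache miss: caller queries the LLM and calls put()
```

On a hit the cached answer is returned immediately, which is where the response-time saving comes from; on a miss the caller falls through to the LLM and stores the new pair.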

Correct the prefix issue with index and remove redundant comments. #32

Closed by peng3307165 6 months ago