codefuse-ai / ModelCache

An LLM semantic caching system aiming to enhance user experience by reducing response time via cached query-result pairs.
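A minimal sketch of the semantic-caching idea described above, not ModelCache's actual API: queries are embedded, and a new query reuses a cached answer when its embedding is close enough to one already stored. The `embed` callable and the similarity threshold are placeholder assumptions.

```python
# Hypothetical illustration of a semantic cache; not ModelCache's real interface.
import math
from typing import Callable, List, Optional, Tuple


def cosine(a: List[float], b: List[float]) -> float:
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


class SemanticCache:
    def __init__(self, embed: Callable[[str], List[float]], threshold: float = 0.9):
        self.embed = embed            # assumption: caller supplies an embedding function
        self.threshold = threshold    # assumption: similarity cutoff for a cache hit
        self.entries: List[Tuple[List[float], str]] = []  # (query embedding, cached answer)

    def get(self, query: str) -> Optional[str]:
        # Return a cached answer whose stored query is similar enough, else None.
        q = self.embed(query)
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best and cosine(q, best[0]) >= self.threshold:
            return best[1]
        return None

    def put(self, query: str, answer: str) -> None:
        # Store the query embedding together with the LLM's answer for later reuse.
        self.entries.append((self.embed(query), answer))
```

On a cache hit, the LLM call is skipped entirely, which is where the response-time saving comes from.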

Correct the prefix issue with index and remove redundant comments. #32

Closed. peng3307165 closed this 4 months ago.