codefuse-ai / ModelCache

An LLM semantic caching system aiming to enhance user experience by reducing response time via cached query-result pairs.

Can ModelCache be used in FastChat? #43

Open 3togo opened 5 months ago

3togo commented 5 months ago

I am looking for a way to use ModelCache in FastChat to speed up LLM inference. Any pointers?

peng3307165 commented 2 months ago

I think the integration is possible. You can refer to the Service-Access section in the readme: ModelCache provides data write and query interfaces. To manage the cache properly, you would need to implement a ModelCache Adapter module inside FastChat (see the left side of the modules diagram in the readme) for cache management. In future work we also plan to add a ModelCache Adapter module so that users can quickly integrate ModelCache into their own LLM chat products, so please keep following our progress. Thank you for your attention, best wishes!
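
Until that module ships, a minimal sketch of such an adapter might look like the following. It assumes ModelCache's demo HTTP service from the readme is running locally; the endpoint URL, payload fields, response schema, and the `generate_with_llm` hook are illustrative and should be verified against your ModelCache version:

```python
import json
import requests

# Assumed demo-service endpoint; adjust to your ModelCache deployment.
CACHE_URL = "http://127.0.0.1:5000/modelcache"
HEADERS = {"Content-Type": "application/json"}


def cache_query(model: str, query: str):
    """Ask ModelCache for a semantically similar cached answer; None on miss."""
    payload = {"type": "query", "scope": {"model": model}, "query": query}
    res = requests.post(CACHE_URL, headers=HEADERS, data=json.dumps(payload))
    result = json.loads(res.text)
    # Hit/miss fields vary by ModelCache version; treat anything unclear as a miss.
    if result.get("errorCode") == 0 and result.get("answer"):
        return result["answer"]
    return None


def cache_insert(model: str, query: str, answer: str):
    """Write a fresh query/answer pair back into the cache."""
    payload = {
        "type": "insert",
        "scope": {"model": model},
        "chat_info": [{"query": query, "answer": answer}],
    }
    requests.post(CACHE_URL, headers=HEADERS, data=json.dumps(payload))


def cached_generate(model: str, prompt: str, generate_with_llm):
    """Adapter entry point for FastChat: serve from cache, else run the LLM.

    `generate_with_llm` is a hypothetical stand-in for whatever FastChat
    function actually performs inference for `prompt`.
    """
    hit = cache_query(model, prompt)
    if hit is not None:
        return hit  # cache hit: skip inference entirely
    answer = generate_with_llm(prompt)
    cache_insert(model, prompt, answer)  # populate cache for future queries
    return answer
```

The key design point is that the adapter wraps FastChat's generation call rather than modifying it: every request first probes the cache, and only misses reach the model, which is where the latency savings come from.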