codefuse-ai / ModelCache

An LLM semantic caching system that aims to improve user experience by reducing response time through cached query-result pairs.

[Coding Challenge Season] FastAPI interface support #53

Open peng3307165 opened 1 month ago

peng3307165 commented 1 month ago
  1. Problem summary: web service framework migration. ModelCache currently exposes its service through Flask; the goal is to migrate it to the more widely used FastAPI framework.
  2. Expected outcome: convert the four files flask4modelcache.py, flask4modelcache_demo.py, flask4multicache.py, and flask4multicache_demo.py into FastAPI services that run against the test examples.
  3. Skill requirements: familiarity with the FastAPI framework.
charleschile commented 1 week ago

Please assign this to me; I will migrate the framework to FastAPI.

charleschile commented 6 days ago

PR #58