ThomasVitale / llm-apps-java-spring-ai

Samples showing how to build Java applications powered by Generative AI and LLMs using Spring AI and Spring Boot.

Is the semantic search example similar to the GPTcache project? #1

Open oneanime opened 1 week ago

ThomasVitale commented 1 week ago

The underlying techniques are the same, but the goal is different. GPTcache provides a cache between an application and an LLM service. Using the same principles as the semantic search example in this repo, it generates an embedding for every user request sent to the model and stores it in a vector database. When a new request comes in, GPTcache performs a semantic search on the vector database to check whether a similar request has been sent before. If it finds a match, it returns the cached response instead of calling the LLM service again.
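To make the idea concrete, here is a minimal sketch of a semantic cache built on the Spring AI abstractions used in this repo. It assumes the Spring AI 1.0-style `ChatClient`, `VectorStore`, `Document`, and `SearchRequest` builder APIs; the `SemanticCachingChatService` name, the `0.90` similarity threshold, and the `"answer"` metadata key are illustrative choices, not how GPTcache itself is implemented.

```java
import java.util.List;
import java.util.Map;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.document.Document;
import org.springframework.ai.vectorstore.SearchRequest;
import org.springframework.ai.vectorstore.VectorStore;
import org.springframework.stereotype.Service;

// Hypothetical sketch: cache LLM answers keyed by the semantic similarity of the question.
@Service
class SemanticCachingChatService {

    private final ChatClient chatClient;
    private final VectorStore vectorStore;

    SemanticCachingChatService(ChatClient.Builder chatClientBuilder, VectorStore vectorStore) {
        this.chatClient = chatClientBuilder.build();
        this.vectorStore = vectorStore;
    }

    String chat(String question) {
        // 1. Semantic search: look for a previously asked question that is
        //    close enough in embedding space (threshold is arbitrary here).
        List<Document> matches = vectorStore.similaritySearch(SearchRequest.builder()
                .query(question)
                .topK(1)
                .similarityThreshold(0.90)
                .build());

        if (matches != null && !matches.isEmpty()) {
            // Cache hit: return the answer stored alongside the earlier question.
            return matches.get(0).getMetadata().get("answer").toString();
        }

        // 2. Cache miss: call the LLM, then store question + answer for next time.
        String answer = chatClient.prompt().user(question).call().content();
        vectorStore.add(List.of(new Document(question, Map.of("answer", answer))));
        return answer;
    }
}
```

The design choice in this sketch is to store the question as the document text, so the embedding is computed over the question, while the generated answer travels along as metadata and is returned directly on a cache hit.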

Does that answer your question?