Open quroom opened 1 week ago
You’re right. I’ll think about a solution for this, hopefully without adding dependencies. If you already have something, I’d be happy to review the PR.
Let's think about it! If I come up with an effective idea, I will let you know. 👍
I’m considering using memoization for the function that retrieves embeddings for a given text. This approach would be simple and efficient, although it wouldn’t persist across restarts.
from functools import lru_cache

import numpy as np


@lru_cache(maxsize=1000)
def get_embeddings(text):
    # Simulated embedding retrieval logic using numpy random values
    return np.random.randn(128)  # Example 128-dimensional embedding vector


# Usage
embedding1 = get_embeddings("example text")
embedding2 = get_embeddings("example text")  # Cached result is returned
This example uses lru_cache to cache embeddings in memory, optimizing retrieval for repeated calls within the same session. Note that the cache won’t persist across restarts.
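If persistence across restarts turns out to matter, the same idea could be backed by the standard library's shelve module instead of keeping everything in memory. A rough sketch (the cache path and the embedding call below are still just placeholders, not actual project code):

import hashlib
import shelve

import numpy as np

CACHE_PATH = "embeddings_cache"  # hypothetical on-disk cache file


def get_embeddings_persistent(text):
    # Disk-backed memoization: cached vectors survive process restarts
    key = hashlib.sha256(text.encode("utf-8")).hexdigest()
    with shelve.open(CACHE_PATH) as db:
        if key in db:
            return db[key]  # reuse the embedding stored on disk
        embedding = np.random.randn(128)  # placeholder for the real embedding call
        db[key] = embedding  # persist for future runs
        return embedding

The trade-off is a bit of disk I/O per lookup, but it still avoids adding any dependencies.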
Even though it has its cons, it could be the easiest solution :)
https://github.com/pkavumba/django-vectordb/blob/085f0c8b87366c52d342ea06467f469bd9c1b573/vectordb/queryset.py#L153
It looks like a new embedding is created even when searching for the same text. Why don't you save the search text somewhere?
Are you planning to support caching or something similar for search text?
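For example, something along these lines, using Django's built-in cache framework (just a rough sketch, not the actual django-vectordb code; get_query_embedding and embedding_fn are placeholder names):

import hashlib

from django.core.cache import cache


def get_query_embedding(text, embedding_fn, timeout=3600):
    # embedding_fn stands in for whatever currently creates the embedding in queryset.py
    key = "vectordb:query:" + hashlib.sha256(text.encode("utf-8")).hexdigest()
    embedding = cache.get(key)
    if embedding is None:
        embedding = embedding_fn(text)  # only computed on a cache miss
        cache.set(key, embedding, timeout)
    return embedding

Since the project already depends on Django, this would reuse whatever cache backend the user has configured without adding new dependencies.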