Description (What does it do?)
This PR generates reproducible UUIDs from the resource readable_id for vector points stored in Qdrant. This lets us directly reference and check for existing embeddings in Qdrant when we have a learning resource or content file. Currently the vector similarity endpoint unnecessarily re-embeds the referenced document even though its embeddings already exist in Qdrant, which causes a slight delay when loading /api/v1/learning_resources/181/vector_similar/. This PR resolves that by re-using the existing embedding.
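As a rough sketch of the idea (the namespace constant, collection name, `vector_point_id` helper, and example readable_id below are illustrative assumptions, not the actual implementation): the point ID is derived deterministically from readable_id, so the endpoint can fetch the stored vector instead of re-embedding the document.

```python
import uuid

from qdrant_client import QdrantClient

# Illustrative namespace; the real code may use a different constant.
RESOURCE_NAMESPACE = uuid.NAMESPACE_URL


def vector_point_id(readable_id: str) -> str:
    """Deterministic Qdrant point ID derived from a resource's readable_id."""
    return str(uuid.uuid5(RESOURCE_NAMESPACE, readable_id))


client = QdrantClient(url="http://localhost:6333")  # assumed local Qdrant instance

# Fetch the already-stored embedding for a resource instead of re-embedding it.
points = client.retrieve(
    collection_name="learning_resources",  # assumed collection name
    ids=[vector_point_id("course-v1:MITx+6.00.1x")],  # example readable_id
    with_vectors=True,
)
if points:
    existing_vector = points[0].vector
    # Reuse the stored vector directly for the similarity query.
    similar = client.search(
        collection_name="learning_resources",
        query_vector=existing_vector,
        limit=10,
    )
```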
How can this be tested?
Check out main and make sure you have learning resources locally
Clear existing collections and generate embeddings via python manage.py generate_embeddings --all --skip-contentfiles
Find a learning resource id and load the vector similarity endpoint /api/v1/learning_resources/{resource id}/vector_similar/ - note the delay in loading
Check out this branch
Make sure you have learning resources locally
Clear existing collections and regenerate embeddings via python manage.py generate_embeddings --all --skip-contentfiles
Find a learning resource and load the vector similarity endpoint /api/v1/learning_resources/{resource id}/vector_similar/ - note how much faster it loads
Additional Context
We generate the UUID from the resource's "readable_id" instead of its "id" so that a "master embeddings" snapshot could be instantly re-used in any environment: readable_id values are stable across environments, whereas database ids are not.
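For instance (purely illustrative, echoing the sketch above), a UUIDv5 is a pure function of namespace plus name, so every environment derives the identical point ID from the same readable_id, regardless of the row's database primary key:

```python
import uuid

# The same readable_id yields the same point ID in any environment.
a = uuid.uuid5(uuid.NAMESPACE_URL, "course-v1:MITx+6.00.1x")
b = uuid.uuid5(uuid.NAMESPACE_URL, "course-v1:MITx+6.00.1x")
assert a == b
```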
What are the relevant tickets?
Closes https://github.com/mitodl/hq/issues/6094