boumelhaa opened 8 months ago
This is a feature we are planning to add to Feast. As an initial, not-fully-formed thought, we can add an API that can both index and retrieve embeddings; Faiss could probably serve as a first stab. Will initiate an RFC for the work.
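For concreteness, a minimal sketch of what that index/retrieve pair could look like on top of Faiss (the dimensions and data below are illustrative, not a proposed Feast API):

```python
# Minimal Faiss sketch: index embeddings, then retrieve the k most similar.
import faiss
import numpy as np

dim = 128                       # example embedding dimensionality
index = faiss.IndexFlatIP(dim)  # exact inner-product search; ANN variants (IVF, HNSW) also exist

# "Index" step: L2-normalize so inner product equals cosine similarity, then add.
embeddings = np.random.rand(10_000, dim).astype("float32")
faiss.normalize_L2(embeddings)
index.add(embeddings)

# "Retrieve" step: top-5 most similar vectors for a query embedding.
query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)
```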
Yeah, Faiss seems like a promising candidate, although the in-memory aspect can pose scalability and re-indexing challenges. Integrating it with Elasticsearch would enhance its appeal, allowing for the registration of embeddings and direct materialization into ES or any other search API, offering an alternative to traditional online store technologies.
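As a rough illustration of that registration/materialization side, assuming the official `elasticsearch` Python client and ES 8.x (the index and field names here are made up):

```python
# Sketch: create an ES index with a dense_vector field so materialized
# embeddings can be ANN-searched later. Names are illustrative only.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.create(
    index="item_embeddings",  # hypothetical index name
    mappings={
        "properties": {
            "item_id": {"type": "keyword"},
            "embedding": {
                "type": "dense_vector",
                "dims": 128,
                "index": True,          # enables ANN (HNSW) indexing
                "similarity": "cosine",
            },
        }
    },
)

# "Materialization" step: write one feature row with its embedding.
es.index(
    index="item_embeddings",
    id="item-1",
    document={"item_id": "item-1", "embedding": [0.1] * 128},
)
```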
@boumelhaa for the online store with search functionality, are you using Elasticsearch?
Don't really know much about the subject, but it feels like people might want to use different technologies for normal feature lookup and vector search. Is it the right time to also start thinking about (better) supporting configuration of multiple online stores in the same Feast project?
Not sure how that would help the search use case, but yeah, definitely a good feature to have.
This is a great topic and something I was actually quite excited about supporting. Glad to see @HaoXuAI already on it! 🚀
@boumelhaa for the online store with search functionality, are you using Elasticsearch?
I have used Elasticsearch in a project aimed at optimizing vector search and experimenting with its Approximate Nearest Neighbors (ANN) functionality. Locally, it demonstrated excellent search speed using cosine similarity. Additionally, ES can be readily managed by AWS (OpenSearch) and other cloud providers.
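To make the ANN/cosine point concrete, a sketch of a kNN query against ES 8.x (the index and field names are assumptions, matching the mapping sketch above):

```python
# Sketch: cosine-similarity ANN search via Elasticsearch's kNN search API.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="item_embeddings",          # hypothetical index name
    knn={
        "field": "embedding",
        "query_vector": [0.1] * 128,  # request-time query embedding
        "k": 5,                       # number of nearest neighbors to return
        "num_candidates": 50,         # ANN candidate pool (recall vs. speed)
    },
)
for hit in resp["hits"]["hits"]:
    print(hit["_id"], hit["_score"])
```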
From my perspective, if we opt for ES or any other search technology, we'll need to abstract away all the tedious aspects of indexing the vectors and searching through them, as well as implement methods to retrieve the k most similar vectors using multiple algorithms (brute force, ANN, etc.), just like Faiss does.
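One possible shape for that abstraction, sketched as a Python interface (the class and method names are hypothetical, not an existing Feast API):

```python
# Hypothetical backend-agnostic interface over Faiss, ES, etc.
from abc import ABC, abstractmethod
from typing import List, Sequence, Tuple


class VectorStore(ABC):
    """Hides backend-specific indexing and search behind one interface."""

    @abstractmethod
    def index(self, ids: Sequence[str], embeddings: Sequence[Sequence[float]]) -> None:
        """Register embeddings under stable ids."""

    @abstractmethod
    def search(
        self, query: Sequence[float], k: int, method: str = "ann"
    ) -> List[Tuple[str, float]]:
        """Return the k most similar (id, score) pairs; `method` could be
        "ann" or "brute_force", mirroring what Faiss offers."""
```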
In feature_store.yaml, I believe it would be prudent to distinguish it from both the offline and online stores, optionally adding it on top of them, as not all use cases will require search functionality.
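A hypothetical feature_store.yaml along those lines; the `vector_store` block and all of its keys are invented purely for illustration, not real Feast configuration:

```yaml
project: ranking
registry: data/registry.db
provider: aws
offline_store:
  type: bigquery
online_store:
  type: redis
  connection_string: "localhost:6379"
# Hypothetical opt-in section, only for projects that need search:
vector_store:
  type: elasticsearch
  hosts: ["http://localhost:9200"]
  similarity: cosine
```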
Just want to close this out: we ended up implementing Elasticsearch for the Feast VectorDB work. @boumelhaa did you see this?
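For anyone landing here later: as I recall it, the resulting API lets you query document embeddings from the online store along these lines, though the method name and parameters should be checked against the current Feast docs:

```python
# Hedged sketch of Feast's vector retrieval; verify the exact signature
# of retrieve_online_documents against the release you are using.
from feast import FeatureStore

store = FeatureStore(repo_path=".")
query_embedding = [0.1] * 128  # request-time query vector (example)

response = store.retrieve_online_documents(
    feature="items:embedding",  # hypothetical feature reference
    query=query_embedding,
    top_k=5,
)
print(response.to_dict())
```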
This is a question rather than a bug report. We are currently building a scalable architecture for our ranking system using Feast. As a backend, we use GCP for the offline store and Redis on AWS, close to our serving environment, for the online store.
Feast effectively abstracts feature vectors for classical models and batch inference. However, complexity arises when we integrate embeddings into our recommendation system.
While Feast works well for training the embedding model and encoding embeddings in offline batches, the challenge lies in serving those embeddings. My question is how a vector search solution fits into this architecture: where should the embeddings reside, and do we need to register them first?
In essence, given our two-tower ranking model, where the first tower's embeddings are encoded offline and the second tower's embeddings are encoded at request time and then searched against the pre-encoded embeddings, how can we structure this using Feast?
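For concreteness, a minimal sketch of the flow being described, using Faiss as a stand-in for the vector search backend (the tower models, data, and ids are placeholders, and none of this is a Feast API):

```python
# Two-tower serving sketch: first-tower item embeddings are encoded
# offline and indexed; the second tower encodes each request online.
import faiss
import numpy as np

dim = 64

# Offline: batch-encoded first-tower (item) embeddings, loaded and indexed.
item_embeddings = np.random.rand(1_000, dim).astype("float32")  # stand-in data
faiss.normalize_L2(item_embeddings)
index = faiss.IndexFlatIP(dim)
index.add(item_embeddings)

def encode_request(user_features: np.ndarray) -> np.ndarray:
    """Stand-in for the second (query) tower, run at request time."""
    return user_features.astype("float32")

# Online: encode the request, then search the pre-encoded item embeddings.
query = encode_request(np.random.rand(1, dim))
faiss.normalize_L2(query)
scores, item_ids = index.search(query, 10)
# The returned item_ids can then be hydrated with regular features
# from the Feast online store before final ranking.
```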