simonw opened 7 months ago
There isn't a `-o` mechanism for embedding models at the moment.
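For context, whatever mechanism we add would ultimately need to set a field in the request body sent to the provider. Here's a minimal sketch of building that body, assuming the `model`/`texts`/`task_type` field names from the Nomic API docs; the helper itself is hypothetical and not part of LLM:

```python
def build_nomic_payload(texts, task_type="search_document"):
    """Build the JSON body for a nomic-embed-text request.

    Field names are assumed from the Nomic embedding API docs; this
    helper is illustrative only, not an existing LLM API.
    """
    allowed = {"search_document", "search_query", "clustering", "classification"}
    if task_type not in allowed:
        raise ValueError(f"unknown task_type: {task_type}")
    return {
        "model": "nomic-embed-text-v1.5",
        "texts": list(texts),
        "task_type": task_type,
    }
```

A `-o task_type search_query` option would then just need to thread that value through to this payload.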
I think the best solution is to add the concept of embedding "modes" to LLM core. I like the term "mode" more than "task types", but I should check whether there is already widely accepted terminology for this.
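Purely as a sketch of what a "mode" could look like in core (every name here is hypothetical, nothing like this exists in LLM today): a model would declare the modes it supports and map each generic mode to its provider-specific parameter.

```python
class EmbeddingModelWithModes:
    """Hypothetical sketch of an embedding model that supports modes."""

    supported_modes = ("document", "query", "clustering", "classification")

    # How a Nomic-backed model might translate a generic LLM mode to
    # the provider's task_type value (mapping is illustrative).
    MODE_TO_TASK_TYPE = {
        "document": "search_document",
        "query": "search_query",
        "clustering": "clustering",
        "classification": "classification",
    }

    def embed(self, text, mode="document"):
        if mode not in self.supported_modes:
            raise ValueError(f"unsupported mode: {mode}")
        return self._embed_with_task_type(text, self.MODE_TO_TASK_TYPE[mode])

    def _embed_with_task_type(self, text, task_type):
        # Placeholder: a real plugin would call the provider API here.
        return {"text": text, "task_type": task_type}
```

The key property is that "query" vs "document" stays a per-call parameter on one model, rather than splitting into separate models.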
Currently we don't pass a `task_type` - the API docs at https://docs.nomic.ai/reference/endpoints/nomic-embed-text say this:

How can we support these? A few options:

1. A `-o` option
2. Separate models: `nomic-embed-text-v1.5-512-clustering` etc.
3. Embedding "modes" in LLM core

The second option is a bad idea. It would result in 4x the number of models, but it's also bad because the point of the task types is that you CAN compare `search_document` with `search_query` - and LLM currently enforces that embeddings can only be compared if they belong to the same model. The first would work as a short-term fix.
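To illustrate why the query and document embeddings need to stay under one model: comparing them only makes sense when both vectors come from the same embedding space. A toy cosine-similarity check (the vectors below are made up, not real model output):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query_vec = [0.1, 0.9, 0.2]    # e.g. embedded with task_type=search_query
doc_vec = [0.15, 0.85, 0.25]   # e.g. embedded with task_type=search_document

score = cosine_similarity(query_vec, doc_vec)
```

If the two task types were registered as separate models, LLM's same-model rule would forbid exactly this comparison.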
The third idea is the most interesting. These are not the only embeddings that differentiate between search queries and documents - it's a really useful concept for implementing RAG. See also E5-large-v2 (though that one works by including magic prefixes on the strings to be embedded, e.g. `"query: question here"`).
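The E5 approach encodes the task type in the text itself rather than as an API parameter, which a "mode" abstraction could also cover. A minimal sketch using the `query: ` / `passage: ` prefixes that the E5 models document:

```python
def e5_prefix(text, mode="passage"):
    """Prepend the E5 magic prefix for the given mode.

    E5 models expect "query: " on search queries and "passage: " on
    the documents being searched; embedding unprefixed text degrades
    retrieval quality.
    """
    prefixes = {"query": "query: ", "passage": "passage: "}
    return prefixes[mode] + text

docs = [e5_prefix("Paris is the capital of France.", "passage")]
question = e5_prefix("What is the capital of France?", "query")
```

A mode-aware model class could apply these prefixes internally, so callers still just pass `mode="query"` or `mode="document"` regardless of whether the provider uses a parameter or a prefix.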