embeddings-benchmark / mteb

MTEB: Massive Text Embedding Benchmark
https://arxiv.org/abs/2210.07316
Apache License 2.0

Be able to cache embeddings and load them #946

Open orionw opened 4 months ago

orionw commented 4 months ago

For most users, being able to cache their embedded docs and/or provide a cached embedding file is probably overkill.

However, there are many situations where it would be helpful to have the option to cache them. For example, experiments where you alter the query/document set for speedups (as I'm doing now), or where you're testing the effect of different prefixes/instructions over the same dataset.

I typically use pyserini to cache the index so that we can quickly search over it later, but that doesn't integrate nicely with mteb. I think it would be fairly straightforward to implement this: (1) take in a flag indicating whether to cache the embeddings, and write them to a file named after the dataset and model, and (2) provide an option to read in a cached embedding file.
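Roughly, the idea could look something like this (a minimal sketch; the helper names, the cache layout, and the .npy format are illustrative assumptions, not anything that exists in mteb):

import os

import numpy as np


def cache_path(model_name: str, dataset_name: str, cache_dir: str = "embedding_cache") -> str:
    # Hypothetical helper: one cache file per (model, dataset) pair.
    os.makedirs(cache_dir, exist_ok=True)
    return os.path.join(cache_dir, f"{model_name.replace('/', '__')}__{dataset_name}.npy")


def encode_with_cache(model, sentences, model_name, dataset_name):
    # Reuse the cached embeddings when the file already exists; otherwise encode and save.
    path = cache_path(model_name, dataset_name)
    if os.path.exists(path):
        return np.load(path)
    embeddings = model.encode(sentences)
    np.save(path, embeddings)
    return embeddings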

I don't have bandwidth for this right now, but if anyone does it would be an excellent addition.

tenzu15 commented 4 months ago

Hey @orionw ,

I would like to try this if possible!

orionw commented 4 months ago

Awesome @tenzu15! It would be great to be able to pass two flags to the mteb.run command, something like cache_embeddings: bool = True and cached_embedding_file: str.
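
For illustration, a rough sketch of how those flags might look in use (the flag names follow the proposal above; the exact run signature is hypothetical and not the current mteb API, and the model/task names are just examples):

import mteb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
evaluation = mteb.MTEB(tasks=["NFCorpus"])

# Hypothetical flags (not yet part of mteb): the first run writes embeddings
# to disk, later runs read them back instead of re-encoding.
evaluation.run(
    model,
    cache_embeddings=True,
    cached_embedding_file="NFCorpus__all-MiniLM-L6-v2.npy",
)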

For now this would need to be implemented in the RetrievalEvaluator class; if it's useful for other tasks, we can add it there as well. Also cc'ing @KennethEnevoldsen, who may have opinions on where this should be added and what the names should be.

But feel free to start, @tenzu15. If you have any questions, make a draft PR and cc me!

KennethEnevoldsen commented 4 months ago

@orionw, wouldn't it be better to implement a more general model wrapping for this so that it works for all tasks?

class ModelWrap:
    def __init__(self, model):
        self.model = model
        self.cache = {}  # sentence -> embedding

    def encode(self, sentences, **kwargs):
        # Delegate to the wrapped model, then store the embeddings for reuse.
        embeddings = self.model.encode(sentences, **kwargs)
        self.store_embeddings(sentences, embeddings)
        return embeddings

    def store_embeddings(self, sentences, embeddings):
        # Minimal in-memory store; a real implementation would persist to disk.
        self.cache.update(zip(sentences, embeddings))

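For illustration, a hedged sketch of how such a wrapper could be dropped in front of any model before running an evaluation (the model and task names are just examples):

import mteb
from sentence_transformers import SentenceTransformer

wrapped = ModelWrap(SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2"))
evaluation = mteb.MTEB(tasks=["NFCorpus"])
evaluation.run(wrapped)  # every encode call now also stores its embeddings
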
isaac-chung commented 4 months ago

There's some background discussion related to the topic from https://github.com/embeddings-benchmark/mteb/issues/354#issuecomment-2055435397 as well.

orionw commented 4 months ago

+1 @KennethEnevoldsen, I think a wrapper is a great idea and even simpler to implement.

KennethEnevoldsen commented 1 month ago

It sounds like we settled on a wrapper here, in which case I don't think it is something that needs to live within mteb. Let me know if you disagree and I will re-open the issue.

orionw commented 1 month ago

Personally, I think it would be nice to have this as full functionality in MTEB so you can cache things. Maybe it's just my research, but not having to recompute the embeddings would save a lot of time, and I frequently store them with Pyserini instead. If this were in MTEB, it would also allow us to put the indexes on HF so people could just grab and use them.

If no one else finds it useful we can leave it unimplemented but I personally would find it very useful.

KennethEnevoldsen commented 1 month ago

Will leave this open then.

I definitely think public caches are important; the Scandinavian embedding benchmark implements this for results. Is there a reason such an approach would not work here? Debugging and error analysis, I presume?

orionw commented 1 month ago

Thanks! I wasn't aware of the Scandinavian embedding benchmark cache - do you mind linking?

KennethEnevoldsen commented 1 month ago

The cache for results is here:

https://github.com/KennethEnevoldsen/scandinavian-embedding-benchmark/tree/main/src/seb/cache

It is implemented as part of the package: if you try to rerun a model that has already been run, it simply uses the cache.
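
The pattern is roughly this (a generic sketch, not SEB's actual code; evaluate_model stands in for whatever call actually runs the benchmark):

import json
import os


def run_with_result_cache(model_name: str, task_name: str, cache_dir: str = "cache"):
    # Reuse a stored result when this (model, task) pair has already been evaluated.
    path = os.path.join(cache_dir, model_name.replace("/", "__"), f"{task_name}.json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    result = evaluate_model(model_name, task_name)  # hypothetical: the actual evaluation call
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump(result, f)
    return result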

orionw commented 1 month ago

These are the cached results, right? I don't see any embeddings, but maybe I missed them.

KennethEnevoldsen commented 1 month ago

Yeah, only results, so no embeddings.