weaviate / t2v-transformers-models

This is the repo for the container that holds the models for the text2vec-transformers module
BSD 3-Clause "New" or "Revised" License

Direct tokenization #64

Open kl-thamm opened 1 year ago

kl-thamm commented 1 year ago

I had an issue with the t2v-transformers today:

I created embeddings using a sentence-transformers model, once with the sentence-transformers Python library and once with the t2v-transformers container. The cosine distance between the resulting vectors was up to 0.16.

@antas-marcin quickly and greatly helped me by suggesting I set "T2V_TRANSFORMERS_DIRECT_TOKENIZE=true". This reduced the cosine distance to almost 0.
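
For reference, a minimal sketch of the kind of comparison described above. Assumptions not taken from this issue: the inference container listens on localhost:8080 and exposes a POST /vectors endpoint returning a "vector" field, and "sentence-transformers/all-MiniLM-L6-v2" is only a stand-in for the actual model.

```python
# Minimal sketch of the comparison (assumptions: container on localhost:8080
# with a POST /vectors endpoint; all-MiniLM-L6-v2 as a stand-in model).
import requests
import numpy as np
from sentence_transformers import SentenceTransformer

text = "Weaviate is a vector database. It stores both objects and vectors."

# 1) Embedding via the sentence-transformers library directly.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
vec_lib = model.encode(text)

# 2) Embedding via the t2v-transformers container.
#    To reproduce the near-zero distance, the container would be started with
#    T2V_TRANSFORMERS_DIRECT_TOKENIZE=true.
resp = requests.post("http://localhost:8080/vectors", json={"text": text})
vec_container = np.array(resp.json()["vector"])

cosine_distance = 1 - np.dot(vec_lib, vec_container) / (
    np.linalg.norm(vec_lib) * np.linalg.norm(vec_container)
)
print("cosine distance:", cosine_distance)
```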

When looking into what it does, I noticed two things:

  1. It's a bit difficult to follow in the code, because "tokenize" actually has two meanings
  2. T2V_TRANSFORMERS_DIRECT_TOKENIZE is not well documented but could theoretically be very important

Regarding 1: "Tokenize" in the context of this program refers both to splitting the input into sentences and to applying the transformers tokenizer. I suggest renaming direct_tokenize to shall_split_in_sentences or something similar. shall_embed_sentence_per_sentence might be even more precise, but it is a bit verbose. Other suggestions are very welcome; this is just the general idea. The environment variable would accordingly become T2V_SHALL_SPLIT_IN_SENTENCES (see the commit).

Regarding 2: To me this setting seems important and should be documented somewhere. I don't know how to suggest edits for the documentation, so I am writing down what I think would be helpful here:

Environment settings: T2V_SHALL_SPLIT_IN_SENTENCES: If not set, defaults to true. If set to false, the raw input is used.

By default, all t2v-transformers containers split the input into sentences using nltk with English punctuation rules and calculate the mean over the sentence embeddings. This allows embedding inputs of arbitrary length, but it will produce unexpected results if your text does not have the expected punctuation. Embedding on a per-sentence level could, at least theoretically, degrade the embedding model's performance in cases where it produces better results with longer inputs.

(Also, could this be significantly slower, embedding sentence by sentence rather than a larger input at once?)
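
To illustrate the behaviour described above, here is a rough sketch that approximates the default path (nltk sentence split plus mean over per-sentence embeddings) versus embedding the raw input in one pass. It is not the container's exact code path, and the model name is again only a stand-in:

```python
# Rough illustration (not the container's exact code path): default behaviour
# approximated as nltk sentence split + mean over per-sentence embeddings,
# versus embedding the whole input at once (direct tokenization).
import nltk
import numpy as np
from sentence_transformers import SentenceTransformer

nltk.download("punkt", quiet=True)
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # stand-in model

text = "Weaviate is a vector database. It stores both objects and vectors."

# Default-style path: split into sentences, embed each, average.
sentences = nltk.sent_tokenize(text)
vec_split = np.mean(model.encode(sentences), axis=0)

# Direct path: embed the whole input in one pass.
vec_direct = model.encode(text)

cos = np.dot(vec_split, vec_direct) / (
    np.linalg.norm(vec_split) * np.linalg.norm(vec_direct)
)
print("cosine distance:", 1 - cos)  # non-zero whenever the two paths diverge
```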

weaviate-git-bot commented 1 year ago

To avoid any confusion in the future about your contribution to Weaviate, we work with a Contributor License Agreement. If you agree, you can simply add a comment to this PR that you agree with the CLA so that we can merge.

beep boop - the Weaviate bot 👋🤖

PS:
Are you already a member of the Weaviate Slack channel?

antas-marcin commented 1 year ago

@kl-thamm if you want us to be able to merge your PR, you need to agree to the CLA. Simply replying here in a comment "I agree to CLA" will let us merge your PR.

kl-thamm commented 1 year ago

@antas-marcin Thanks! I agree to CLA. The problem is that the smoke test runs fine for me locally with the model that I use, and some tests passed here as well. If the tests now fail after the additional commits I made, I would be unsure how to proceed :)