stanford-futuredata / ColBERT

ColBERT: state-of-the-art neural search (SIGIR'20, TACL'21, NeurIPS'21, NAACL'22, CIKM'22, ACL'23, EMNLP'23)
MIT License

Fix tokenization for query marker #351

Open · sleep-ee opened this pull request 3 weeks ago

sleep-ee commented 3 weeks ago

Description

This pull request addresses inconsistent tokenization when prepending a query marker to the input text. The current implementation prepends ". " to each string and then overwrites that placeholder with the query marker after tokenization. This can behave inconsistently because different tokenizers handle the punctuation and whitespace differently, leaving superfluous IDs in the tokenized output.
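For context, the placeholder approach looks roughly like this (a simplified, free-function sketch of the tensorize method based on the description above, not the exact repository code; parameter names are illustrative):

```python
from transformers import AutoTokenizer

def tensorize(tokenizer, batch_text, q_marker_id, query_maxlen=32):
    # Prepend a ". " placeholder so every sequence has a token at position 1.
    batch_text = ['. ' + x for x in batch_text]
    obj = tokenizer(batch_text, padding='max_length', truncation=True,
                    return_tensors='pt', max_length=query_maxlen)
    ids, mask = obj['input_ids'], obj['attention_mask']
    # Overwrite the placeholder's id with the query marker id. This silently
    # assumes ". " always tokenizes to exactly one token sitting at
    # position 1, which does not hold for every tokenizer.
    ids[:, 1] = q_marker_id
    return ids, mask

tok = AutoTokenizer.from_pretrained('bert-base-uncased')
ids, mask = tensorize(tok, ['what is colbert?'],
                      tok.convert_tokens_to_ids('[unused0]'))
```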

To resolve this, the changes prepend the query marker directly to the beginning of each string before tokenization. This avoids the inconsistency by ensuring that the query marker is always tokenized as a single token.

Changes

- Modified the tensorize method in doc_tokenizer.py and query_tokenizer.py to prepend the query marker directly to each string in batch_text (sketched below).
- Included a utility that tests whether the query marker tokenizes into a single token, to ensure consistency across different tokenizers.
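A sketch of the proposed behavior, under the same illustrative conventions as above; the helper name marker_is_single_token is hypothetical:

```python
from transformers import AutoTokenizer

def marker_is_single_token(tokenizer, marker: str) -> bool:
    # Hypothetical utility: the marker should map to exactly one vocab id.
    return len(tokenizer(marker, add_special_tokens=False)['input_ids']) == 1

def tensorize(tokenizer, batch_text, marker='[unused0]', query_maxlen=32):
    # Prepend the marker text itself instead of a ". " placeholder.
    assert marker_is_single_token(tokenizer, marker), 'marker splits into several ids'
    batch_text = [f'{marker} {x}' for x in batch_text]
    obj = tokenizer(batch_text, padding='max_length', truncation=True,
                    return_tensors='pt', max_length=query_maxlen)
    return obj['input_ids'], obj['attention_mask']
```

Note that, as the review below points out, the single-token assumption fails for BERT's [unusedN] strings unless the tokenizer registers them as special tokens, so the assert in this sketch would actually fire for a stock bert-base-uncased tokenizer.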

Related Issues

Fixes #346.

NohTow commented 1 week ago

The problem with this code is that the marker tokens are not tokenized correctly. For the ColBERTv2 model, the query/doc markers are respectively [unused0] and [unused1], with corresponding ids 1 and 2.

With this code, the resulting [CLS] + marker is tokenized as [101, 1031, 15171, 2487, 1033], i.e. [CLS] [ unused ##1 ], instead of [101, 2], i.e. [CLS] [unused1].
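This is easy to reproduce (a minimal sketch, assuming the stock bert-base-uncased tokenizer that the ColBERTv2 checkpoint builds on):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained('bert-base-uncased')

# Fed in as plain text, the marker is split by the basic tokenizer (brackets)
# and then by wordpiece ("unused" + "##1"): four ids instead of one.
print(tok('[unused1]', add_special_tokens=False)['input_ids'])
# [1031, 15171, 2487, 1033]  ->  "[", "unused", "##1", "]"

# A direct vocab lookup yields the single intended id.
print(tok.convert_tokens_to_ids('[unused1]'))
# 2
```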