Hi.

I noticed that when the input text sequence is too long, it is truncated to 77 tokens; however, no EOT token is added at the end.
For example, with a short text I get the following tokenization, with EOT = 49407 as the last token. But with longer sequences, I do not see any EOT = 49407 appended.
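Here is a minimal sketch of how I am checking this (using `data.load_and_transform_text` as in the README; the repeated-word input is just a quick way to exceed 77 tokens):

```python
from imagebind import data

# Short text: the printed ids include EOT = 49407, followed by zero padding.
short_tokens = data.load_and_transform_text(["a photo of a cat"], "cpu")
print(short_tokens[0])

# Long text (well over 77 BPE tokens): the sequence is cut at 77 ids
# and the final id is an ordinary token, not 49407.
long_tokens = data.load_and_transform_text(["cat " * 200], "cpu")
print(long_tokens[0][-5:])
```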
Is this intended? If so, what is the reasoning behind it?
I also noticed that I get the same embedding values for different text sequences longer than 77 tokens, even though tokenization produces different token ids for them (but no EOT).
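A minimal repro sketch for that embedding observation (model loading and input handling copied from the README usage; the two repeated-word inputs are my own arbitrary choice):

```python
import torch
from imagebind import data
from imagebind.models import imagebind_model
from imagebind.models.imagebind_model import ModalityType

device = "cpu"
model = imagebind_model.imagebind_huge(pretrained=True).eval().to(device)

# Two texts that differ from the very first token on, both well over 77 tokens.
text_a = "cat " * 200
text_b = "dog " * 200

with torch.no_grad():
    emb_a = model({ModalityType.TEXT: data.load_and_transform_text([text_a], device)})[ModalityType.TEXT]
    emb_b = model({ModalityType.TEXT: data.load_and_transform_text([text_b], device)})[ModalityType.TEXT]

# This prints True for me, even though the token ids of the two inputs differ.
print(torch.allclose(emb_a, emb_b))
```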
Also, from my understanding (please correct me if I am wrong), ImageBind uses CLIP. However, in the CLIP implementation the EOT token is added back when truncating a long sequence: https://github.com/openai/CLIP/blob/a1d071733d7111c9c014f024669f959182114e33/clip/clip.py#L239C1-L240C39
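For reference, this is the truncation branch I mean (excerpted from the linked `clip/clip.py`; `truncate=True` keeps the first 77 ids and overwrites the last one with EOT):

```python
if len(tokens) > context_length:
    if truncate:
        tokens = tokens[:context_length]
        tokens[-1] = eot_token
```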
Any idea what I am doing wrong?
Thanks.