Firstly, thank you for the incredible work on the Multilingual-CLIP model. We have been using it and it is great!
However, we've encountered an issue when input text queries exceed 512 tokens. Here is the error message:
"Token indices sequence length is longer than the specified maximum sequence length for this model (514 > 512). Running this sequence through the model will result in indexing errors."
I wonder if you've considered passing truncation=True to the tokenizer call in the MultilingualCLIP forward method, line 16 here, roughly as sketched below. This change would fix the issue when a text query exceeds the token limit. Thanks!
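For illustration, a minimal sketch of the kind of change we mean, using a standard Hugging Face tokenizer; the checkpoint name and max_length handling here are just examples, not necessarily how you'd want to wire it into the model:

```python
import transformers

# Example checkpoint name; any of the M-CLIP text tokenizers would behave the same way.
tokenizer = transformers.AutoTokenizer.from_pretrained('M-CLIP/XLM-Roberta-Large-Vit-B-32')

texts = ['a very long query ...']  # imagine this tokenizes to more than 512 tokens

# Current behaviour: tokenizer(texts, padding=True, return_tensors='pt')
# lets over-long sequences through and the transformer fails with an indexing error.
# Proposed: also pass truncation=True (optionally with an explicit max_length)
# so the input is clipped to what the underlying transformer accepts.
txt_tok = tokenizer(
    texts,
    padding=True,
    truncation=True,
    max_length=tokenizer.model_max_length,
    return_tensors='pt',
)
```

Truncation does silently drop tokens past the limit, so it might also be worth documenting that behaviour or exposing max_length as an argument, but even plain truncation=True would avoid the crash.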