Transformers for Information Retrieval, Text Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conversational AI
Input truncated to fewer than 512 tokens #1574
Open
khaddeep opened 5 months ago
Describe the bug: I set the max sequence length to 512, but the input is still being truncated during prediction.
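For context, this is roughly the configuration being described; the argument name `max_seq_length` is assumed from simpletransformers' conventions, so verify it against the version in use:

```python
# Illustrative simpletransformers-style model args (names assumed,
# check against your installed version of the library).
model_args = {
    "max_seq_length": 512,  # the limit this report says is being ignored
}

print(model_args["max_seq_length"])
```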
To Reproduce: Take a paragraph longer than 128 tokens and run prediction on it.
Expected behavior: The input should not be truncated when the max sequence length is set to 512.
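One thing worth noting: BERT-style tokenizers reserve positions for special tokens (e.g. `[CLS]` and `[SEP]`), so the usable budget is slightly below the configured maximum. A minimal, library-independent sketch of that truncation behaviour (function and variable names are illustrative, not the library's actual API):

```python
def truncate_tokens(tokens, max_seq_length=512, num_special_tokens=2):
    """Cap a token list at max_seq_length, reserving room for
    special tokens the way BERT-style tokenizers typically do."""
    budget = max_seq_length - num_special_tokens
    return tokens[:budget]

# A 600-token input keeps only the first 510 tokens at max_seq_length=512.
tokens = [f"tok{i}" for i in range(600)]
kept = truncate_tokens(tokens)
print(len(kept))  # 510
```

So even with a correctly applied `max_seq_length` of 512, some trailing tokens of a long paragraph are expected to be dropped.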