Following this tutorial, Abstractive Summarization with Hugging Face Transformers, I created a text summarization ML model by fine-tuning t5-small with a custom dataset, setting
MAX_INPUT_LENGTH = 1024
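For context, MAX_INPUT_LENGTH is the value I pass to the tokenizer when preparing the training examples, roughly like the sketch below (the checkpoint name, task prefix, and column name are placeholders for my custom setup, following the tutorial's general pattern):

```python
from transformers import AutoTokenizer

MAX_INPUT_LENGTH = 1024  # the value I set, following the tutorial

# placeholder: the tutorial starts from the t5-small checkpoint
tokenizer = AutoTokenizer.from_pretrained("t5-small")

def preprocess_function(examples):
    # "document" is a placeholder column name for my custom dataset;
    # T5 uses a task prefix for summarization
    inputs = ["summarize: " + doc for doc in examples["document"]]
    # training inputs are truncated to MAX_INPUT_LENGTH here
    return tokenizer(inputs, max_length=MAX_INPUT_LENGTH, truncation=True)
```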
But if I try the model like this:
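(A minimal sketch of the kind of call I mean, using the transformers summarization pipeline; the saved-model path, the input text, and the length limits are placeholders:)

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM, pipeline

# placeholder path: wherever the fine-tuned model was saved
tokenizer = AutoTokenizer.from_pretrained("my-t5-small-finetuned")
model = TFAutoModelForSeq2SeqLM.from_pretrained("my-t5-small-finetuned")

summarizer = pipeline("summarization", model=model, tokenizer=tokenizer, framework="tf")

text = "..."  # the long article (655 tokens) that triggers the warning
print(summarizer(text, min_length=5, max_length=128))
```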
This is the result I got:

Token indices sequence length is longer than the specified maximum sequence length for this model (655 > 512). Running this sequence through the model will result in indexing errors
[{'summary_text': 'The Pembina Trail was a 19th century trail used by Métis and European settlers to travel between Fort Garry and Fort Pemmbina in what is now the Canadian province of Manitoba and U.S. state of North Dakota. It was part of the larger Red River Trail network and is now a new version of it is now called the Lord Selkirk and Pembinea Highways in Manitoba. It is important because it allowed people to travel to and from the Red River for social or political reasons.'}]

But why does it say the maximum sequence length for this model is 512 when I initially set it to 1024?