Closed graceyangfan closed 3 years ago
Hello, I'm reading the original code and I find that the lengths of the encodings differ across samples. The BERT model needs a constant sentence-length input, so I want to know where the encodings are padded or truncated.
Hi, we pad the sequences. The collate_fn function (passed to the DataLoader) takes care of padding.
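For reference, here is a minimal sketch of what such a collate_fn could look like. It assumes each dataset item is a dict with variable-length "input_ids" and a "label" (those field names are assumptions, not taken from this repo), and pads with token id 0, which is the [PAD] id in standard BERT vocabularies.

```python
import torch
from torch.nn.utils.rnn import pad_sequence

def collate_fn(batch):
    # Each item is assumed to be {"input_ids": [int, ...], "label": int}.
    input_ids = [torch.tensor(item["input_ids"]) for item in batch]
    labels = torch.tensor([item["label"] for item in batch])

    # Pad every sequence in the batch to the length of the longest one.
    padded_ids = pad_sequence(input_ids, batch_first=True, padding_value=0)

    # Attention mask: 1 for real tokens, 0 for padding positions.
    attention_mask = (padded_ids != 0).long()

    return {"input_ids": padded_ids,
            "attention_mask": attention_mask,
            "labels": labels}

# Usage (illustrative): DataLoader(dataset, batch_size=16, collate_fn=collate_fn)
```

Because padding happens per batch, sequences only need to match the longest example in their own batch rather than a fixed global length.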