lavis-nlp / spert

PyTorch code for SpERT: Span-based Entity and Relation Transformer
MIT License

Padding or truncating on sentence? #39

Closed: graceyangfan closed this issue 3 years ago

graceyangfan commented 3 years ago

Hello, I'm reading the original code and I noticed that the encodings have different lengths for different samples. The BERT model needs a fixed-length input, so I'd like to know where the encodings are padded or truncated.

markus-eberts commented 3 years ago

Hi, we pad the sequences. The collate_fn function (passed to the DataLoader) takes care of padding.
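
For illustration, a padding `collate_fn` generally looks like the minimal sketch below. This is a generic example, not SpERT's actual implementation; the field names `encodings` and `context_masks` and the padding token id `0` are assumptions chosen for the sketch.

```python
import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader

def collate_fn_padding(batch):
    # batch: list of samples, each holding a 1-D tensor of token ids
    # (the key "encodings" is illustrative, not necessarily SpERT's field name)
    encodings = [sample["encodings"] for sample in batch]

    # Pad every sequence in the batch to the length of the longest one;
    # 0 is assumed to be the tokenizer's [PAD] token id (true for BERT).
    padded = pad_sequence(encodings, batch_first=True, padding_value=0)

    # Attention mask: 1 for real tokens, 0 for padding.
    masks = (padded != 0).long()

    return {"encodings": padded, "context_masks": masks}

# Passed to the DataLoader so each batch is padded on the fly:
# loader = DataLoader(dataset, batch_size=8, collate_fn=collate_fn_padding)
```

Padding per batch in the `collate_fn` (rather than to a global maximum length in the dataset) means each batch is only as long as its longest sample, which saves memory and compute.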