Hello
I am trying to fine-tune LUKE on the DocRED dataset, which is a document-level relation extraction dataset. I see that LUKE's maximum sequence length is 512 tokens. Is there any workaround for this other than truncating the sample?
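The only alternative I have come up with so far is a sliding window over the tokenized document, so that each overlapping chunk fits in 512 tokens and relations near a chunk boundary still co-occur in some window. A rough stdlib-only sketch of what I mean (the `chunk_ids` helper and the `max_len`/`stride` values are my own illustration, not part of the LUKE API):

```python
# Sliding-window chunking sketch: split a long token-id sequence into
# overlapping windows that each fit LUKE's 512-token limit.
# `chunk_ids`, `max_len`, and `stride` are illustrative names, not LUKE API.

def chunk_ids(token_ids, max_len=512, stride=128):
    """Yield overlapping windows of at most `max_len` tokens.

    Consecutive windows overlap by `stride` tokens so that entity pairs
    spanning a window boundary still appear together in some chunk.
    """
    if max_len <= stride:
        raise ValueError("max_len must be larger than stride")
    step = max_len - stride
    for start in range(0, max(len(token_ids) - stride, 1), step):
        yield token_ids[start:start + max_len]

# Example: 1000 fake token ids -> three windows, each overlapping
# its neighbor by 128 tokens, together covering the whole document.
windows = list(chunk_ids(list(range(1000))))
```

Is this kind of chunking reasonable for document-level relation extraction, or does it break relations whose arguments are farther apart than one window?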