microsoft / DeBERTa

The implementation of DeBERTa

Pretraining deberta-v3 with a larger context length #153

Open sherlcok314159 opened 4 months ago

sherlcok314159 commented 4 months ago

Hi! I see that DeBERTa-v3 uses relative position embeddings, so it can take in a larger context than traditional BERT. Have you tried pretraining DeBERTa-v3 with a context length of 1024 or larger?
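
For context, my understanding is that the disentangled attention works on relative offsets that are mapped into a fixed number of log-spaced buckets, which is why the embedding table does not have to grow with the sequence length. A simplified scalar paraphrase of that bucketing idea (not the repo's exact code; the defaults below mirror deberta-v3-base's `position_buckets=256` and `max_position_embeddings=512`):

```python
import math

def log_bucket(rel_pos: int, bucket_size: int = 256, max_position: int = 512) -> int:
    # Nearby offsets get their own bucket; distant offsets share
    # logarithmically spaced buckets, so the relative-position embedding
    # table stays fixed even when the sequence length grows.
    mid = bucket_size // 2
    if abs(rel_pos) <= mid:
        return rel_pos  # exact bucket for close tokens
    # Log-compress everything farther than `mid` into the remaining buckets.
    log_pos = mid + math.ceil(
        math.log(abs(rel_pos) / mid) / math.log((max_position - 1) / mid) * (mid - 1)
    )
    return int(math.copysign(min(log_pos, bucket_size - 1), rel_pos))
```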

If I want to pretrain DeBERTa-v3 from scratch with a larger context length (e.g., 1024), are there any modifications I should make besides the training script?
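
To make the question concrete, here is a minimal sketch of the kind of change I have in mind, using the Hugging Face transformers DeBERTa-v2 classes (MLM is used only as a stand-in objective here; v3 was actually pretrained with replaced token detection, and 1024 is just an example value):

```python
# Sketch only: assumes HF transformers' DebertaV2 classes; values are
# illustrative, not a validated pretraining recipe.
from transformers import AutoConfig, AutoTokenizer, DebertaV2ForMaskedLM

config = AutoConfig.from_pretrained("microsoft/deberta-v3-base")
# v3 relies on relative (disentangled) attention with log-bucketed
# positions and does not add absolute position embeddings to the input
# (position_biased_input is false), so in principle only the declared
# maximum length needs to change.
config.max_position_embeddings = 1024

model = DebertaV2ForMaskedLM(config)  # fresh weights, for from-scratch pretraining
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
tokenizer.model_max_length = 1024     # so truncation allows 1024 tokens
```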

Thanks for any help!

sileod commented 2 months ago

Hi, I did a multi-task fine-tune with a 1280 context length (1680 for the small version): https://huggingface.co/tasksource/deberta-base-long-nli
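
If it helps, a minimal usage sketch, assuming the checkpoint follows the standard transformers sequence-classification API (the premise/hypothesis pair is made up):

```python
# Sketch only: label names come from the checkpoint's config and are
# assumed to be standard NLI labels.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "tasksource/deberta-base-long-nli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "The cat sat on the mat."
hypothesis = "An animal is resting."
inputs = tokenizer(premise, hypothesis, truncation=True,
                   max_length=1280, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```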

sherlcok314159 commented 1 month ago

Could you please open-source your code so I can learn from it?