Closed hyeinhyun closed 3 months ago
Hi, LiLT aims to extract language-independent layout knowledge during pre-training and then use it at fine-tuning time with a RoBERTa-like model of any language. You can directly combine lilt-only-base with a Korean RoBERTa-base for fine-tuning, without any re-pretraining.
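Conceptually, combining the two checkpoints means merging the layout-flow weights from lilt-only-base with the text-flow weights of a monolingual RoBERTa. A minimal sketch of that merge, using plain dicts in place of real state dicts — the `lilt_layout.` prefix and all key names here are illustrative assumptions, not the actual checkpoint layout:

```python
def combine_lilt_checkpoints(layout_sd, text_sd, layout_prefix="lilt_layout."):
    """Merge a layout-flow state dict (from lilt-only-base) with a
    text-flow state dict (e.g. a Korean RoBERTa-base).

    `layout_prefix` is a hypothetical marker for layout parameters;
    real checkpoints would need the prefixes they actually use.
    """
    combined = dict(text_sd)  # start from the text model's weights
    for name, value in layout_sd.items():
        if name.startswith(layout_prefix):
            combined[name] = value  # carry the layout flow over unchanged
    return combined

# Toy example with dummy "tensors" (lists stand in for real tensors).
layout = {"lilt_layout.bbox_emb.weight": [1, 2], "pretrain_head.weight": [9]}
text = {"embeddings.word_embeddings.weight": [3, 4]}
merged = combine_lilt_checkpoints(layout, text)
```

After merging, the combined weights would be loaded into a LiLT model whose text flow is configured for the Korean vocabulary, and fine-tuned directly on the downstream task.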
Hi :) I'm confused about the pre-training process when changing the language model.
I'd like to use LiLT with a Korean RoBERTa model that is already pre-trained on a Korean dataset. According to the paper, do I need to re-pretrain the Korean RoBERTa model together with the layout embedding vectors? Is that right? Also, I think I would need to re-pretrain the lilt-only-base model because of the CAI pre-training task.