Open ghost opened 4 years ago
Hello~~ I am curious whether we should append another FC layer or a dropout layer after the sequence outputs of BERT for the LayoutLM pre-training?
@sunshine9409 Can you please elaborate on that in detail?
That is, what does the head (sub-net) look like for the pre-training?
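For context, in standard BERT (whose pre-training setup LayoutLM builds on), the masked-language-modeling objective typically uses a small prediction head (dense → GELU → LayerNorm → vocab decoder) on top of the sequence outputs, not just a single FC or dropout layer. A minimal PyTorch sketch of such a head; the class name and the hidden/vocab sizes below are illustrative assumptions, not LayoutLM's actual code:

```python
import torch
import torch.nn as nn

class MLMHead(nn.Module):
    """Sketch of a BERT-style MLM prediction head (assumed, for illustration)."""

    def __init__(self, hidden_size=768, vocab_size=30522):
        super().__init__()
        # Transform: dense projection + GELU + LayerNorm, as in BERT's
        # prediction-head transform.
        self.transform = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.GELU(),
            nn.LayerNorm(hidden_size),
        )
        # Decoder projects back to the vocabulary for token prediction.
        self.decoder = nn.Linear(hidden_size, vocab_size)

    def forward(self, sequence_output):
        # sequence_output: (batch, seq_len, hidden_size) from the encoder.
        return self.decoder(self.transform(sequence_output))

head = MLMHead()
logits = head(torch.zeros(2, 16, 768))
print(tuple(logits.shape))  # (2, 16, 30522): per-token vocab logits
```

The logits are then fed to a cross-entropy loss over the masked positions; no extra dropout layer is inserted between the encoder output and this head in the standard BERT recipe.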