rtanaka-lab opened this issue 3 years ago
As far as I know, this is already a feature of BERT. That is possibly why you haven't seen it highlighted.
Thank you for your reply. I already understand that the LayoutLM model was initialized from BERT. My concern is: if LayoutLM did not use the 1D position embedding during pre-training on the IIT-CDIP dataset, wouldn't that cause the model to forget the 1D position information?
Did LayoutLM learn the 1D position embedding during pre-training? The LayoutLM paper does not describe this, but the official code contains a 1D position embedding.
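As a side note, one way to check this empirically is to inspect the embedding layers of the released checkpoint. This is a minimal sketch assuming the HuggingFace `transformers` port of LayoutLM (`LayoutLMModel` with the `microsoft/layoutlm-base-uncased` checkpoint), which mirrors the layers in the official code:

```python
# Sketch: inspect LayoutLM's embedding layers via the HuggingFace port
# (assumes `transformers` and `torch` are installed).
from transformers import LayoutLMModel

model = LayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased")

# 1D (sequential) position embedding, inherited from BERT:
print(model.embeddings.position_embeddings)    # Embedding(512, 768)

# 2D (layout) position embeddings, specific to LayoutLM:
print(model.embeddings.x_position_embeddings)  # Embedding(1024, 768)
print(model.embeddings.y_position_embeddings)  # Embedding(1024, 768)
```

The checkpoint does ship weights for `position_embeddings`, so the 1D embedding was part of the pre-trained model; this alone doesn't tell us whether those weights were updated during pre-training on IIT-CDIP or kept frozen from BERT's initialization.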