Closed: thomassajot closed this issue 2 years ago
Hi @thomassajot
This is normal.
`embeddings.position_ids` and `embeddings.position_embeddings` are there to provide information about word order. spade does not use them; instead, it uses the xy-coordinates of each word on the image (it is serializer-free).
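To make the contrast concrete, here is a minimal sketch of the two ideas: a learned sequence-position embedding (word order) versus embedding each word's quantized x/y coordinates on the page. The embedding sizes, coordinate range, and variable names below are illustrative assumptions, not the actual spade hyperparameters.

```python
import torch
import torch.nn as nn

hidden = 768

# BERT-style learned position embeddings: token position -> vector (word order).
seq_pos_emb = nn.Embedding(512, hidden)          # indexed by position 0..511
position_ids = torch.arange(6).unsqueeze(0)      # [[0, 1, 2, 3, 4, 5]]
order_vecs = seq_pos_emb(position_ids)           # shape: (1, 6, 768)

# Serializer-free alternative: embed each word's quantized x/y coordinate
# on the image instead of its index in a 1D sequence.
x_emb = nn.Embedding(1024, hidden)               # quantized x coordinate
y_emb = nn.Embedding(1024, hidden)               # quantized y coordinate
xy = torch.tensor([[[100, 40], [310, 40], [100, 90],
                    [310, 90], [100, 140], [310, 140]]])  # (1, 6, 2) coords
coord_vecs = x_emb(xy[..., 0]) + y_emb(xy[..., 1])        # shape: (1, 6, 768)
```

Either tensor can then be added to the token embeddings; the second variant needs no serialization of the words into reading order.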
However, if you want to use it, you can simply set the corresponding field in the config to

- seqPos
Also, the absence of layer 5 to layer 11 is due to the use of a small (5-layer) model. You can set

encoder_config_name: bert-base-multilingual-cased-12layers
You may need to make new config under data/model/backbones/bert-base-multilingual-cased-12layers
based on data/model/backbones/bert-base-multilingual-cased-5layers
(simply copy and change the number of layers in the config file)
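The copy-and-edit step above can be sketched as a small helper. This assumes a HuggingFace-style `config.json` with a `num_hidden_layers` field inside the backbone directory; the function name is hypothetical, not part of the repo.

```python
import json
import shutil
from pathlib import Path

def clone_backbone_config(src: str, dst: str, num_layers: int) -> None:
    """Copy a backbone config directory and change its layer count.

    Assumes a HuggingFace-style config.json containing `num_hidden_layers`.
    """
    src_dir, dst_dir = Path(src), Path(dst)
    shutil.copytree(src_dir, dst_dir)               # copy the whole directory
    cfg_path = dst_dir / "config.json"
    cfg = json.loads(cfg_path.read_text())
    cfg["num_hidden_layers"] = num_layers           # the only field to change
    cfg_path.write_text(json.dumps(cfg, indent=2))

# Intended usage with the repo's layout:
# clone_backbone_config(
#     "data/model/backbones/bert-base-multilingual-cased-5layers",
#     "data/model/backbones/bert-base-multilingual-cased-12layers",
#     num_layers=12,
# )
```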
Happy coding!
Wonseok
Great. Thank you for the quick reply.
I will code happily.
Thank you for sharing the code!
I am experiencing some issues with the model weight update:
Would you have any advice on how to resolve this issue?
I am using the following conda env: