Closed: SherryShen9 closed this issue 1 year ago
Hi @zjpbinary,
Thanks for the awesome work. I am trying to reproduce the results in the paper, but I cannot find the code for XLM-R embedding extraction. Could you please also publish the code showing how to extract the embeddings, if possible?

Hi @syl007, thanks for your question. Most of our code for extracting contextual embeddings is borrowed from HuggingFace Transformers. You can also refer to https://github.com/joker-xii/simalign/blob/master/simalign/simalign.py, lines 117–140 and 195–215, which show how to extract the contextual embedding of each word. I am quite busy at the moment, so I will release my code later.
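For anyone who finds this issue before the authors release their code, below is a minimal sketch of the kind of extraction described in the reply above: it uses HuggingFace Transformers to run XLM-R and mean-pools each word's subword hidden states into a single vector, roughly in the spirit of the linked simalign code. The model name (`xlm-roberta-base`), the layer index (8), the mean pooling, and the `word_embeddings` helper are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch: per-word contextual embeddings from XLM-R via HuggingFace Transformers.
# The model name, layer index, and mean pooling below are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base", output_hidden_states=True)
model.eval()

def word_embeddings(words, layer=8):
    """Return one vector per word by mean-pooling its subword hidden states."""
    # Tokenize a pre-split sentence so the word -> subword mapping is preserved.
    encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**encoding)
    hidden = outputs.hidden_states[layer].squeeze(0)  # (num_subwords, hidden_size)

    word_ids = encoding.word_ids(batch_index=0)  # subword position -> word index (None for special tokens)
    vectors = []
    for i in range(len(words)):
        subword_positions = [j for j, w in enumerate(word_ids) if w == i]
        vectors.append(hidden[subword_positions].mean(dim=0))
    return torch.stack(vectors)  # (num_words, hidden_size)

emb = word_embeddings("I love natural language processing .".split())
print(emb.shape)  # e.g. torch.Size([6, 768]) for xlm-roberta-base
```

Tokenizing with `is_split_into_words=True` keeps the word-to-subword mapping available via `word_ids()`, which makes per-word pooling straightforward; a different layer or pooling strategy may work better depending on the downstream task.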
Hi @zjpbinary, thanks for your reply. Following your hints, I have successfully extracted the contextual embeddings.