What is this paper about?
This paper is about using BERT embeddings to model lexical semantic change. The authors cluster contextualised token embeddings into "usage types" (which we could interpret as senses) and then define, for each time period, a probability distribution over those usage types.
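A minimal sketch of that pipeline in Python, assuming bert-base-uncased via Hugging Face transformers and K-Means from scikit-learn; the paper's exact model, layer choice, and clustering setup may differ, and subword splitting of the target word is ignored here:

```python
# Sketch: extract one contextualised vector per occurrence of a target
# word, then cluster the vectors into "usage types".
import torch
from transformers import BertTokenizer, BertModel
from sklearn.cluster import KMeans

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def embed_occurrences(sentences, target):
    """Return one contextualised vector per occurrence of `target`."""
    vectors = []
    for sent in sentences:
        enc = tokenizer(sent, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
        tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
        vectors.extend(hidden[i].numpy()
                       for i, tok in enumerate(tokens) if tok == target)
    return vectors

sentences = [
    "The prisoner paced around his cell.",
    "Each cell divides into two daughter cells.",
    "She answered her cell on the first ring.",
]
vecs = embed_occurrences(sentences, "cell")
usage_types = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vecs)
print(usage_types)  # cluster id ("usage type") for each occurrence
```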
Is it relevant to our project? If so, why and how?
Yes, because it's about how word meaning changes over time.
What could we use from this work in our project?
We could build token embeddings from our corpus, cluster them into usage types, and then examine how the resulting sense distributions change over time (see the sketch below).
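A hedged sketch of that follow-up step, with made-up cluster labels standing in for our corpus: build a distribution over usage types per time period and compare periods, e.g. with the Jensen-Shannon distance from SciPy. Note that SciPy's jensenshannon returns the JS distance (square root of the divergence):

```python
# Sketch with hypothetical data: given a usage-type label per occurrence,
# compare the per-period sense distributions.
import numpy as np
from scipy.spatial.distance import jensenshannon

def sense_distribution(labels, n_clusters):
    """Normalised histogram of usage-type labels for one time period."""
    counts = np.bincount(labels, minlength=n_clusters)
    return counts / counts.sum()

labels_1900s = np.array([0, 0, 1, 0, 2, 0])  # hypothetical cluster ids, 1900s
labels_2000s = np.array([2, 2, 1, 2, 0, 2])  # hypothetical cluster ids, 2000s

p = sense_distribution(labels_1900s, n_clusters=3)
q = sense_distribution(labels_2000s, n_clusters=3)
print("JS distance:", jensenshannon(p, q))  # larger = stronger usage change
```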
Analysing Lexical Semantic Change with Contextualised Word Representations. Mario Giulianelli, Marco Del Tredici, and Raquel Fernández. ACL 2020.
https://www.aclweb.org/anthology/2020.acl-main.365/