I've come across some papers that approach the same task as MUSE (aligning word embeddings in two languages), except that they start from a small seed dictionary of aligned word pairs and bootstrap the rest of the alignment from it.
- Artetxe et al. (2017), "Learning bilingual word embeddings with (almost) no bilingual data"
- Smith et al. (2017), "Offline bilingual word vectors, orthogonal transformations and the inverted softmax"
This could be relevant for Voynich because the seed-based task is probably much less data-hungry than the fully unsupervised one MUSE was approaching, which is meant for large-scale datasets for information retrieval systems.
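For concreteness, here's a minimal sketch of that bootstrapping loop in the spirit of Artetxe et al. (2017): fit an orthogonal map on the seed pairs via Procrustes, use it to induce a larger dictionary by nearest neighbors, and repeat. If I remember right, they report this working from seeds as small as ~25 pairs. The function names and the plain argmax induction step are my own simplifications (the paper adds frequency cutoffs and a convergence check), so treat this as an illustration rather than a faithful reimplementation.

```python
import numpy as np

def procrustes(X, Y):
    """Orthogonal W minimizing ||X @ W - Y||_F, solved via SVD of X.T @ Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def self_learn(src_emb, tgt_emb, seed_pairs, n_iter=5):
    """Bootstrap a full alignment from a small seed dictionary.

    src_emb, tgt_emb: (vocab, dim) arrays with L2-normalized rows.
    seed_pairs: list of (src_index, tgt_index) tuples.
    """
    pairs = list(seed_pairs)
    for _ in range(n_iter):
        src_idx = [i for i, _ in pairs]
        tgt_idx = [j for _, j in pairs]
        # Fit the mapping on the current dictionary...
        W = procrustes(src_emb[src_idx], tgt_emb[tgt_idx])
        # ...then re-induce the dictionary: nearest target neighbor
        # (cosine similarity, since rows are unit-length) per source word.
        sims = (src_emb @ W) @ tgt_emb.T
        pairs = list(enumerate(sims.argmax(axis=1)))
    return W, pairs
```

The nice property for a low-data setting like Voynich is that the only supervision is the seed list; everything else comes from the geometry of the two embedding spaces.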