A single model that parses Universal Dependencies across 75 languages. Given a sentence, it jointly predicts part-of-speech tags, morphological features, lemmas, and dependency trees.
Is there an example config somewhere showing how to fine-tune on a specific treebank, starting from the BERT weights saved after fine-tuning on all UD treebanks combined (i.e., the released pretrained models)? This would correspond to the UDify+Lang experiments in Table 2 of the paper.
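In the absence of an official recipe, one possible sketch: UDify is built on AllenNLP, whose `fine-tune` subcommand continues training a saved model archive under a new configuration. The archive name, per-treebank config path, and output directory below are assumptions for illustration, not files shipped with the repo.

```shell
# Unofficial sketch — archive name, config path, and output dir are assumptions.
# `allennlp fine-tune` loads the saved multilingual model and continues training
# it on whatever treebank the supplied config points at.
allennlp fine-tune \
    -m udify-model.tar.gz \
    -c config/udify_finetune_en_ewt.json \
    -s logs/udify-en-ewt \
    --include-package udify
```

The per-treebank config would differ from the multilingual one mainly in its `train_data_path` and `validation_data_path`, pointed at a single treebank's CoNLL-U files rather than the concatenation of all UD treebanks.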