Hi,
First of all, thank you very much for sharing this excellent dependency parser. I am using it in my MSc thesis on improving dependency parsing of Norwegian with pre-trained word embeddings, and I have a question about lstm-parser's handling of pre-trained embeddings that I could not find the answer to in your article.
As I understand it, there are several approaches to using pre-trained word embeddings in dependency parsers. UDPipe/Parsito, for example, can either use the embeddings directly as fixed feature vectors for each word in the vocabulary, or use them to initialise the internal form embeddings that the parser then continues to train. This raises my question: how does lstm-parser utilise pre-trained embeddings internally?
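To make the distinction concrete, here is a minimal sketch of the two strategies described above. This is not lstm-parser's or UDPipe's actual code; the vocabulary, dimensions, and the single hand-written SGD step are all illustrative assumptions.

```python
import numpy as np

# Hypothetical pre-trained vectors: word -> 50-dim vector (illustration only).
rng = np.random.default_rng(0)
pretrained = {"hus": rng.standard_normal(50), "bil": rng.standard_normal(50)}
vocab = ["<unk>", "hus", "bil"]
dim = 50

# Strategy 1: use the pre-trained vectors directly as fixed feature vectors.
# The lookup table is built once and never updated during training.
fixed_table = np.stack([pretrained.get(w, np.zeros(dim)) for w in vocab])

# Strategy 2: initialise a trainable embedding table from the pre-trained
# vectors; the parser then fine-tunes these rows by backpropagation.
trainable_table = fixed_table.copy()
learning_rate = 0.1
fake_gradient = np.ones_like(trainable_table)  # stand-in for a real gradient
trainable_table -= learning_rate * fake_gradient  # one illustrative SGD step

# Under strategy 1 the row for "hus" stays identical to its pre-trained
# vector; under strategy 2 it drifts away from it during training.
```

The practical difference is that the fixed table cannot adapt to the treebank, while the fine-tuned table can, at the cost of drifting away from the pre-trained space for rare words.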
Kind regards,
Henrik H. Løvold
LTG group at Uni. of Oslo