Hyperparticle / LemmaTag

A neural network that jointly part-of-speech tags and lemmatizes sentences, boosting accuracy for morphologically rich languages (Czech, Arabic, etc.).
https://arxiv.org/abs/1808.03703
MIT License

Clarification needed on RNN decoder #3

Closed · HaukurPall closed this issue 4 days ago

HaukurPall commented 3 years ago

I very much enjoyed reading your paper, and I am trying similar things for Icelandic PoS tagging and lemmatization.

The performance of my model for lemmatization is not what I would expect. I tried implementing an RNN decoder for lemmatization similar to the one you describe in the paper (in PyTorch), but putting the pieces together from the referenced papers (Bahdanau et al., 2014 and Luong et al., 2015) proved difficult. I therefore turned to the code. Thank you for releasing it!

I am not that used to TensorFlow, but from what I understand, the input to the RNN decoder is the word_rnn_outputs (O^w_i in the paper), tag_feats (T_i), word_cle_states, and attention over word_cle_outputs (e^c_{i,...}), along with the previously predicted character, which I assume TF handles for you, as it is not clear from the code.
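
To make sure I am reading this correctly, here is a minimal PyTorch sketch of how I would assemble the decoder input from those pieces; all names and dimensions below are my own placeholders, not taken from your code:

```python
import torch

# Placeholder tensors standing in for the quantities from the paper;
# the names and dimensions are my own, not the TF code's.
batch, word_dim, tag_dim, char_dim, emb_dim = 2, 256, 128, 128, 64

word_rnn_output = torch.randn(batch, word_dim)  # O^w_i
tag_feats       = torch.randn(batch, tag_dim)   # T_i
word_cle_state  = torch.randn(batch, char_dim)  # last char-RNN state (see question 1)
attn_context    = torch.randn(batch, char_dim)  # attention over e^c_{i,...}
prev_char_emb   = torch.randn(batch, emb_dim)   # previously predicted character

# My reading: concatenate everything into a single input for one decoder step.
decoder_input = torch.cat(
    [word_rnn_output, tag_feats, word_cle_state, attn_context, prev_char_emb],
    dim=-1,
)
print(decoder_input.shape)  # torch.Size([2, 704])
```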

There are a few things that are still not clear to me, even after reading the code.

  1. What does word_cle_states stand for? According to the paper it should be e^w_i (the last state of the character RNN summed with a word embedding), since that is part of the input to the RNN decoder, but from the code it seems to be simply the last state of the character RNN. Is this correct?
  2. The initial hidden state of the RNN decoder is said to be O^w_i in the paper, but it also seems to be this mysterious word_cle_states. Am I correct in understanding that the initial hidden state of the RNN decoder is the last state of the character RNN? That could also make sense.
  3. Is the previous hidden state of the RNN decoder used to calculate the multiplicative attention (called "dot" in Luong et al.), or is the attention computed after calculating the next hidden state and then fed to the decoder_layer? (See the sketch of both orderings after this list.)
  4. Is the output of the RNN decoder simply mapped linearly to a dimension of the correct size to predict characters (decoder_layer)?
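
To make questions 3 and 4 concrete, here is a minimal PyTorch sketch of the two orderings I can imagine; again, every name and dimension is my own assumption rather than something from your code:

```python
import torch
import torch.nn as nn

batch, src_len, hidden, n_chars = 2, 7, 128, 100

enc_states = torch.randn(batch, src_len, hidden)  # char-RNN outputs to attend over
x_t        = torch.randn(batch, hidden)           # decoder input at step t
h_prev     = torch.randn(batch, hidden)           # previous decoder hidden state

cell_a        = nn.GRUCell(2 * hidden, hidden)    # input = [x_t; context]
cell_b        = nn.GRUCell(hidden, hidden)
decoder_layer = nn.Linear(2 * hidden, n_chars)    # question 4: just a linear map?

def dot_attention(query, keys):
    # Luong et al.'s multiplicative ("dot") score: query . key at each position.
    scores = torch.bmm(keys, query.unsqueeze(-1)).squeeze(-1)  # (batch, src_len)
    weights = torch.softmax(scores, dim=-1)
    return torch.bmm(weights.unsqueeze(1), keys).squeeze(1)    # (batch, hidden)

# Ordering A: attend with the previous state, feed the context into the RNN step.
context_a = dot_attention(h_prev, enc_states)
h_a = cell_a(torch.cat([x_t, context_a], dim=-1), h_prev)
logits_a = decoder_layer(torch.cat([h_a, context_a], dim=-1))

# Ordering B (as in Luong et al.): step the RNN first, attend with the new
# state, then combine state and context before the output projection.
h_b = cell_b(x_t, h_prev)
context_b = dot_attention(h_b, enc_states)
logits_b = decoder_layer(torch.cat([h_b, context_b], dim=-1))
```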

I would be very grateful for any answer you have!

Hyperparticle commented 3 years ago

Sorry for the late reply. I might have to get back to you on the specific questions, as it's been a long time since I touched this code.

As for lemmatization, is there a specific reason to use RNN decoding? I have another repo that does lemmatization using the edit script method, which is nearly as good in terms of accuracy and decodes faster. See here: https://github.com/Hyperparticle/neural-lemmatizer-allennlp
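
To illustrate the idea, here is a toy sketch of suffix-based edit scripts (not the actual code in that repo): the model classifies among a finite set of rewrite rules instead of decoding the lemma character by character, which is what makes it fast.

```python
def make_suffix_rule(form: str, lemma: str) -> tuple[int, str]:
    """Return (chars to strip from the end of form, suffix to append)."""
    # Longest common prefix between form and lemma.
    i = 0
    while i < min(len(form), len(lemma)) and form[i] == lemma[i]:
        i += 1
    return len(form) - i, lemma[i:]

def apply_suffix_rule(form: str, rule: tuple[int, str]) -> str:
    strip, append = rule
    return (form[:-strip] if strip else form) + append

rule = make_suffix_rule("walking", "walk")  # (3, "")
assert apply_suffix_rule("walking", rule) == "walk"
rule = make_suffix_rule("mice", "mouse")    # (3, "ouse")
assert apply_suffix_rule("mice", rule) == "mouse"
# A classifier then picks one of the (finitely many) rules seen in training.
```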

This code can be easily modified to also output POS tags, as long as you understand the basics of allennlp.

foxik commented 3 years ago

@Hyperparticle As for the need for RNN decoding: we have been evaluating the lemmatizer on noisy user data (with typos, missing diacritics, and such), and in such a context the RNN decoder has an advantage (if you train the edit-rule-based lemmatizer on noisy data, the number of rules grows quickly).

But according to our measurements, it still performs better to first run GEC (grammatical error correction) on the noisy input and only then run the rule-based lemmatizer than to simply train a lemmatizer on noisy input, so we still stick to the rule-based one :-)

HaukurPall commented 3 years ago

I have tried using a rule-based approach for Icelandic (Nefnir, https://www.aclweb.org/anthology/W19-6133.pdf). The RNN decoder indeed performs slightly better, but the errors it makes are worse: some lemmas from the RNN are gibberish, whilst the errors from the rule-based one are more understandable. Decoding time is not an issue in my current iteration.

One of the reasons I prefer an RNN decoder is that context can be incorporated more easily. One class of errors a rule-based lemmatizer cannot be expected to handle (without some clever preprocessing) is split compounds in Icelandic: "Stjórnunar- og skipulagsfundur" -> "Management and organizational meeting", where "fundur" -> "meeting" is omitted in the first word. The correct lemma is "stjórnunarfundur". To get this class of errors right, some context is required. This might be considered an edge case and handled by other means, but I would at least like to try to get an RNN decoder working that could resolve these kinds of issues.

On @foxik's note, an RNN decoder might also work better on noisy input.