Unipisa / diaparser

Direct Attentive Dependency Parser
MIT License

Fine-tuning #5

Open ArijRB opened 3 years ago

ArijRB commented 3 years ago

Hello, thank you for sharing your code. Have you tried fine-tuning the BERT-like model during training? If I understood correctly, you don't use the tags when using the BERT embeddings; did you try using both?

Also, there are two minor changes needed for the char_lstm.py file to work: at line 42, replace n_embed with n_word_embed, and return None together with embed to avoid changing the training code.
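A minimal, self-contained sketch of one possible reading of the suggestion above (this is not the actual diaparser char_lstm.py; the class and parameter names here are illustrative assumptions). The idea would be that, if the training code unpacks two values from the embedding module, returning (None, embed) lets the module be swapped in without touching that training code:

```python
import torch
import torch.nn as nn


class CharLSTMSketch(nn.Module):
    """Illustrative character-level LSTM; names and sizes are assumptions."""

    def __init__(self, n_chars=100, n_char_embed=50, n_word_embed=100):
        super().__init__()
        self.embed = nn.Embedding(n_chars, n_char_embed)
        # Hypothetical counterpart of the "n_embed -> n_word_embed" fix:
        # size the LSTM output from n_word_embed, not the char embedding size.
        self.lstm = nn.LSTM(n_char_embed, n_word_embed // 2,
                            batch_first=True, bidirectional=True)

    def forward(self, chars):
        # chars: [batch_size, word_len] of character indices for one word each
        x = self.embed(chars)
        _, (h, _) = self.lstm(x)
        # Concatenate the final states of both directions -> [batch, n_word_embed]
        embed = torch.cat((h[-2], h[-1]), dim=-1)
        # Return a pair so a caller that expects two values keeps working.
        return None, embed


# Usage: the caller can keep unpacking two values unchanged.
model = CharLSTMSketch()
chars = torch.randint(0, 100, (8, 12))  # batch of 8 "words", 12 chars each
_, word_embed = model(chars)
print(word_embed.shape)                  # torch.Size([8, 100])
```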

Thank you in advance.

MustafaCeyhan commented 1 year ago

Hi ArijRB,

What exactly do you mean by "return None with embed to avoid changing the training code."?

LuceleneL commented 7 months ago

I ran into the same issue while trying to do training.

I did change the variable at line 42 of char_lstm.py from n_embed to n_word_embed, but I didn't understand what "return None with embed" means. Can anyone elaborate a little more on it?

Thanks in advance.