fani-lab / OpeNTF

Neural machine learning methods for the Team Formation problem.

2023-IEEE Access-A Smaller and Better Word Embedding for Neural #238

Open thangk opened 3 months ago

thangk commented 3 months ago

Link: IEEE Access

Main problem

Traditional methods don't account for the relations between word embeddings, which leaves room for inaccuracy in translation results.

Proposed method

The paper's authors aim to fix that problem by proposing a method that introduces two key components: relation embedding and shared embedding. They claim these components are key to improving results, especially on low-resource tasks.
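To make the "shared embedding" idea concrete, here is a minimal sketch of how sharing can shrink an embedding table. This is a hypothetical illustration of the mechanism, not the paper's exact formulation or its reported 15% figure: each word vector is assumed to be the concatenation of a shared component, looked up in a small table reused across many words, and a word-specific residual.

```python
# Hypothetical parameter-count comparison (assumed scheme, not the
# paper's actual architecture or numbers).

def full_embedding_params(vocab_size, dim):
    """Standard embedding: one independent vector per word."""
    return vocab_size * dim

def shared_embedding_params(vocab_size, dim, shared_dim, num_shared):
    """Shared table of num_shared vectors (shared_dim each), plus a
    word-specific residual of size dim - shared_dim per word."""
    return num_shared * shared_dim + vocab_size * (dim - shared_dim)

full = full_embedding_params(30_000, 512)                   # 15,360,000
shared = shared_embedding_params(30_000, 512, 256, 1_000)   # 7,936,000
print(f"reduction: {1 - shared / full:.1%}")                # → reduction: 48.3%
```

The numbers here (vocabulary 30k, dimension 512, etc.) are made up for illustration; the point is only that reusing a shared component across words trades per-word parameters for a small shared table.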

My Summary

In this paper, the researchers propose their own word embedding method for Neural Machine Translation (NMT) systems. A key difference between this paper's method and others is that it retains the "knowledge of the association between words to the training process", which improves BLEU scores on several low-resource datasets such as WMT'14 English->German and Global Voices v2018q4 Spanish->Czech (i.e., 15k sentence pairs). As a "bonus", the proposed method also yields a smaller model (as much as 15% fewer parameters) than the baselines. The researchers claim the method works across various NMT systems; however, it has yet to be tested on other NLP tasks such as dialogue generation and question answering, which is left for future work.
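Since the results above are reported in BLEU, a simplified sentence-level version of the metric can be sketched as follows. This is a minimal illustration (with add-one smoothing), not the tokenized corpus-level BLEU used in the paper's experiments:

```python
from collections import Counter
import math

def bleu(hypothesis, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of smoothed
    modified n-gram precisions, scaled by a brevity penalty."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped matches
        total = max(sum(hyp_ngrams.values()), 1)
        precisions.append((overlap + 1) / (total + 1))     # add-one smoothing
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty punishes hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * geo_mean
```

A perfect match scores 1.0, and a short or partial hypothesis scores strictly between 0 and 1. Published scores such as those in this paper are produced with standardized tooling rather than a hand-rolled function like this.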

Datasets

- WMT14 English-German
- Global Voices v2018q4 Spanish-Czech
- WMT14 English-French
- Russian-Spanish