fastnlp / TENER

Codes for "TENER: Adapting Transformer Encoder for Named Entity Recognition"

For the unigram and bigram embeddings for the Chinese datasets, what if I want to train my own embeddings? #22

Open marcusau opened 4 years ago

marcusau commented 4 years ago

Hi,

Thanks for your amazing work.

For the following unigram and bigram embeddings, if I want to try my own set of .vec files, what should I do?

For the Chinese datasets, you can download the pretrained unigram and bigram embeddings from Baidu Cloud: 'gigaword_chn.all.a2b.uni.iter50.vec' and 'gigaword_chn.all.a2b.bi.iter50.vec'. Then replace the embedding paths in train_tener_cn.py.

Thanks a lot,

Marcus

yhcc commented 4 years ago

Thanks for your attention. You can train your own word vectors with the word2vec or GloVe algorithm. Their original implementations are designed for English and use whitespace as the token separator, so you need to insert spaces between the Chinese characters or bigrams first. The wiki corpus works well for training the vectors. For a sentence like "复旦大学", the unigram sequence is ["复", "旦", "大", "学"] and the bigram sequence is ["复旦", "旦大", "大学", "学\<EOS>"]. In my experience, the larger the word vector dimension, the better the performance (300d > 100d > 50d). Since Chinese corpora are not large, training for 5 epochs usually takes less than an hour.
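
As an illustration (this is a minimal sketch of my own, not code from the repository), here is how you could build unigram/bigram corpora from raw Chinese text and train the vectors with gensim's word2vec; the corpus and output file names are placeholders:

```python
# Minimal sketch: prepare unigram/bigram corpora and train word2vec vectors.
# Requires gensim >= 4.0; file names below are hypothetical.
from gensim.models import Word2Vec

def to_unigrams(sentence):
    # "复旦大学" -> ["复", "旦", "大", "学"]
    return list(sentence)

def to_bigrams(sentence):
    # "复旦大学" -> ["复旦", "旦大", "大学", "学<EOS>"]
    chars = list(sentence) + ["<EOS>"]
    return [chars[i] + chars[i + 1] for i in range(len(chars) - 1)]

# One raw sentence per line, e.g. dumped from a wiki corpus (placeholder path).
with open("corpus.txt", encoding="utf-8") as f:
    sentences = [line.strip() for line in f if line.strip()]

uni_corpus = [to_unigrams(s) for s in sentences]
bi_corpus = [to_bigrams(s) for s in sentences]

# 300-dimensional vectors, 5 epochs, as suggested above.
uni_model = Word2Vec(sentences=uni_corpus, vector_size=300, window=5, min_count=1, epochs=5)
bi_model = Word2Vec(sentences=bi_corpus, vector_size=300, window=5, min_count=1, epochs=5)

# Save in the plain-text word2vec format that the .vec files use.
uni_model.wv.save_word2vec_format("my_uni.300d.vec", binary=False)
bi_model.wv.save_word2vec_format("my_bi.300d.vec", binary=False)
```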
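
Once you have the new .vec files, point the embedding paths in train_tener_cn.py at them. Assuming the script loads the vectors through fastNLP's StaticEmbedding (the variable names here are approximate, so check the script itself), the change looks roughly like this:

```python
from fastNLP import Vocabulary
from fastNLP.embeddings import StaticEmbedding

# Toy vocabulary just to make the call runnable; in train_tener_cn.py the
# vocabulary comes from the data bundle built over your dataset.
char_vocab = Vocabulary()
char_vocab.add_word_lst(list("复旦大学"))

# Replace the original gigaword .vec path with your own file.
unigram_embed = StaticEmbedding(char_vocab, model_dir_or_name="my_uni.300d.vec")
```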