Closed — zhangdongxu closed this issue 8 years ago
I achieved this in three steps:

1. Run preprocess.py and print the vocabulary to get the word2id mapping.
2. Save the pretrained word-vector matrix to an `.npz` file under the keys `"W_0_enc_approx_embdr"` and `"W_0_dec_approx_embdr"`.
3. Call `lm_model.load('data/wordvector.npz')` in train.py.
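A minimal sketch of step 2, assuming you already have the `word2id` mapping from preprocess.py and a `pretrained` dict of word vectors (both shown here as toy stand-ins); the parameter names match GroundHog's encoder/decoder embedders:

```python
import numpy as np

# Toy stand-ins: word2id would come from preprocess.py's vocabulary,
# and pretrained from your word-vector model (e.g. word2vec).
word2id = {"<eos>": 0, "the": 1, "cat": 2}
emb_dim = 4
rng = np.random.RandomState(0)
pretrained = {"the": rng.randn(emb_dim), "cat": rng.randn(emb_dim)}

# Build the embedding matrix in vocabulary-id order; words without a
# pretrained vector keep a small random initialization.
W = 0.01 * rng.randn(len(word2id), emb_dim).astype("float32")
for word, idx in word2id.items():
    if word in pretrained:
        W[idx] = pretrained[word]

# Save under the parameter names that lm_model.load() expects for the
# encoder and decoder embedding matrices.
np.savez("wordvector.npz",
         W_0_enc_approx_embdr=W,
         W_0_dec_approx_embdr=W)
```

Sharing one matrix `W` for both keys initializes the encoder and decoder embedders identically; you could also save two different matrices if your source and target vocabularies differ.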
For example, initializing approx_embedder with pretrained word vectors. Any suggestions?