facebookresearch / InferSent

InferSent sentence embeddings

Train on gpu and different emb size #64

Closed: hanhanzhai closed this issue 6 years ago

hanhanzhai commented 6 years ago

Sorry, I'm new to GPU training. 1) In the updated version, where should I configure the model to train on GPU? 2) If I would like a smaller embedding size, say 2048, is the parameter enc_lstm_dim in params_model = {'bsize': 64, 'word_emb_dim': 300, 'enc_lstm_dim': 2048, 'pool_type': 'max', 'dpout_model': 0.0, 'version': 1} the right place to change? Is 1024 (half of the desired embedding size) the correct value if I want 2048-dimensional embeddings? Thank you very much!
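
For context on 2): the InferSent encoder is a bidirectional LSTM whose forward and backward hidden states are concatenated before max pooling, so the sentence-embedding size is 2 * enc_lstm_dim. A minimal sketch of how the config controls that dimension, assuming the InferSent class from the repository's models.py:

```python
from models import InferSent  # encoder class shipped with the repo

# enc_lstm_dim = 1024 gives 2 * 1024 = 2048-dimensional sentence embeddings;
# the default 2048 gives the usual 4096-dimensional InferSent vectors.
params_model = {'bsize': 64, 'word_emb_dim': 300, 'enc_lstm_dim': 1024,
                'pool_type': 'max', 'dpout_model': 0.0, 'version': 1}
model = InferSent(params_model)

# Note: a checkpoint trained with enc_lstm_dim=2048 will not load into this
# 1024-unit encoder; it has to be trained from scratch (see the reply below).
```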

aconneau commented 6 years ago

Hi,

sorry for the late reply. If you want the model to run on GPU, you just have to call model = model.cuda(). The encoder dimensions are hard-coded, so once a model is trained you can't turn a 2048-unit LSTM into a 1024-unit one; you'll have to train a new model.
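
A minimal sketch of the GPU path described above, following the encoder-loading steps from the repository README (the checkpoint and word-vector paths are the README placeholders; adjust them to your setup):

```python
import torch
from models import InferSent

params_model = {'bsize': 64, 'word_emb_dim': 300, 'enc_lstm_dim': 2048,
                'pool_type': 'max', 'dpout_model': 0.0, 'version': 1}
model = InferSent(params_model)
model.load_state_dict(torch.load('encoder/infersent1.pkl'))  # pretrained 2048-unit encoder
model = model.cuda()                                          # run the encoder on GPU
model.set_w2v_path('GloVe/glove.840B.300d.txt')               # GloVe vectors for version 1

sentences = ['A man is playing a guitar.']
model.build_vocab(sentences, tokenize=True)
embeddings = model.encode(sentences, tokenize=True)
print(embeddings.shape)  # (1, 4096), i.e. 2 * enc_lstm_dim
```

For a smaller encoder, the training script train_nli.py would be the place to change the dimension; assuming it exposes an --enc_lstm_dim flag as in current versions of the repository, passing 1024 there should produce 2048-dimensional sentence embeddings.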

Thanks