Closed: haowenli closed this issue 7 years ago
Without a GPU, training (and decoding) will be very slow, even on a fast CPU with e.g. 32 cores. I would suggest using some of the toy (algorithmic) problems. If you really want to try machine translation, use transformer_tiny and the TranslateEndeWmt8k problem (which used to be called wmt_ende_tokens_8k in older t2t versions), which should train reasonably fast (but with a lower final BLEU, of course). If TensorFlow does not find a GPU, it automatically falls back to the CPU.
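A CPU-friendly training run along those lines could look roughly like the sketch below. This is an assumed invocation, not taken from this thread: the flag names (`--problem`, `--model`, `--hparams_set`, `--data_dir`, `--output_dir`, `--generate_data`) follow common t2t-trainer usage, but they have changed across tensor2tensor releases (e.g. older versions used `--problems`, plural), so check `t2t-trainer --help` for your installed version.

```shell
# Hypothetical sketch of a small CPU training run with tensor2tensor.
# Paths and flag spellings are assumptions; verify against `t2t-trainer --help`.
t2t-trainer \
  --generate_data \
  --data_dir=~/t2t_data \
  --output_dir=~/t2t_train/ende_tiny \
  --problem=translate_ende_wmt8k \
  --model=transformer \
  --hparams_set=transformer_tiny
```

The key point is the `--hparams_set=transformer_tiny` setting, which selects a much smaller model than the default `transformer_base`, making CPU-only training feasible.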
I am using the transformer model from the example in the README. I don't have an NVIDIA card in my computer. How can I use HParams to train the transformer model from the README? HParams (by model):