sherjilozair / char-rnn-tensorflow

Multi-layer Recurrent Neural Networks (LSTM, RNN) for character-level language models in Python using TensorFlow
MIT License

ENH: the allocation to CPU is unneeded #80

Closed: madrugado closed this pull request 7 years ago

madrugado commented 7 years ago

TensorFlow can place operations on appropriate devices without this being set explicitly; the embedding op has been implemented on the GPU since 0.9, so the explicit CPU placement can be safely omitted. Placing it on the GPU also gives a performance boost of about 5-10% in my experiments.
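
For context, here is a minimal sketch of the kind of change being discussed, written in TF 1.x-style graph code to match the era of the thread. The variable names (`vocab_size`, `rnn_size`, `input_data`) and sizes are illustrative, not taken verbatim from the repository:

```python
import tensorflow as tf

vocab_size, rnn_size = 65, 128                      # example sizes, not from the repo
input_data = tf.placeholder(tf.int32, [None, None])  # [batch, seq_len] of character ids

# Before (what the PR removes): the embedding was pinned to the CPU.
# with tf.device("/cpu:0"):
#     embedding = tf.get_variable("embedding", [vocab_size, rnn_size])
#     inputs = tf.nn.embedding_lookup(embedding, input_data)

# After: with no explicit device scope, TensorFlow's placer is free to put
# the embedding lookup on the GPU (supported for this op since 0.9),
# where the rest of the model already runs.
embedding = tf.get_variable("embedding", [vocab_size, rnn_size])
inputs = tf.nn.embedding_lookup(embedding, input_data)
```

Co-locating the lookup with the recurrent layers avoids copying the looked-up embeddings from host to device every step, which is consistent with the modest speedup reported above.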

hugovk commented 7 years ago

@madrugado There's a merge conflict in this PR.

wichtounet commented 7 years ago

I have tested this on a project derived from this code and it works perfectly fine.

ubergarm commented 7 years ago

@madrugado @hugovk @wichtounet Thanks guys, I also tested this, and training runs on my GPU just fine without the explicit device call.