weinman / cnn_lstm_ctc_ocr

Tensorflow-based CNN+LSTM trained with CTC-loss for OCR
GNU General Public License v3.0

Using multiple GPUs as the train_device #51

Closed sahilbandar closed 4 years ago

sahilbandar commented 5 years ago

I just need a small bit of help with training the model on multiple GPUs. With the available --train_device option I am only able to specify one device. How can I specify both GPUs as the training device?

weinman commented 5 years ago

What revision are you using? The --train_device option was removed in the latest version in favor of --num_gpus.
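For example, with the current code the flag would be passed along these lines (a hedged sketch; the exact script path and the other flags you need may differ in your checkout):

```
python src/train.py --num_gpus=2 [other options]
```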

NB: You can't fine-tune when you start a model with multiple GPUs, because the tf.train.Scaffold doesn't work in that configuration.

sahilbandar commented 5 years ago

Yes, I am using the old source.

weinman commented 5 years ago

You should pull the latest master (or the tf-1.10 tag) to get multi-GPU training functionality. The previous version doesn't support it; even if you make multiple GPUs visible to CUDA, I don't believe the training will be distributed across them. We now use tf.distribute.MirroredStrategy for that.
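For reference, here is a minimal sketch (not the repository's actual code) of how MirroredStrategy is typically attached to a tf.estimator training pipeline; model_fn, model_dir, and the num_gpus wiring are placeholder assumptions, and on older 1.x releases the class lives under tf.contrib.distribute instead of tf.distribute.

```python
# Minimal sketch of multi-GPU training via MirroredStrategy with a
# tf.estimator model. model_fn/model_dir/num_gpus are placeholders.
import tensorflow as tf

def build_estimator(model_fn, model_dir, num_gpus):
    """Return an Estimator that mirrors training across `num_gpus` GPUs."""
    if num_gpus > 1:
        # Replicates the model on each listed GPU and aggregates gradients.
        strategy = tf.distribute.MirroredStrategy(
            devices=['/gpu:%d' % i for i in range(num_gpus)])
    else:
        strategy = None  # single-device training, no distribution

    config = tf.estimator.RunConfig(train_distribute=strategy)
    return tf.estimator.Estimator(model_fn=model_fn,
                                  model_dir=model_dir,
                                  config=config)

# Usage (hypothetical names):
# estimator = build_estimator(model_fn, 'checkpoints/', num_gpus=2)
# estimator.train(input_fn)
```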