tensorflow / tensor2tensor

Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
Apache License 2.0

Question: Does t2t-decoder support multiple-GPU decoding? #905

Open vergilus opened 6 years ago

vergilus commented 6 years ago

Description

I see this in the tutorial:

```shell
t2t-decoder \
  --data_dir=$DATA_DIR \
  --problem=$PROBLEM \
  --model=$MODEL \
  --hparams_set=$HPARAMS \
  --output_dir=$TRAIN_DIR \
  --decode_hparams="beam_size=$BEAM_SIZE,alpha=$ALPHA" \
  --decode_from_file=$DECODE_FILE \
  --decode_to_file=translation.en
```

Now I'm trying to decode a translation experiment with about 2 million sentences, so it seems more appropriate to decode on multiple GPUs. I searched the t2t-decoder FLAG settings, but I didn't find a worker_gpu parameter. Can anyone give me a hint?

libeineu commented 6 years ago

I suggest splitting your input sentences into several files so that you can decode one file per GPU; that way you get the decoded results simultaneously, in parallel.
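A minimal sketch of that workaround, assuming a machine with 2 GPUs and GNU `split` available: shard the input file, then launch one `t2t-decoder` process per shard, pinning each to its own GPU with `CUDA_VISIBLE_DEVICES`. The file names and GPU count are placeholders, and the decoder commands are printed rather than executed here, since the real flags come from your own training setup.

```shell
#!/usr/bin/env bash
set -e

DECODE_FILE=input.txt   # hypothetical input file name
NUM_GPUS=2              # adjust to your machine

# Stand-in for the real multi-million-sentence input.
printf 'sentence %d\n' 1 2 3 4 > "$DECODE_FILE"

# Split into NUM_GPUS roughly equal line-based shards:
# input.txt.part00, input.txt.part01, ...
split -d -n l/$NUM_GPUS "$DECODE_FILE" "$DECODE_FILE.part"

# One decoder per shard, each pinned to its own GPU.
# Printed (not run) here; drop the echo to actually launch them.
i=0
for shard in "$DECODE_FILE".part*; do
  echo "CUDA_VISIBLE_DEVICES=$i t2t-decoder \
    --data_dir=\$DATA_DIR --problem=\$PROBLEM --model=\$MODEL \
    --hparams_set=\$HPARAMS --output_dir=\$TRAIN_DIR \
    --decode_from_file=$shard --decode_to_file=$shard.out &"
  i=$((i+1))
done
echo "wait  # then concatenate the \$shard.out files in order"
```

After all processes finish, concatenating the per-shard outputs in shard order reproduces the translation of the original file.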