I see this in the tutorial:
t2t-decoder \
  --data_dir=$DATA_DIR \
  --problem=$PROBLEM \
  --model=$MODEL \
  --hparams_set=$HPARAMS \
  --output_dir=$TRAIN_DIR \
  --decode_hparams="beam_size=$BEAM_SIZE,alpha=$ALPHA" \
  --decode_from_file=$DECODE_FILE \
  --decode_to_file=translation.en
Now I'm trying to decode a large translation experiment, around 2 million sentences, so I think it would be more appropriate to decode on multiple GPUs.
I searched the t2t-decoder FLAG settings, but I didn't find a worker_gpu parameter.
Can anyone give me a hint?
I suggest splitting your input sentences into several pieces so that you can decode one file per GPU. That way you get the decoded results in parallel.
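A minimal sketch of that approach, assuming 4 GPUs and an input file named input.src (both the GPU count and the file names are assumptions, and the environment variables are the same ones used in the tutorial command above). Each shard is decoded by a separate t2t-decoder process pinned to one GPU via CUDA_VISIBLE_DEVICES, and the outputs are concatenated back in shard order at the end:

```shell
#!/bin/sh
# Number of GPUs / shards (assumption: adjust to your machine)
N=4

# Split input.src into N line-based shards: shard.00, shard.01, ...
# (-n l/$N splits by line count without breaking lines; GNU coreutils)
split -n l/$N -d input.src shard.

# Launch one decoder per GPU in the background
for i in $(seq 0 $((N - 1))); do
  CUDA_VISIBLE_DEVICES=$i t2t-decoder \
    --data_dir=$DATA_DIR \
    --problem=$PROBLEM \
    --model=$MODEL \
    --hparams_set=$HPARAMS \
    --output_dir=$TRAIN_DIR \
    --decode_hparams="beam_size=$BEAM_SIZE,alpha=$ALPHA" \
    --decode_from_file=shard.0$i \
    --decode_to_file=translation.0$i &
done
wait  # block until all decoders finish

# Reassemble the translations in the original order
cat translation.0* > translation.en
```

Because the shards are numbered and concatenated with a sorted glob, the final translation.en keeps the same line order as input.src.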