Open 7DaeBum opened 5 years ago
Why doesn't t2t training use all GPU memory? `nvidia-smi` reports low GPU usage.
Training command
t2t-trainer \
  --t2t_usr_dir=$USR_DIR \
  --data_dir=$DATA_DIR \
  --problem=neural_lemmatizer \
  --model=transformer \
  --hparams_set=transformer_tiny \
  --output_dir=$TRAIN_DIR \
  --train_steps=10000 \
  --eval_steps=100 \
  --worker_gpu=2
OS: Ubuntu 16.04

$ pip freeze | grep tensor
mesh-tensorflow==0.0.5
tensor2tensor==1.14.0
tensorboard==1.14.0
tensorflow-datasets==1.2.0
tensorflow-estimator==1.14.0
tensorflow-gan==1.0.0.dev0
tensorflow-gpu==1.14.0
tensorflow-metadata==0.14.0
tensorflow-probability==0.7.0

$ python -V
Python 2.7.12
I have the same problem, and I used 4 Titan Xp GPUs.
I also have the same issue, with a single 1080 Ti.
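Not a definitive fix, but a sketch of what I'd check first. Note that `nvidia-smi` reports two different things: memory in use and GPU-Util (compute). TensorFlow 1.x normally pre-allocates most of each card's memory up front, and tensor2tensor caps that allocation with a `--worker_gpu_memory_fraction` flag (present in t2t 1.14, default 0.95). If memory looks low, try raising that fraction; if it's GPU-Util that's low, a tiny model like `transformer_tiny` may simply not saturate the card, and no memory flag will change that. The command below reuses the reporter's own flags:

```shell
# Watch memory usage and GPU-Util side by side while training runs.
watch -n 1 nvidia-smi

# Same training command as above, with the memory-fraction flag made
# explicit (assumption: t2t 1.14's --worker_gpu_memory_fraction).
t2t-trainer \
  --t2t_usr_dir=$USR_DIR \
  --data_dir=$DATA_DIR \
  --problem=neural_lemmatizer \
  --model=transformer \
  --hparams_set=transformer_tiny \
  --output_dir=$TRAIN_DIR \
  --train_steps=10000 \
  --eval_steps=100 \
  --worker_gpu=2 \
  --worker_gpu_memory_fraction=0.95
```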