tensorflow / tensor2tensor

Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
Apache License 2.0

Not using all GPU memory during t2t training #1696

Open 7DaeBum opened 5 years ago

7DaeBum commented 5 years ago

Description

Why doesn't t2t training use all of the GPU memory? nvidia-smi reports low GPU usage.

(screenshot of nvidia-smi output showing low GPU usage)
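For reference, one way to separate low memory allocation from low compute utilization is to poll both while training runs (plain nvidia-smi query, nothing t2t-specific):

```
# Print per-GPU utilization and memory once per second
nvidia-smi --query-gpu=index,utilization.gpu,memory.used,memory.total \
           --format=csv -l 1
```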

Training command

```
t2t-trainer \
  --t2t_usr_dir=$USR_DIR \
  --data_dir=$DATA_DIR \
  --problem=neural_lemmatizer \
  --model=transformer \
  --hparams_set=transformer_tiny \
  --output_dir=$TRAIN_DIR \
  --train_steps=10000 \
  --eval_steps=100 \
  --worker_gpu=2
```
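For context, transformer_tiny is a deliberately small hparams set, so low memory use with it is not surprising. A sketch of the same command pushed toward heavier use of the cards follows; transformer_base and batch_size are standard t2t hparams, but the exact values here are illustrative, not tuned:

```
# Sketch only: larger model and per-GPU batch size so more of the card is used.
t2t-trainer \
  --t2t_usr_dir=$USR_DIR \
  --data_dir=$DATA_DIR \
  --problem=neural_lemmatizer \
  --model=transformer \
  --hparams_set=transformer_base \
  --hparams='batch_size=8192' \
  --output_dir=$TRAIN_DIR \
  --train_steps=10000 \
  --eval_steps=100 \
  --worker_gpu=2
```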

Environment information

OS: Ubuntu 16.04

```
$ pip freeze | grep tensor
mesh-tensorflow==0.0.5
tensor2tensor==1.14.0
tensorboard==1.14.0
tensorflow-datasets==1.2.0
tensorflow-estimator==1.14.0
tensorflow-gan==1.0.0.dev0
tensorflow-gpu==1.14.0
tensorflow-metadata==0.14.0
tensorflow-probability==0.7.0
```

```
$ python -V
Python 2.7.12
```
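A quick sanity check that this TF build actually sees both GPUs (device_lib is part of TF 1.x):

```
# Should list /device:GPU:0 and /device:GPU:1 alongside the CPU device
python -c "from tensorflow.python.client import device_lib; print([d.name for d in device_lib.list_local_devices()])"
```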

xinjianlv commented 4 years ago

I have the same problem, and I am using 4 Titan Xp GPUs.

noahchalifour commented 4 years ago

I also have the same issue with a single 1080 Ti.