lena-voita / good-translation-wrong-in-context

This is a repository with the data and code for the ACL 2019 paper "When a Good Translation is Wrong in Context: ..." and the EMNLP 2019 paper "Context-Aware Monolingual Repair for Neural Machine Translation"

How to utilise all available GPU memory? #2

Closed M4t1ss closed 5 years ago

M4t1ss commented 5 years ago

Hi! How do I get the model to utilise all of the available GPU memory on each GPU? I tried changing `--batch-len`, `--optimizer`, `--optimizer-opts`, and some other parameters, but I can't seem to get it to use anything other than 416 MiB per GPU.

Here I'm training 3 models in parallel: [image]

Thanks!

M4t1ss commented 5 years ago

It seems I had a problem with my CUDA and cuDNN versions... it works just great on a different machine :)