Just comment out the lines with "with tf.device('/gpu:x'):". You might also have to reduce the batch size according to your GPU memory.
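For reference, a minimal sketch of what that change looks like in TensorFlow 1.x. The layer and tensor names below are illustrative only, not the actual MGN code:

import tensorflow as tf  # TF 1.13, as used in this thread

# Multi-GPU placement as it appears in the repo (names are made up here):
# with tf.device('/gpu:1'):
#     feat = tf.layers.dense(images, 128)
#
# Single-GPU variant: drop (comment out) the device scope and let
# TensorFlow place the ops on the one available GPU.
images = tf.placeholder(tf.float32, [None, 64])
feat = tf.layers.dense(images, 128)

# allow_soft_placement lets TF fall back gracefully if some op is still
# pinned to a GPU index that does not exist on your machine.
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))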
Thank you! Your code has been tested with Python 2.7 and TensorFlow 1.13 for running MGN, but the Mesh package requires Python 3.5+. So if I want to run python test_network.py, which Python should I use (2.7 or 3.5+)?
The code has been tested with Python 2.7, though it should work with Python 3.5 as well (you would have to update/modify packages such as dirt and cPickle accordingly). Alternatively, you can use the older release of the MPI mesh package.
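As one example of such a modification (a sketch, not code from the repo): cPickle does not exist under Python 3, so a compatibility import keeps the scripts working under both interpreters:

# Python 2 / Python 3 compatible pickle import (illustrative, not from the repo)
try:
    import cPickle as pkl   # Python 2: C-accelerated pickle module
except ImportError:
    import pickle as pkl    # Python 3: cPickle was merged into pickle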
How can we change the batch size for running test_network.py? I see a train.batch_size in config_ver1.py, but does that also affect inference?
I set that batch_size variable to 1 and still get Out of Memory errors on a single RTX 2080 GPU with 8GB.
The script test_network.py currently uses a batch size of 2. It loads the data with dat = pkl.load(open('assets/test_data.pkl')) and uses whatever the batch dimension of that data is (2 in test_data.pkl) as the batch size. You can reduce it to 1 by slicing the entries in test_data.pkl.
Thanks! For anyone else, I just added a simple dict comprehension after loading in the data to slice each element.
# test_network.py
## Load test data
dat = pkl.load(open('assets/test_data.pkl', "rb"), encoding="latin1") # added this encoding stuff for Python3
dat = {k: v[:config.train.batch_size] for k, v in dat.items()} # added this to slice
This code runs on multiple GPUs. If I want to run it on a single GPU, how can I do that?