bharat-b7 / MultiGarmentNetwork

Repo for "Multi-Garment Net: Learning to Dress 3D People from Images, ICCV'19"

Running on a single GPU #3

Closed Eby-123 closed 4 years ago

Eby-123 commented 4 years ago

This code runs on multiple GPUs. How can I run it on a single GPU?

bharat-b7 commented 4 years ago

Just comment out the lines with "with tf.device('/gpu:x'):". You might also have to reduce the batch size according to your GPU memory.
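For anyone who wants a concrete picture of that change, here is a minimal TF 1.x sketch. The names `net`, `x`, and `y` are hypothetical stand-ins, not the actual variables in test_network.py; the point is just how the device scope is dropped or pinned to the one available GPU:

```python
import tensorflow as tf

# Minimal sketch (TF 1.x graph mode); `net` and `x` stand in for the real
# MGN model and inputs, which look different in test_network.py.
def net(x):
    return tf.layers.dense(x, 10)

x = tf.placeholder(tf.float32, [None, 4])

# Multi-GPU pattern used in the repo (commented out here):
# with tf.device('/gpu:1'):
#     y = net(x)

# Single-GPU alternative: drop the scope entirely, or pin it to '/gpu:0'.
with tf.device('/gpu:0'):
    y = net(x)

# allow_soft_placement lets TF fall back gracefully if a requested device
# does not exist on the machine.
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[0.0, 1.0, 2.0, 3.0]]}))
```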

Eby-123 commented 4 years ago

Thank you! Your code has been tested with Python 2.7 and TensorFlow 1.13 for running MGN, but the Mesh package requires Python 3.5+. So if I want to run python test_network.py, which Python should I use (2.7 or 3.5+)?

bharat-b7 commented 4 years ago

The code has been tested with Python 2.7, though it should work with Python 3.5 as well (you would have to update/modify packages such as dirt and cPickle accordingly). Alternatively, you can use an older release of the MPI mesh package.
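For reference, one common way to make the cPickle usage portable across both interpreters is an import shim plus the latin1 encoding when reading Python 2 pickles under Python 3. This is a generic sketch, not code from the repo:

```python
# Compatibility sketch: cPickle exists only in Python 2; Python 3 uses pickle,
# and needs encoding='latin1' to read pickles written by Python 2.
try:
    import cPickle as pkl          # Python 2.7
except ImportError:
    import pickle as pkl           # Python 3.5+

def load_pickle(path):
    with open(path, 'rb') as f:
        try:
            return pkl.load(f, encoding='latin1')   # Python 3 keyword argument
        except TypeError:
            return pkl.load(f)                      # Python 2 load() has no 'encoding'
```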

andrewjong commented 4 years ago

How can we change the batch size for running test_network.py? I see a train.batch_size in config_ver1.py, but does that also affect inference?

I set that batch_size variable to 1 and still get Out of Memory errors on a single RTX 2080 GPU with 8GB.

bharat-b7 commented 4 years ago

The script test_network.py currently uses a batch size of 2. It loads the data with dat = pkl.load(open('assets/test_data.pkl')) and uses whatever the batch dimension of that data is (2 in test_data.pkl) as the batch size. You can reduce it to 1 by slicing the entries in test_data.pkl.
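If you would rather slice the pickle on disk so that test_network.py itself stays untouched, a one-off helper along these lines should work; the output filename is my own choice, and you would point the script at it (or back up and overwrite the original). An in-memory alternative is shown in the next comment.

```python
# One-off helper (not part of the repo): write a batch-size-1 copy of the
# test data so the network only has to process a single sample.
import pickle as pkl

with open('assets/test_data.pkl', 'rb') as f:
    dat = pkl.load(f, encoding='latin1')   # drop encoding= if running Python 2

dat_small = {k: v[:1] for k, v in dat.items()}   # keep only the first entry of each array

with open('assets/test_data_small.pkl', 'wb') as f:
    pkl.dump(dat_small, f)
```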

andrewjong commented 4 years ago

Thanks! For anyone else, I just added a simple dict comprehension after loading the data to slice each element:

```python
# test_network.py
## Load test data
dat = pkl.load(open('assets/test_data.pkl', "rb"), encoding="latin1")  # added this encoding for Python 3
dat = {k: v[:config.train.batch_size] for k, v in dat.items()}  # added this to slice each entry to the batch size
```