tdeboissiere / DeepLearningImplementations

Implementation of recent Deep Learning papers

WassersteinGAN training CelebA problem: MemoryError #32

Closed firestonelib closed 7 years ago

firestonelib commented 7 years ago

When I run `python main.py --backend tensorflow --generator deconv --dset celebA --lr_G 1E-3 --lr_D 1E-3 --clamp_lower -0.5 --clamp_upper 0.5 --batch_size 512 --noise_dim 128`, I get the following traceback:

Traceback (most recent call last):
  File "main.py", line 85, in <module>
    launch_training(**d_params)
  File "main.py", line 11, in launch_training
    train_WGAN.train(**kwargs)
  File "/mnt/data1/daniel/codes/GAN/DeepLearningImplementations/WassersteinGAN/src/model/train_WGAN.py", line 47, in train
    X_real_train = data_utils.load_image_dataset(dset, img_dim, image_dim_ordering)
  File "../utils/data_utils.py", line 94, in load_image_dataset
    X_real_train = load_celebA(img_dim, image_dim_ordering)
  File "../utils/data_utils.py", line 83, in load_celebA
    X_real_train = normalization(X_real_train, image_dim_ordering)
  File "../utils/data_utils.py", line 16, in normalization
    X = (X - 0.5) / 0.5
MemoryError

Can anybody help?

firestonelib commented 7 years ago

@tdeboissiere

tdeboissiere commented 7 years ago

Looks like you're short on RAM. In `load_image_dataset`, you can choose to load only a fraction of the dataset into memory.
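
As a rough illustration of that idea, here is a minimal sketch that loads only the first few thousand images, assuming the CelebA data sits in an HDF5 file produced by the repo's preprocessing step (the file path, dataset key, and function name below are assumptions, not the repo's exact API):

```python
import h5py
import numpy as np

def load_celebA_subset(hdf5_path="../../data/processed/CelebA_64_data.h5",
                       dset_key="data", n_samples=20000):
    """Load only the first n_samples images to keep RAM usage bounded."""
    with h5py.File(hdf5_path, "r") as hf:
        # Slicing the HDF5 dataset reads only the requested rows from disk
        X = hf[dset_key][:n_samples].astype(np.float32)
    # Scale to [-1, 1] like the repo's normalization (assumes raw uint8 pixels)
    X = X / 255.0
    X = (X - 0.5) / 0.5
    return X
```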

In the Colorful folder, there is code to load data on the fly. This removes the RAM limitation at the cost of slower training.
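
For reference, a minimal sketch of what on-the-fly loading can look like (again the HDF5 path and dataset key are assumptions, and this is not the Colorful code itself, just the general pattern):

```python
import h5py
import numpy as np

def celebA_batch_generator(hdf5_path="../../data/processed/CelebA_64_data.h5",
                           dset_key="data", batch_size=512):
    """Yield normalized batches read from disk on demand instead of
    holding the whole dataset in RAM."""
    with h5py.File(hdf5_path, "r") as hf:
        n = hf[dset_key].shape[0]
        while True:
            idx = np.random.randint(0, n - batch_size)
            # Only this contiguous slice is read from disk into memory
            X_batch = hf[dset_key][idx:idx + batch_size].astype(np.float32)
            X_batch = (X_batch / 255.0 - 0.5) / 0.5  # scale to [-1, 1]
            yield X_batch
```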

firestonelib commented 7 years ago

@tdeboissiere Which code, and how do I use it?

tdeboissiere commented 7 years ago

In the README here: https://github.com/tdeboissiere/DeepLearningImplementations/tree/master/Colorful/src/model, you'll find the "training_mode" command line argument. From there you can trace back the code that loads data on the fly.
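
To adapt the WassersteinGAN training loop to the same pattern, one could swap the in-memory array for a generator like the one sketched above. This is a hypothetical adaptation, and the loop variable names are illustrative rather than the repo's exact code:

```python
# Hypothetical: replace the in-memory X_real_train with an on-the-fly generator.
data_gen = celebA_batch_generator(batch_size=512)  # generator sketched above

for epoch in range(nb_epoch):                # illustrative loop bounds
    for batch_counter in range(n_batch_per_epoch):
        X_real_batch = next(data_gen)        # reads one batch from disk
        # ... run the discriminator / generator updates on X_real_batch
```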