jerryli27 / TwinGAN

Twin-GAN -- Unpaired Cross-Domain Image Translation with Weight-Sharing GANs
Apache License 2.0
719 stars · 99 forks

Name of dataset unknown error on CelebA and Getchu Datasets #26

Closed abdullahzameek closed 5 years ago

abdullahzameek commented 5 years ago

Hi! This is a really cool project, and thank you for sharing it publicly!

I'm trying to retrain your model to get some hands-on experience with GANs, but I'm having a bit of an issue setting up the training task. I downloaded the CelebA dataset and ran the celeba_convert.py script on it to pre-process it into tfrecord files, and I also downloaded the Getchu dataset you prepared.

I set up a conda env with all the requirements, created a bash file with the training script you provided, and changed the file paths accordingly. When I run it, I get the following error, which says "Name of dataset unknown". Could it be because of the way I've structured my training script?

Would really appreciate any insight!

python pggan_runner.py
--program_name=twingan
--dataset_name="image_only"
--dataset_dir="data/celeba/"
--unpaired_target_dataset_name="anime_faces"
--unpaired_target_dataset_dir="data/anime_faces/"
--train_dir="checkpoints/twingan_faces/"
--dataset_split_name=train
--preprocessing_name="danbooru"
--resize_mode=RESHAPE
--do_random_cropping=True
--learning_rate=0.0001
--learning_rate_decay_type=fixed
--is_training=True
--generator_network="pggan"
--use_unet=True
--num_images_per_resolution=300000
--loss_architecture=dragan
--gradient_penalty_lambda=0.25
--pggan_max_num_channels=256
--generator_norm_type=batch_renorm
--hw_to_batch_size="{4: 8, 8: 8, 16: 8, 32: 8, 64: 8, 128: 4, 256: 3, 512: 2}"
--do_pixel_norm=True
--l_content_weight=0.1
--l_cycle_weight=1.0

WARNING:tensorflow:Checkpoint for resolution 4 does not exist yet! Falling back to the previous checkpoint.
Traceback (most recent call last):
  File "pggan_runner.py", line 164, in <module>
    tf.app.run()
  File "/home/imachines/anaconda3/envs/twingan/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 126, in run
    _sys.exit(main(argv))
  File "pggan_runner.py", line 154, in main
    model.main()
  File "/home/imachines/Desktop/TwinGAN/TwinGAN/image_generation.py", line 1055, in main
    super(GanModel, self).main()
  File "/home/imachines/Desktop/TwinGAN/TwinGAN/model/model_inheritor.py", line 991, in main
    dataset = self._select_dataset()
  File "/home/imachines/Desktop/TwinGAN/TwinGAN/image_generation.py", line 203, in _select_dataset
    dataset = super(GanModel, self)._select_dataset()
  File "/home/imachines/Desktop/TwinGAN/TwinGAN/model/model_inheritor.py", line 326, in _select_dataset
    FLAGS.dataset_name, FLAGS.dataset_split_name, FLAGS.dataset_dir)
  File "/home/imachines/Desktop/TwinGAN/TwinGAN/datasets/dataset_factory.py", line 79, in get_dataset
    raise ValueError('Name of dataset unknown %s' % name)
ValueError: Name of dataset unknown
trainModel.sh: line 4: --program_name=twingan: command not found
trainModel.sh: line 5: --dataset_name=image_only: command not found
trainModel.sh: line 6: --dataset_dir=datasets/celeba/: No such file or directory
trainModel.sh: line 7: --unpaired_target_dataset_name=anime_faces: command not found
trainModel.sh: line 8: --unpaired_target_dataset_dir=datasets/anime_faces/: No such file or directory
trainModel.sh: line 9: --train_dir=checkpoints/twingan_faces/: No such file or directory
trainModel.sh: line 10: --dataset_split_name=train: command not found
trainModel.sh: line 11: --preprocessing_name=danbooru: command not found
trainModel.sh: line 12: --resize_mode=RESHAPE: command not found
trainModel.sh: line 13: --do_random_cropping=True: command not found
trainModel.sh: line 14: --learning_rate=0.0001: command not found
trainModel.sh: line 15: --learning_rate_decay_type=fixed: command not found
trainModel.sh: line 16: --is_training=True: command not found
trainModel.sh: line 17: --generator_network=pggan: command not found
trainModel.sh: line 18: --use_unet=True: command not found
trainModel.sh: line 19: --num_images_per_resolution=300000: command not found
trainModel.sh: line 20: --loss_architecture=dragan: command not found
trainModel.sh: line 21: --gradient_penalty_lambda=0.25: command not found
trainModel.sh: line 22: --pggan_max_num_channels=256: command not found
trainModel.sh: line 23: --generator_norm_type=batch_renorm: command not found
trainModel.sh: line 24: --hw_to_batch_size={4: 8, 8: 8, 16: 8, 32: 8, 64: 8, 128: 4, 256: 3, 512: 2}: command not found
trainModel.sh: line 25: --do_pixel_norm=True: command not found
trainModel.sh: line 26: --l_content_weight=0.1: command not found
trainModel.sh: line 27: --l_cycle_weight=1.0: command not found

abdullahzameek commented 5 years ago

To add to this, I installed the requirements as mentioned, but I installed Tensorflow-gpu v1.8 instead. My GPU driver specifications are as follows:

NVIDIA-SMI: 390.116
CUDA: v9.0 (v9.0.176)
CuDNN: v7.0.5
OS: Ubuntu 18.04

jerryli27 commented 5 years ago

The reason is that you are not escaping the newline at the end of each line. Bash treats each line as a separate command by itself. So your bash script should really look like this:

python pggan_runner.py \
  --program_name=twingan \
  --dataset_name="image_only" \
  --dataset_dir="data/celeba/" \
  ...blablabla
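Putting it together with all the flags from your original command, the full script would look something like the sketch below (the shebang line is an assumption, and the data/ and checkpoints/ paths are the ones from your post, so adjust them to your setup):

#!/bin/bash
# Each line ends with a backslash, which escapes the newline,
# so bash parses the whole block as a single command.
python pggan_runner.py \
  --program_name=twingan \
  --dataset_name="image_only" \
  --dataset_dir="data/celeba/" \
  --unpaired_target_dataset_name="anime_faces" \
  --unpaired_target_dataset_dir="data/anime_faces/" \
  --train_dir="checkpoints/twingan_faces/" \
  --dataset_split_name=train \
  --preprocessing_name="danbooru" \
  --resize_mode=RESHAPE \
  --do_random_cropping=True \
  --learning_rate=0.0001 \
  --learning_rate_decay_type=fixed \
  --is_training=True \
  --generator_network="pggan" \
  --use_unet=True \
  --num_images_per_resolution=300000 \
  --loss_architecture=dragan \
  --gradient_penalty_lambda=0.25 \
  --pggan_max_num_channels=256 \
  --generator_norm_type=batch_renorm \
  --hw_to_batch_size="{4: 8, 8: 8, 16: 8, 32: 8, 64: 8, 128: 4, 256: 3, 512: 2}" \
  --do_pixel_norm=True \
  --l_content_weight=0.1 \
  --l_cycle_weight=1.0

Alternatively, you can keep the whole command on a single line; the backslashes only matter when you split it across lines.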

By the way, if you are trying to learn GANs, my repo is probably not the best first stop. You may want to take a look at https://github.com/NVlabs/MUNIT, which is better maintained.

abdullahzameek commented 5 years ago

Oh I see, thank you for clarifying! I'll check the other repository out and come back to yours once I'm more familiar with GAN models!