yunjey / stargan

StarGAN - Official PyTorch Implementation (CVPR 2018)
MIT License

same dataset, same parameters but terrible results #67

Open SummerHuiZhang opened 6 years ago

SummerHuiZhang commented 6 years ago

I tried to reproduce the results from the StarGAN paper on the RaFD dataset. I didn't change any parameters, but the results are terrible. The images in each row all look the same, even though each column should show a different attribute such as 'happy', 'sad', 'angry', or 'contemptuous'. The only visible difference is a blurry mouth. Has anyone managed to reproduce this work?

Rabia-Raja commented 6 years ago

I am running this code, but I am facing some errors. Can you help me? The traceback is:

```
Traceback (most recent call last):
  File "main.py", line 110, in <module>
    main(config)
  File "main.py", line 32, in main
    'CelebA', config.mode, config.num_workers)
  File "/home/chenky/rabia/star-gan/StarGAN-master/data_loader.py", line 84, in get_loader
    dataset = CelebA(image_dir, attr_path, selected_attrs, transform, mode)
  File "/home/chenky/rabia/star-gan/StarGAN-master/data_loader.py", line 24, in __init__
    self.preprocess()
  File "/home/chenky/rabia/star-gan/StarGAN-master/data_loader.py", line 49, in preprocess
    idx = self.attr2idx[attr_name]
KeyError: 'Black_Hair'
```
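For context, the failing lookup happens while the CelebA loader builds its attribute-name-to-column-index map from the second line of the attribute file. A rough sketch of that parsing step (paraphrased, not the exact repository code; the path and attribute list below are assumed defaults, adjust to your setup):

```python
# Rough sketch of the attribute-file parsing that raises this KeyError
# (paraphrased, not the exact repository code; path and attribute list are assumptions).
attr_path = "data/celeba/list_attr_celeba.txt"
selected_attrs = ["Black_Hair", "Blond_Hair", "Brown_Hair", "Male", "Young"]

with open(attr_path, "r") as f:
    lines = [line.rstrip() for line in f]

# Expected layout: line 1 = number of images, line 2 = attribute names,
# lines 3+ = "<image_name> -1 1 -1 ..." with one value per attribute.
attr2idx = {name: i for i, name in enumerate(lines[1].split())}

for attr_name in selected_attrs:
    idx = attr2idx[attr_name]  # KeyError: 'Black_Hair' if line 2 is not the attribute header
```

If the second line of your file is not the whitespace-separated list of attribute names, 'Black_Hair' never makes it into attr2idx and the lookup fails exactly as in the traceback.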

SummerHuiZhang commented 5 years ago

@Rabia-Raja I guess it's because of the format of your list_attr_celeba.txt. The first row should be the number of images, the second row the attribute names (e.g. Black_Hair, Eyeglasses, etc.), and each of the following rows should look like 'image_name 1 -1 1 -1 -1 -1 1'.
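To make the expected layout concrete, here is a small, hypothetical sanity-check script (the file path is an example; adjust it to wherever your attribute file lives) that verifies a file follows the format described above:

```python
# Hypothetical sanity check for the attribute file format described above
# (the path is an example, not taken from the repository).
attr_path = "data/celeba/list_attr_celeba.txt"

with open(attr_path, "r") as f:
    lines = [line.rstrip() for line in f if line.strip()]

num_images = int(lines[0])        # row 1: number of images
attr_names = lines[1].split()     # row 2: attribute names
print(f"{num_images} images, {len(attr_names)} attributes")

assert num_images == len(lines) - 2, "image count does not match number of data rows"
for row in lines[2:]:
    tokens = row.split()
    # each data row: image name followed by one +1/-1 value per attribute
    assert len(tokens) == 1 + len(attr_names), f"bad row: {row[:60]}"
print("Attribute file layout looks consistent.")
```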

krishnadubba commented 5 years ago

I get somewhat reasonable results, but nothing like the ones in the paper. For some celebrity pictures nothing changes across the expression columns; I only get the expected results occasionally. Note that I changed image_size to 128 for the "Both" dataset option.

Some samples (the last 8 cols in each row are for different expressions): https://drive.google.com/file/d/1jbdKGEfwT_KYn00alEMPEHAY-vEyGphl/view?usp=sharing
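For reference, a sketch of how the "Both" run mentioned above could be launched, assuming the argparse options defined in the repository's main.py (--mode, --dataset, --image_size, --c_dim, --c2_dim); verify the flag names against your copy:

```python
# Sketch of a "Both" training invocation with image_size=128, assuming the
# argparse flags defined in the repository's main.py.
import subprocess

subprocess.run(
    [
        "python", "main.py",
        "--mode", "train",
        "--dataset", "Both",     # joint CelebA + RaFD training
        "--image_size", "128",   # changed from the suggested value
        "--c_dim", "5",          # number of selected CelebA attributes
        "--c2_dim", "8",         # number of RaFD expression labels
    ],
    check=True,
)
```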

krishnadubba commented 5 years ago

The authors seem to suggest 256 as the image_size for the "Both" dataset, but I didn't see any improvement. In fact, the 256 image_size models produce strange ghost-like artifacts on the face. You can check this by comparing their pre-trained 128 and 256 image_size CelebA-only models. The 128 image_size models seem to be slightly better.