-
In resnet_biggan_deep.py, line 260, z has already been set to tf.concat([z, y], 1), while in line 285, when parameters are fed to the resnet block, z is treated as the original latent code and y is repeatedly…
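A minimal numpy sketch (not the repository's TensorFlow code; the shapes and variable names are illustrative assumptions) of the duplication being described: if z is overwritten with the concatenation and then [z, y] is concatenated again, y ends up appended twice.

```python
import numpy as np

batch, z_dim, y_dim = 4, 120, 10
z = np.random.randn(batch, z_dim)
y = np.random.randn(batch, y_dim)

# Pattern from the first call site: z is overwritten with the concat.
z = np.concatenate([z, y], axis=1)          # shape (4, 130)

# Pattern from the second call site: the block input concatenates y
# again, so y is duplicated in the final tensor.
block_in = np.concatenate([z, y], axis=1)   # shape (4, 140)
print(block_in.shape)
```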
-
The function `parse_tfrecord_progan` returns the parsed images as tf.float32, but the labels as int64; this causes ConcatV2 to complain about mismatched types:
```
2019-10-03 0…
```
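A short numpy sketch of the usual fix (the names and shapes here are assumptions, not the repository's actual code): cast the labels to float32 before concatenating. Note that numpy silently upcasts mixed dtypes, whereas TF's ConcatV2 rejects them, so in TF the cast would be `labels = tf.cast(labels, tf.float32)`.

```python
import numpy as np

# Mimic the parser's outputs: float32 images, int64 one-hot labels.
images = np.random.rand(4, 64).astype(np.float32)
labels = np.eye(10, dtype=np.int64)[[1, 2, 3, 4]]

# Cast the labels to the images' dtype before concatenating;
# in TF this is tf.cast(labels, tf.float32).
labels = labels.astype(np.float32)
features = np.concatenate([images, labels], axis=1)
print(features.dtype, features.shape)
```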
-
Hi,
After training on my dataset, the filled results seem to be driven more by contextual attention than by the shapes. The model is not learning the features and shapes.
![landsat1429](https:/…
-
Hi, I've read your paper "High-Fidelity Image Generation With Fewer Labels". It's very fascinating work, but I have one question about the pretrained feature extractor F.
![image](https://user-image…
-
The paper alludes to the architecture of the neural network being very similar to [BigGAN](https://github.com/ajbrock/BigGAN-PyTorch). Would it be worth taking the placement of the self atte…
-
I set num_workers to 0 due to lack of RAM and ran BigGAN_bs256x8.sh, but ended up with the error below:
![image](https://user-images.githubusercontent.com/33709183/64478464-77d4f980-d1db-11e9-8dc2-2485cb5e…
-
Thanks for publishing the code; I would appreciate your help understanding this.
I trained a WGAN on my own data. Now I am planning to use the generator network's features (weights) to calculate …
-
Same question as the title: is there any way we can add our own images for ganbreeder to use?
-
I would like to reproduce your results with BigGAN on the Anime dataset. Did you compute the FID and/or the IS on the generated dataset shown in your README?
Thanks
-
Hi csmliu,
At present, the results of training on high-resolution images are not good. Is there any other way to improve the conversion results for high-resolution images, such as 512 or 1024? I…