-
Hello,
I am trying to use this work and I am following the provided steps.
The instructions state "Use ./BIGGAN/src/FairGAN++ for baseline and ./BIGGAN/src/Tfl_2D_LP_FT for our proposed work" bu…
-
Self-supervision/semi-supervised learning is ultra-hot now, with new SOTAs being set in DRL using shockingly simple methods, and self-supervised learning being competitive with classical supervised CNNs at Imag…
gwern updated
4 years ago
-
BigGAN/StyleGAN work well for unconditional inputs, and for categorical-classified inputs, but for tags or text embeddings, we do not have working GANs. Arfa has established with StyleGAN experiments …
gwern updated
4 years ago
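For concreteness, here is a minimal sketch of one way a tag/text embedding could condition a generator, by analogy with BigGAN's class-conditional batch normalization: the per-sample scale and shift are linear functions of the embedding instead of a one-hot class. All names and the dimension choices below are illustrative assumptions, not from any released codebase.

```python
import numpy as np

def conditional_batchnorm(x, emb, W_gamma, W_beta, eps=1e-5):
    """x: (N, C) activations; emb: (N, E) conditioning embedding.
    Scale/shift are predicted per sample from the embedding."""
    mu = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    gamma = 1.0 + emb @ W_gamma   # (N, C), centered at 1 like BigGAN's BN gains
    beta = emb @ W_beta           # (N, C)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))            # a batch of feature activations
emb = rng.normal(size=(8, 4))           # e.g. a pooled tag/text embedding
W_gamma = 0.1 * rng.normal(size=(4, 16))
W_beta = 0.1 * rng.normal(size=(4, 16))
out = conditional_batchnorm(x, emb, W_gamma, W_beta)
```

In a real model `W_gamma`/`W_beta` would be learned layers applied at every BN in the generator; the open question above is whether this works as well for free-form embeddings as it does for one-hot classes.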
-
Hello,
I was going through your paper and found that you reported the Inception Score for ImageNet validation to be **63.702±7.869** (for 299x299 image size), but the BigGAN paper reports it to b…
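Discrepancies like this often come down to how the score is computed (image size, number of splits, classifier version). As a reference point, the Inception Score itself is just exp(E_x[KL(p(y|x) || p(y))]) over classifier softmax outputs; a minimal sketch, assuming the (N, K) class probabilities are already available:

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (N, K) softmax outputs of the Inception-v3 classifier
    on generated images.  IS = exp( E_x [ KL( p(y|x) || p(y) ) ] )."""
    p_y = probs.mean(axis=0, keepdims=True)  # marginal class distribution
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))

# Sanity checks: uniform predictions give the worst score, 1;
# confident and perfectly diverse predictions give the best score, K.
uniform = np.full((100, 10), 0.1)
onehot = np.eye(10)
```

Real evaluations additionally average the score over (usually 10) disjoint splits and report the standard deviation across splits, which is where the ±7.869 would come from.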
-
https://github.com/huggingface/pytorch-pretrained-BigGAN/blob/1e18aed2dff75db51428f13b940c38b923eb4a3d/pytorch_pretrained_biggan/model.py#L245-L246
I'm trying to understand the model by reading cod…
-
Hi, nice work! I am studying your subsequent work _Big GANs Are Watching You: Towards Unsupervised Object Segmentation with Off-the-Shelf Generative Models_ which is based on the technique proposed in…
-
Feature: experiment with _z_ vectors that mix normals, censored normals, binomials, and categoricals.
In almost all GANs, the original _z_ random noise is just a bunch of Gaussian vari…
gwern updated
3 years ago
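The proposed mixed latent can be sketched by concatenating the four distribution families into one vector; the dimension split below is an illustrative assumption, and "censored normal" is taken to mean a Gaussian clipped at zero:

```python
import numpy as np

def sample_mixed_z(n, rng, n_norm=64, n_cens=32, n_bin=16, n_cat=16):
    """Sample n latent vectors mixing several distribution families."""
    normals = rng.normal(size=(n, n_norm))
    censored = np.maximum(rng.normal(size=(n, n_cens)), 0.0)  # clipped at 0
    binom = rng.binomial(1, 0.5, size=(n, n_bin)).astype(float)
    cat = np.eye(n_cat)[rng.integers(0, n_cat, size=n)]       # one-hot draw
    return np.concatenate([normals, censored, binom, cat], axis=1)

rng = np.random.default_rng(0)
z = sample_mixed_z(4, rng)   # (4, 128): 64 + 32 + 16 + 16 dims
```

The hope, as described above, is that discrete and censored components give the generator handles for discrete or thresholded factors of variation that a purely Gaussian _z_ has to contort itself to represent.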
-
```python
import paddle.fluid as fluid
import paddle
from paddle.fluid import layers
import paddle.fluid.dygraph as dg
import matplotlib.pyplot as plt
import numpy as np
class SoftMax(dg.Laye…
yxhpy updated
6 months ago
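The truncated snippet above appears to be defining a custom SoftMax layer in Paddle's dygraph mode. For reference, the computation such a layer would implement is the standard numerically stable softmax; a minimal NumPy sketch (framework-independent, not Paddle's actual implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    shifted = x - x.max(axis=axis, keepdims=True)  # max trick avoids overflow
    e = np.exp(shifted)
    return e / e.sum(axis=axis, keepdims=True)

p = softmax(np.array([[1.0, 2.0, 3.0],
                      [1000.0, 1000.0, 1000.0]]))
```

Subtracting the row max before exponentiating is what keeps the second row (large logits) from overflowing to `inf`; without it, `np.exp(1000.0)` would.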
-
I'm training a BigGAN with differentiable augmentation (DiffAugment) and LeCam regularization on a custom dataset. My setup has 4 NVIDIA RTX 3070s and runs Ubuntu 20.04. I observe that running the trainin…
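For readers unfamiliar with the second technique mentioned: LeCam regularization (Tseng et al. 2021, "Regularizing Generative Adversarial Networks under Limited Data") anchors the discriminator's outputs to exponential moving averages of its past real/fake outputs. A hedged NumPy sketch of my understanding of it, with illustrative variable names:

```python
import numpy as np

class LeCamEMA:
    """Tracks exponential moving averages of D's mean real/fake outputs."""
    def __init__(self, decay=0.99):
        self.decay, self.real, self.fake = decay, 0.0, 0.0

    def update(self, d_real_mean, d_fake_mean):
        d = self.decay
        self.real = d * self.real + (1 - d) * d_real_mean
        self.fake = d * self.fake + (1 - d) * d_fake_mean

def lecam_reg(d_real, d_fake, ema):
    # R_LC = E[(D(x) - ema_fake)^2] + E[(D(G(z)) + ema_real)^2]
    return np.mean((d_real - ema.fake) ** 2) + np.mean((d_fake + ema.real) ** 2)

ema = LeCamEMA()
d_real = np.array([0.8, 0.9])    # discriminator logits on real images
d_fake = np.array([-0.7, -0.6])  # discriminator logits on generated images
ema.update(d_real.mean(), d_fake.mean())
reg = lecam_reg(d_real, d_fake, ema)  # added (weighted) to the D loss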
-
Hello,
I am using ImageNet 64x64 and I run the code with the following command:
python BigGAN-PyTorch/train.py --dataset I64_hdf5 --parallel --shuffle --num_workers 8 --batch_size 128 --num_G_…