-
The dataset used is confusing.
You did not indicate which real image corresponds to a particular DM image.
In this folder structure:
```
Testset directory
|--biggan_256
|--biggan_512
.
.
.
|--r…
```
-
In the [BigGAN paper](https://arxiv.org/pdf/1809.11096.pdf), one of the important features is the use of large minibatches, _n_=2048 for most of the results. Table 1 shows that FID/Inception improve c…
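Minibatches of _n_=2048 rarely fit in accelerator memory. One standard workaround (not discussed in the excerpt above, but commonly paired with large-batch training) is gradient accumulation over micro-batches: for a mean-reduced loss split into equal-size micro-batches, averaging the micro-batch gradients reproduces the large-batch gradient exactly. A minimal numpy sketch with a linear least-squares loss:

```python
import numpy as np

def grad_mse(w, X, y):
    """Gradient of the mean squared error 0.5*mean((Xw - y)^2) w.r.t. w."""
    return X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(2048, 4))
y = rng.normal(size=2048)
w = rng.normal(size=4)

# Full large-batch gradient over all 2048 examples.
g_full = grad_mse(w, X, y)

# Accumulate over 16 micro-batches of 128 and average: with equal
# micro-batch sizes and a mean-reduced loss, this equals the
# large-batch gradient exactly (up to floating-point error).
micro = np.array_split(np.arange(2048), 16)
g_acc = np.mean([grad_mse(w, X[idx], y[idx]) for idx in micro], axis=0)

print(np.allclose(g_full, g_acc))  # True
```

Note this equivalence covers only the gradient; batch-dependent layers such as batchnorm still see the micro-batch, which is one reason large-batch results are not always reproducible this way.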
gwern updated
5 years ago
-
The function `one_hot_from_names` throws an AssertionError when it is given a class name that is not in the original ImageNet classes and for which no matching synsets exist either.
This happ…
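One way to avoid the bare AssertionError is to guard the lookup and return a sentinel for unknown names. The sketch below is self-contained rather than using the real library: `CLASS_TO_INDEX` is a toy stand-in for the 1000-entry ImageNet class mapping, and `safe_one_hot_from_names` is a hypothetical wrapper, not the library's API.

```python
import numpy as np

# Toy stand-in for the ImageNet class-name mapping used by
# `one_hot_from_names`; the real mapping has 1000 entries.
CLASS_TO_INDEX = {"goldfish": 1, "tabby": 281, "coffee mug": 504}

def safe_one_hot_from_names(name, num_classes=1000):
    """Return a one-hot vector for a known class name, or None for an
    unknown name instead of raising a bare AssertionError."""
    idx = CLASS_TO_INDEX.get(name.strip().lower())
    if idx is None:
        return None  # caller can report the bad name or fall back
    vec = np.zeros(num_classes, dtype=np.float32)
    vec[idx] = 1.0
    return vec

print(safe_one_hot_from_names("goldfish").argmax())  # 1
print(safe_one_hot_from_names("not a class"))        # None
```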
-
Hi everyone,
I implemented three TPU-enabled PyTorch training repos for BigGAN-PyTorch, all of which are based on this repo.
[BigGAN-PyTorch-TPU-Single](https://github.com/shizhediao/BigGAN-PyTorc…
-
Hi, I find that some details in the implementation of BigGAN are worth paying attention to.
First, I notice that the default moments used for batchnorm during inference are the accumulated va…
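The distinction at issue is between the exponentially accumulated running moments that batchnorm keeps by default and "standing statistics" averaged over many forward passes, which BigGAN offers as an inference option. A numpy sketch of the two estimators (the momentum convention follows PyTorch's, where `new = (1 - m) * old + m * batch`):

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 batches of 64 samples with 8 features, true mean 2.0.
batches = [rng.normal(loc=2.0, scale=3.0, size=(64, 8)) for _ in range(200)]

# (1) Exponential running mean, the batchnorm default (momentum 0.1).
m = 0.1
running_mean = np.zeros(8)
for b in batches:
    running_mean = (1 - m) * running_mean + m * b.mean(axis=0)

# (2) "Standing statistics": a plain average of per-batch means
# accumulated over repeated forward passes at a fixed batch size.
standing_mean = np.mean([b.mean(axis=0) for b in batches], axis=0)

# Both estimate the true mean; the standing average weights all
# batches equally instead of favoring the most recent ones.
print(running_mean.mean(), standing_mean.mean())
```

The standing-stats estimate uses every batch equally and so has lower variance, which is one reason sample quality at inference can differ depending on which moments are used.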
-
The just-published [EvoNorm normalization layer paper](https://arxiv.org/abs/2004.02967#deepmind "'Evolving Normalization-Activation Layers', Liu et al 2020") (co-authored by mooch) evolves several re…
gwern updated
3 years ago
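For concreteness, the best-performing sample-statistics variant in that paper, EvoNorm-S0, combines a Swish-like gate with a group standard deviation. A numpy sketch of my reading of the formula, `y = x * sigmoid(v * x) / group_std(x) * gamma + beta`, for NHWC tensors (worth checking against the paper before relying on it):

```python
import numpy as np

def evonorm_s0(x, gamma, beta, v, groups=4, eps=1e-5):
    """EvoNorm-S0 sketch for an NHWC tensor x:
    y = x * sigmoid(v * x) / group_std(x) * gamma + beta,
    with the std taken over (H, W, C/groups) within each group."""
    n, h, w, c = x.shape
    g = x.reshape(n, h, w, groups, c // groups)
    var = g.var(axis=(1, 2, 4), keepdims=True)       # per-sample, per-group
    std = np.sqrt(var + eps)
    std = np.broadcast_to(std, g.shape).reshape(n, h, w, c)
    num = x / (1.0 + np.exp(-v * x))                 # x * sigmoid(v * x)
    return num / std * gamma + beta

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 8, 8, 16))
gamma, beta, v = np.ones(16), np.zeros(16), np.ones(16)
y = evonorm_s0(x, gamma, beta, v)
print(y.shape)  # (2, 8, 8, 16)
```

Unlike batchnorm, nothing here depends on batch-level statistics, so the layer behaves identically at train and inference time.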
-
Something to consider as a highly speculative research project once Ganbooru is done.
---
A distinct trend in recent DL has been self-attention moving beyond sequence or text data to image data …
gwern updated
4 years ago
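The core operation behind that trend is self-attention applied over spatial positions of a feature map rather than over tokens, in the spirit of SAGAN-style attention for images. A minimal single-head numpy sketch (the projection matrices and shapes here are illustrative, not any particular paper's configuration):

```python
import numpy as np

def spatial_self_attention(x, Wq, Wk, Wv):
    """Single-head self-attention over the spatial positions of a
    flattened feature map.
    x: (H*W, C) feature map; Wq/Wk/Wv: (C, D) projection matrices."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[1])       # (HW, HW) pairwise scores
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over positions
    return attn @ v                              # (HW, D) attended features

rng = np.random.default_rng(0)
x = rng.normal(size=(8 * 8, 32))                 # an 8x8 map, 32 channels
Wq, Wk, Wv = (rng.normal(size=(32, 16)) for _ in range(3))
out = spatial_self_attention(x, Wq, Wk, Wv)
print(out.shape)  # (64, 16)
```

The (HW, HW) score matrix is why naive attention is quadratic in image area, and why most image-attention work restricts it to low-resolution feature maps or local windows.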
-
Not at all understanding the internals, but I tried replacing:
https://tfhub.dev/deepmind/biggan-256/2
with:
https://tfhub.dev/deepmind/biggan-512/2
On a local Windows 10 install, I see this…
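One likely failure mode when swapping the module URL is that the two modules expect different latent sizes, so a `z` built for biggan-256 no longer matches biggan-512's input signature. A sketch of building inputs per resolution; the latent sizes below are my understanding of the DeepMind TF-Hub modules and should be verified against each module's actual signature:

```python
import numpy as np

# Latent dimensionality per BigGAN TF-Hub module resolution. These
# values are an assumption to be checked against the module signature,
# not taken from this thread.
LATENT_DIM = {128: 120, 256: 140, 512: 128}

def make_inputs(resolution, batch_size=4, num_classes=1000, seed=0):
    """Sample (z, one-hot y) shaped for the given resolution, so that
    swapping biggan-256 for biggan-512 doesn't feed a stale z shape.
    Real BigGAN usage typically truncates z (the truncation trick),
    which is omitted here for brevity."""
    if resolution not in LATENT_DIM:
        raise ValueError(f"no known module for resolution {resolution}")
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(batch_size, LATENT_DIM[resolution])).astype(np.float32)
    y = np.zeros((batch_size, num_classes), dtype=np.float32)
    y[np.arange(batch_size), rng.integers(num_classes, size=batch_size)] = 1.0
    return z, y

z, y = make_inputs(512)
print(z.shape)  # (4, 128)
```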
-
Is there a generator available for BigGAN at 256x256? Is it compatible with this repo?
-
Hi ajbrock,
Thanks for open-sourcing the BigGAN code, which benefits other explorations of BigGAN a great deal. However, considering the time cost, we hope to train BigGAN on TPU, so we use a version of tensorf…