Trained a 256x256 config-e model on the FFHQ dataset (see #2)
Training was done on Colab, split across a few sessions using a single GPU.
pkl taken at an FID of 11.2. In theory it should be much better than this, but I found the default learning rate to be slightly unstable, and dropping it any lower made progress very slow.