FoundationVision / LlamaGen

Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation
https://arxiv.org/abs/2406.06525
MIT License

FID results of GPT-L and GPT-1B on 256×256 images #46

Open LutingWang opened 4 months ago

LutingWang commented 4 months ago

Hi, thanks for the excellent work. I'm trying to reproduce the results on 256×256 images. The VQGAN model was reproduced successfully, achieving $2.10$ rFID. However, the AR part shows a significant performance gap. More specifically, I use 8 A100-80G GPUs to run the following scripts:

```shell
bash scripts/autoregressive/train_c2i.sh --cloud-save-path xxx --code-path xxx --gpt-model GPT-L --epochs 50
bash scripts/autoregressive/train_c2i.sh --cloud-save-path xxx --code-path xxx --gpt-model GPT-1B --epochs 50
```

The training results are as follows:

| Model  | Final Loss | FID  | Expected FID |
|--------|------------|------|--------------|
| GPT-L  | 7.86       | 4.62 | 4.22         |
| GPT-1B | 7.33       | 4.13 | 3.09         |
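As a rough sanity check on whether a final loss around 7.3 is plausible, the cross-entropy (in nats) can be converted to token-level perplexity and compared against a uniform predictor over the codebook. This is only an illustrative sketch; the 16384-entry codebook size is an assumption taken from the LlamaGen paper and should be checked against the actual tokenizer config:

```python
import math

# Assumed codebook size of the LlamaGen VQ tokenizer (check the repo's
# tokenizer config; 16384 is the value reported in the paper).
CODEBOOK_SIZE = 16384

def perplexity(ce_loss_nats: float) -> float:
    """Token-level perplexity implied by a cross-entropy loss in nats."""
    return math.exp(ce_loss_nats)

# Loss of a predictor that is uniform over the codebook (upper bound).
uniform_loss = math.log(CODEBOOK_SIZE)  # ~9.70 nats

for name, loss in [("GPT-L", 7.86), ("GPT-1B", 7.33)]:
    print(f"{name}: loss={loss:.2f} nats, "
          f"perplexity~{perplexity(loss):.0f}, "
          f"uniform baseline={uniform_loss:.2f} nats")
```

A loss well below the ~9.70-nat uniform baseline at least confirms the model is learning meaningful structure, though it says nothing by itself about whether the reported FID gap comes from the loss level or from sampling settings.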

Is the final loss reasonable? Do you have any idea what the reason might be?

Thanks!

PeizeSun commented 4 months ago

Hi~ I don’t understand what is reproducing the result on 224x224. The expected FID is in 256x256.

LutingWang commented 4 months ago

> Hi~ I don’t understand what is reproducing the result on 224x224. The expected FID is in 256x256.

Sorry for the mistake. I was trying to emphasize that the image resolution is not 384x384, but I mistakenly wrote 224.

msed-Ebrahimi commented 4 months ago

> Hi~ I don’t understand what is reproducing the result on 224x224. The expected FID is in 256x256.

Hi. Thank you for this awesome repo. I have the same issue with the original code: the loss ends up around 7.3 after 300 epochs. [attached screenshot: IMG_0379]