taoxugit / AttnGAN

MIT License
1.33k stars 415 forks

The inception score of the pretrained model on birds dataset is 4.17. #17

Open MinfengZhu opened 6 years ago

MinfengZhu commented 6 years ago

Hi, I want to reproduce the experiment results in the paper, but the inception score of the pretrained model on the birds dataset is 4.17. I compute the inception score using https://github.com/hanzhanggit/StackGAN-inception-model. I have tried both PyTorch 0.3 and 0.4; the inception score is still lower than the 4.36 reported in the AttnGAN paper.

gjyin commented 6 years ago

Same question. I got 4.15 for CUB_200_birds and 23.45 for COCO using the provided model, both lower than the values in the paper.

qiaott commented 5 years ago

Hi, I got 23.91431 ± 0.3758566 for COCO using the model the author provided. Can you share the inception-score code you used? Maybe you used different settings, such as the "splits" parameter? @taoxugit
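For reference, the "splits" setting mentioned above controls how the reported mean ± std is computed: the generated set is divided into equal chunks and the score is computed per chunk. A generic NumPy sketch of this (not the repo's TensorFlow script; the function name and the `1e-12` smoothing constant are illustrative):

```python
import numpy as np

def inception_score(preds, splits=10):
    # preds: (N, C) array of softmax outputs from the Inception network.
    # IS = exp(E_x[ KL(p(y|x) || p(y)) ]), computed per split; papers
    # report the mean and std across the splits, so changing `splits`
    # (or N) changes the reported numbers.
    scores = []
    n = preds.shape[0]
    for i in range(splits):
        part = preds[i * n // splits:(i + 1) * n // splits]
        py = part.mean(axis=0, keepdims=True)  # marginal label distribution p(y)
        kl = (part * (np.log(part + 1e-12) - np.log(py + 1e-12))).sum(axis=1)
        scores.append(np.exp(kl.mean()))
    return float(np.mean(scores)), float(np.std(scores))
```

With more splits each chunk is smaller, which tends to lower the per-chunk score and raise the std, so two evaluations with different split counts are not directly comparable.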

taoxugit commented 5 years ago

The inception score for models trained on COCO is computed using improved-gan/inception_score. The text descriptions are from the validation set, sampling one sentence at random from the 5 captions for each image.
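The sampling procedure described above (one of the 5 validation captions per image) can be sketched as follows; `captions_per_image` is a hypothetical mapping, not a structure from the repo:

```python
import random

def pick_eval_captions(captions_per_image, seed=None):
    # captions_per_image: dict mapping image id -> list of its 5
    # caption strings from the validation set.
    # Returns one randomly chosen caption per image. Because the choice
    # is random, two evaluation runs sample different sentences unless
    # the seed is fixed, which adds some run-to-run variance.
    rng = random.Random(seed)
    return {img: rng.choice(caps) for img, caps in captions_per_image.items()}
```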

taoxugit commented 5 years ago

We have released two models for CUB_200_birds; make sure you are computing the inception score for AttnGAN rather than AttnDCGAN. The difference between AttnGAN and AttnDCGAN is described in the final version of our CVPR paper.

ghost commented 5 years ago

Hi, I have generated 29,280 images for the birds dataset using your pre-trained model, but I got only a 2.57 ± 0.02 inception score. How can I reproduce the experiment results in the paper?

[screenshot: attngan_inception_score]

mkumar10 commented 5 years ago

I did the same as above with 29,330 images and got 4.20 ± 0.04, which I think is still SOTA and close enough, given that there is some randomness. Note, though, that I trained it from scratch.

Update: I get a 4.35 score with the pretrained model, though.

REFunction commented 5 years ago

@mkumar10 When you got the 4.35 score, did you test the birds dataset with 29,330 images? I got 4.20 ± 0.15 with the pretrained model on 2,930 bird images with a batch size of 10.

XiangChen1994 commented 5 years ago

> I did the same as above with 29,330 images and got 4.20 ± 0.04, which I think is still SOTA and close enough, given that there is some randomness. Note, though, that I trained it from scratch.
>
> Update: I get a 4.35 score with the pretrained model, though.

How do you get the 4.35 score? I have tried batch_size=7 and batch_size=10, but the results are not 4.35: batch_size=10 gives mean 4.17, std 0.10 (2,930 images); batch_size=7 gives mean 4.25, std 0.17 (2,933 images). Could you please tell me what's wrong with my evaluation? Thank you!

ShihuaHuang95 commented 5 years ago

@windforever118 I get the same score with the pre-trained model, too. I then re-trained AttnGAN on the birds dataset using the original code and got the same score as well. As @mkumar10 mentioned, there is some randomness.

yuchuangou commented 5 years ago

> Hi, I have generated 29,280 images for the birds dataset using your pre-trained model, but I got only a 2.57 ± 0.02 inception score. How can I reproduce the experiment results in the paper?

I got the same result, 2.57, on the CUB dataset using the pre-trained model. Did you solve that problem?

hywang66 commented 4 years ago

> Hi, I have generated 29,280 images for the birds dataset using your pre-trained model, but I got only a 2.57 ± 0.02 inception score. How can I reproduce the experiment results in the paper?
>
> I got the same result, 2.57, on the CUB dataset using the pre-trained model. Did you solve that problem?

Same problem. Do you have any ideas about what's wrong?

hywang66 commented 4 years ago

> Hi, I have generated 29,280 images for the birds dataset using your pre-trained model, but I got only a 2.57 ± 0.02 inception score. How can I reproduce the experiment results in the paper?
>
> I got the same result, 2.57, on the CUB dataset using the pre-trained model. Did you solve that problem?

I just figured it out! Please use the captions.pickle provided by the authors instead of the one you generated yourself. There is some randomness in building the word dictionary, which is stored in this file.
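The fix above amounts to loading the authors' pickle rather than regenerating it. A minimal sketch, assuming the four-element layout ([train_captions, test_captions, ixtoword, wordtoix]) that the repo's dataset code appears to write:

```python
import pickle

def load_metadata(path):
    # Assumed layout of AttnGAN's captions.pickle:
    #   [train_captions, test_captions, ixtoword, wordtoix]
    # The word<->index maps must be the ones the checkpoint was trained
    # with; a regenerated dictionary assigns different ids, so the
    # pretrained text encoder receives scrambled token ids and the
    # inception score collapses (e.g. the ~2.57 reported above).
    with open(path, "rb") as f:
        train_captions, test_captions, ixtoword, wordtoix = pickle.load(f)
    return train_captions, test_captions, ixtoword, wordtoix
```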

yuchuangou commented 4 years ago

> Hi, I have generated 29,280 images for the birds dataset using your pre-trained model, but I got only a 2.57 ± 0.02 inception score. How can I reproduce the experiment results in the paper?
>
> I got the same result, 2.57, on the CUB dataset using the pre-trained model. Did you solve that problem?
>
> I just figured it out! Please use the captions.pickle provided by the authors instead of the one you generated yourself. There is some randomness in building the word dictionary, which is stored in this file.

That's right. I got the right score by using the pickle data.

qizhongjian commented 4 years ago

> I did the same as above with 29,330 images and got 4.20 ± 0.04, which I think is still SOTA and close enough, given that there is some randomness. Note, though, that I trained it from scratch.
>
> Update: I get a 4.35 score with the pretrained model, though.

I hope you can tell me the batch size you used to get the correct inception score. Thank you.

qizhongjian commented 4 years ago

I used the captions.pickle provided by the authors to generate images on the COCO dataset, but I only got a 23.78 ± 0.14 inception score. Does anybody have an idea about what's wrong? @taoxugit Thank you. I am Chinese, so my English is not good; please bear with me.

LeoXing1996 commented 4 years ago

> Hi, I have generated 29,280 images for the birds dataset using your pre-trained model, but I got only a 2.57 ± 0.02 inception score. How can I reproduce the experiment results in the paper?
>
> I got the same result, 2.57, on the CUB dataset using the pre-trained model. Did you solve that problem?
>
> I just figured it out! Please use the captions.pickle provided by the authors instead of the one you generated yourself. There is some randomness in building the word dictionary, which is stored in this file.

Hey, could you explain "use the captions.pickle provided by the authors" in a little more detail? I trained and evaluated both my own model and the pretrained model with the captions.pickle provided in the preprocessed metadata folder, yet they get only 3.20 and 3.34 in evaluation.

priyankaupadhyay090 commented 2 years ago

> Hi, I want to reproduce the experiment results in the paper, but the inception score of the pretrained model on the birds dataset is 4.17. I compute the inception score using https://github.com/hanzhanggit/StackGAN-inception-model. I have tried both PyTorch 0.3 and 0.4; the inception score is still lower than the 4.36 reported in the AttnGAN paper.

@taoxugit Hey, I am trying to reproduce the inception score on the birds dataset, but the pre-trained inception model link from https://github.com/hanzhanggit/StackGAN-inception-model is not accessible. If you still have the model files, could you please share them with me? Thank you.

2nite2 commented 2 years ago

> Hi, I want to reproduce the experiment results in the paper, but the inception score of the pretrained model on the birds dataset is 4.17. I compute the inception score using https://github.com/hanzhanggit/StackGAN-inception-model. I have tried both PyTorch 0.3 and 0.4; the inception score is still lower than the 4.36 reported in the AttnGAN paper.
>
> @taoxugit Hey, I am trying to reproduce the inception score on the birds dataset, but the pre-trained inception model link from https://github.com/hanzhanggit/StackGAN-inception-model is not accessible. If you still have the model files, could you please share them with me? Thank you.

Hello, have you got the model files? Could you please share them? Thank you.

w791862948 commented 2 years ago

> I did the same as above with 29,330 images and got 4.20 ± 0.04, which I think is still SOTA and close enough, given that there is some randomness. Note, though, that I trained it from scratch. Update: I get a 4.35 score with the pretrained model, though.
>
> How do you get the 4.35 score? I have tried batch_size=7 and batch_size=10, but the results are not 4.35: batch_size=10 gives mean 4.17, std 0.10 (2,930 images); batch_size=7 gives mean 4.25, std 0.17 (2,933 images). Could you please tell me what's wrong with my evaluation? Thank you!

Hey, I am getting an error while running the inception score: IndexError: list index out of range. Could you help me? Thank you.

sabhiram6 commented 8 months ago

> Hi, I want to reproduce the experiment results in the paper, but the inception score of the pretrained model on the birds dataset is 4.17. I compute the inception score using https://github.com/hanzhanggit/StackGAN-inception-model. I have tried both PyTorch 0.3 and 0.4; the inception score is still lower than the 4.36 reported in the AttnGAN paper.

Hey, if possible, can you share the trained inception model? I am trying to calculate the inception score but am unable to access the trained inception model.

Thank you