I have finished training both stages on CUB and can generate samples at test time.
During testing, I also use the .pickle file extracted from the char-CNN-RNN text embeddings of CUB as the embedding, but I cannot match the captions to the generated images. I simply use the corresponding descriptions from `self.captions` (loaded via `self.captions = self.load_all_captions()`), but they do not match.
How can I get the correct caption for each generated image?
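For reference, this is the kind of alignment I expected to work: a minimal sketch assuming the embeddings, filenames, and captions are all saved in the same order, so a single index addresses all three (the toy data below is hypothetical; in the real code the lists come from the preprocessed pickles).

```python
# Toy stand-ins for the real pickles. Assumption: the embeddings
# pickle, the filenames list, and the per-file captions were written
# in the same order, so index i refers to the same image everywhere.
filenames = ["001.Black_footed_Albatross/img1", "002.Laysan_Albatross/img2"]
embeddings = [[0.1] * 4, [0.2] * 4]  # one text embedding per image
captions = {fn: ["a bird with ..."] for fn in filenames}  # filename -> captions

def caption_for(i):
    """Return the caption(s) belonging to the i-th generated sample."""
    return captions[filenames[i]]
```

If the generator shuffles or re-batches the embeddings without carrying the indices along, this one-to-one mapping breaks, which is the mismatch I seem to be seeing.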
Much appreciated!