rfelixmg / frwgan-eccv18

Code for the model presented in our paper accepted at the European Conference on Computer Vision 2018.

Why can H only reach 50%-51.5%? #3

Closed nijian1 closed 5 years ago

nijian1 commented 5 years ago

Hi,

Can I ask why H only reaches 50%~51.5% instead of 53%? I ran your code strictly as provided, without changing any parameters.

Best Wishes,

rfelixmg commented 5 years ago

Hi Nijian1,

I had to re-adapt the framework to post it on GitHub. I will double-check the hyperparameters and get back to you soon. Are you talking about CUB?

Cheers, Rafa

nijian1 commented 5 years ago

@rfelixmg Much appreciated!

Yeah, I am referring to the CUB dataset!

Moreover, I read your paper carefully; it is great and inspiring. However, I have a question: in your experiments you use the CUB dataset with 1024-dimensional semantic features instead of the commonly used 312-dimensional per-class attributes (att). Would this difference be unfair to comparison methods such as ESZSL, LATEM, SAE, etc.?

In the paper "Feature Generating Networks for Zero-Shot Learning",the f-clswan can reach 54% when class embeddings are per-class sentences(stc) on CUB.

Best Wishes,

rfelixmg commented 5 years ago

Dear @nijian1

I double-checked the values reported in the paper. This repository contains the training process for cycle-WGAN, reported right above the baseline in the paper (see Table 4). In the paper we report cycle-WGAN: {CUB, y(U): 46.0, y(S): 60.3, H: 52.2}. Hence, your result of 51.5% is close to the number reported in the paper; the difference may come from differences in computer architectures. By changing the seed value, you might achieve exactly the same accuracy.
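
For reference, H is the harmonic mean of the unseen and seen per-class accuracies, which is how the 52.2 above follows from y(U) and y(S). The snippet below is a minimal sketch, not code from this repository; the `harmonic_mean` helper and the seed lines are illustrative assumptions (the exact seed handling in this codebase may differ).

```python
import numpy as np

def harmonic_mean(acc_unseen, acc_seen):
    """Harmonic mean H of the unseen (y_U) and seen (y_S) per-class accuracies,
    as used in generalised zero-shot learning evaluation."""
    return 2.0 * acc_unseen * acc_seen / (acc_unseen + acc_seen)

# Values reported for cycle-WGAN on CUB (Table 4 of the paper).
print(harmonic_mean(46.0, 60.3))  # ~52.2

# Hypothetical illustration of fixing seeds so a run is repeatable on a given
# machine; variable/flag names here are assumptions, not repository code.
np.random.seed(0)
```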

Cheers, R.