zhangcheng-007 opened 4 years ago
I'm not clear on the meaning of "so the note should be canceled," but I think you're right.
The two notes mean:
Are these partially updated MAPs correct? If they are wrong, what should the correct results look like?
(1)  train_I2T_Test_MAP: 0.6845  train_T2I_Test_MAP: 0.6619
(2)  train_I2T_Test_MAP: 0.6857  train_T2I_Test_MAP: 0.6666
(3)  train_I2T_Test_MAP: 0.6836  train_T2I_Test_MAP: 0.6595
(4)  train_I2T_Test_MAP: 0.6882  train_T2I_Test_MAP: 0.6341
(5)  train_I2T_Test_MAP: 0.6888  train_T2I_Test_MAP: 0.6294
(6)  train_I2T_Test_MAP: 0.6906  train_T2I_Test_MAP: 0.6315
(7)  train_I2T_Test_MAP: 0.6939  train_T2I_Test_MAP: 0.6214
(8)  train_I2T_Test_MAP: 0.6935  train_T2I_Test_MAP: 0.5920
(9)  train_I2T_Test_MAP: 0.6920  train_T2I_Test_MAP: 0.6084
(10) train_I2T_Test_MAP: 0.6906  train_T2I_Test_MAP: 0.6060
(11) train_I2T_Test_MAP: 0.6883  train_T2I_Test_MAP: 0.6005
(12) train_I2T_Test_MAP: 0.6856  train_T2I_Test_MAP: 0.5934
(13) train_I2T_Test_MAP: 0.6835  train_T2I_Test_MAP: 0.5804
(14) train_I2T_Test_MAP: 0.6825  train_T2I_Test_MAP: 0.5772
(15) train_I2T_Test_MAP: 0.6801  train_T2I_Test_MAP: 0.5795
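For reference, the train_I2T_Test_MAP / train_T2I_Test_MAP numbers above are mean average precision over a retrieval ranking by Hamming distance between binary codes. A minimal NumPy sketch of that metric (the function name and +/-1 code convention are my assumptions, not necessarily what this repo uses):

```python
import numpy as np

def mean_average_precision(query_codes, db_codes, query_labels, db_labels):
    """MAP for hash-based retrieval, ranking the database by Hamming distance.

    query_codes, db_codes: matrices of +/-1 binary codes (hypothetical convention).
    query_labels, db_labels: one-hot / multi-hot label matrices; a database item
    is relevant if it shares at least one label with the query.
    """
    aps = []
    for i in range(query_codes.shape[0]):
        # For +/-1 codes, Hamming distance = 0.5 * (bits - dot product)
        dist = 0.5 * (query_codes.shape[1] - query_codes[i] @ db_codes.T)
        order = np.argsort(dist)
        relevant = (db_labels[order] @ query_labels[i]) > 0
        if relevant.sum() == 0:
            continue  # no relevant items for this query; skip it
        ranks = np.arange(1, len(relevant) + 1)
        precision_at_rank = np.cumsum(relevant) / ranks
        aps.append(precision_at_rank[relevant].mean())  # average over hit positions
    return float(np.mean(aps))
```

For I2T the queries are image codes and the database holds text codes; for T2I the roles are swapped.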
For the experiment with 128-bit hash codes, I think these MAP values are a little low. The ideal result is: map_i2t = 0.706, map_t2i = 0.707. By the way, if you want to run the teacher model faster, you can remove the generator and use only the discriminator (just like in the pretrain file). You will also need to change the corresponding parameters, such as SELECTNUM and D_EPOCH.
First: in the pretrain file, the calculated MAP should be based on test_txt, so the note should be canceled. Second: in the teacher train file, the updated discriminator model should be saved, so that note should also be canceled. Is that right? I ask because I run the training in the cloud, and the speed is a little slow.
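Since the T2I MAP in the list above peaks early and then degrades, it may also help to save the discriminator only when the validation MAP improves, instead of overwriting it every epoch. A framework-agnostic sketch (the eval_map and save_fn callbacks are hypothetical placeholders for the repo's own evaluation and checkpoint code):

```python
def train_with_best_checkpoint(epochs, eval_map, save_fn):
    """Run `epochs` evaluation rounds; save only when validation MAP improves.

    eval_map(epoch) -> float: validation T2I MAP after that epoch (hypothetical).
    save_fn(epoch): persists the current discriminator weights (hypothetical).
    """
    best_map, best_epoch = -1.0, None
    for epoch in range(1, epochs + 1):
        current = eval_map(epoch)
        if current > best_map:
            best_map, best_epoch = current, epoch
            save_fn(epoch)  # overwrite the checkpoint with the improved model
    return best_epoch, best_map
```

With the T2I numbers listed above, this would keep the epoch-2 model (0.6666) rather than the degraded later ones.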