huhengtong / UKD_CVPR2020

The source code for the CVPR2020 paper "Creating Something from Nothing: Unsupervised Knowledge Distillation for Cross-Modal Hashing".

Two little problems #3

Open zhangcheng-007 opened 4 years ago

zhangcheng-007 commented 4 years ago

First: In the pretrain file, the calculated MAP should be based on test_txt, so the note should be canceled. Second: In the teacher train file, the updated discriminator model should be saved, so the note should also be canceled. Is that right? Because I use the cloud to run deep learning, the speed is a little slow.

huhengtong commented 4 years ago

I'm not clear on the meaning of "so the note should be canceled", but I think you are right.

zhangcheng-007 commented 4 years ago

The two notes (commented-out pieces of code) are:

1. In the MAP calculation:

```python
train_img_hash = compute_hashing(sess, model, train_img, 'image')
test_img_hash = compute_hashing(sess, model, test_img, 'image')
train_txt_hash = compute_hashing(sess, model, train_txt, 'text')
test_txt_hash = compute_hashing(sess, model, test_txt, 'text')
```

2. In teacher_train.py, saving the discriminator model:

```python
average_map = 0.5 * (i2t_test_map + t2i_test_map)
if average_map > map_best_val_dis:
    map_best_val_dis = average_map
    discriminator.save_model(sess, DIS_MODEL_BEST_FILE)
```

I re-enabled these two pieces of code. Also, I don't know how to download the 'moxing' module that Huawei provides.
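(For reference: `moxing` is the SDK of Huawei's ModelArts platform and generally cannot be pip-installed on a local machine. A common workaround is to stub out its file-copy calls. Below is a minimal sketch, assuming the code only uses `moxing` for `mox.file.copy_parallel`-style data transfer; the exact calls in this repo may differ.)

```python
# Hypothetical shim for running outside Huawei ModelArts.
# Assumption: moxing is only used to copy data between remote storage and
# local disk; adjust if other moxing APIs are used.
import os
import shutil

try:
    import moxing as mox  # available only inside ModelArts
except ImportError:
    class _FileShim:
        @staticmethod
        def copy_parallel(src, dst):
            # Locally the data is already on disk, so a plain copy suffices.
            if os.path.isdir(src):
                if not os.path.exists(dst):
                    shutil.copytree(src, dst)
            else:
                shutil.copy(src, dst)

    class _MoxShim:
        file = _FileShim()

    mox = _MoxShim()
```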
zhangcheng-007 commented 4 years ago

Are these intermediate MAP results during training correct? If they are wrong, what should the correct results look like?

| #  | train_I2T_Test_MAP | train_T2I_Test_MAP |
|----|--------------------|--------------------|
| 1  | 0.6845 | 0.6619 |
| 2  | 0.6857 | 0.6666 |
| 3  | 0.6836 | 0.6595 |
| 4  | 0.6882 | 0.6341 |
| 5  | 0.6888 | 0.6294 |
| 6  | 0.6906 | 0.6315 |
| 7  | 0.6939 | 0.6214 |
| 8  | 0.6935 | 0.5920 |
| 9  | 0.6920 | 0.6084 |
| 10 | 0.6906 | 0.6060 |
| 11 | 0.6883 | 0.6005 |
| 12 | 0.6856 | 0.5934 |
| 13 | 0.6835 | 0.5804 |
| 14 | 0.6825 | 0.5772 |
| 15 | 0.6801 | 0.5795 |
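(In case it helps with sanity-checking these numbers, here is a minimal, self-contained sketch of how cross-modal MAP is commonly computed from ±1 hash codes: Hamming-distance ranking followed by average precision over label matches. The function name and arguments are illustrative and not taken from this repo.)

```python
import numpy as np

def mean_average_precision(query_hash, retrieval_hash,
                           query_labels, retrieval_labels, top_k=None):
    """MAP for cross-modal retrieval: query codes from one modality,
    retrieval codes from the other. Hash codes are assumed to be in {-1, +1}."""
    aps = []
    for i in range(query_hash.shape[0]):
        # Items sharing at least one label with the query count as relevant.
        relevant = (query_labels[i] @ retrieval_labels.T > 0).astype(np.float32)
        if relevant.sum() == 0:
            continue
        # Hamming distance via inner product of +-1 codes.
        hamming = 0.5 * (query_hash.shape[1] - query_hash[i] @ retrieval_hash.T)
        relevant = relevant[np.argsort(hamming)]
        if top_k is not None:
            relevant = relevant[:top_k]
        hits = np.cumsum(relevant)
        # Precision at each position where a relevant item is retrieved.
        precision_at_hit = hits[relevant > 0] / (np.where(relevant > 0)[0] + 1)
        if precision_at_hit.size > 0:
            aps.append(precision_at_hit.mean())
    return float(np.mean(aps))
```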

huhengtong commented 4 years ago

For the experiment with 128-bit hash codes, I think these MAP values are a little low. The ideal result is: map_i2t = 0.706, map_t2i = 0.707. By the way, if you want to run the teacher model faster, you can remove the generator and use only the discriminator (just like the pretrain file). You should also change the corresponding parameters, such as SELECTNUM and D_EPOCH.
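(Just to illustrate that suggestion, here is a rough sketch, not the authors' code: the outer loop drops the generator/adversarial steps, only updates the discriminator, and keeps the best checkpoint exactly as in the snippet quoted above. Helper names such as `sample_training_pairs`, `train_discriminator_step`, and `evaluate_map` are hypothetical placeholders for whatever the pretrain file actually uses.)

```python
# Rough sketch of a discriminator-only teacher loop (no generator).
# SELECTNUM and D_EPOCH are the repo's parameters mentioned above;
# the helper functions are hypothetical stand-ins.
map_best_val_dis = 0.0
for epoch in range(D_EPOCH):
    for batch in sample_training_pairs(train_img, train_txt, SELECTNUM):
        # One supervised discriminator update, as in the pretrain file.
        train_discriminator_step(sess, discriminator, batch)

    # Evaluate cross-modal retrieval on the test split.
    i2t_test_map = evaluate_map(sess, discriminator, query='image', retrieval='text')
    t2i_test_map = evaluate_map(sess, discriminator, query='text', retrieval='image')
    average_map = 0.5 * (i2t_test_map + t2i_test_map)

    # Keep the best checkpoint, mirroring the snippet quoted earlier in this thread.
    if average_map > map_best_val_dis:
        map_best_val_dis = average_map
        discriminator.save_model(sess, DIS_MODEL_BEST_FILE)
```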