-
Hello, could you explain how the NUS-WIDE dataset was cleaned? After filtering, my result does not match the dataset size of 186,557 reported in the paper.
-
Hi,
Thanks for your brilliant work reproducing these methods. But when I follow your README and train a 64-bit DCMH on MIRFLICKR-25K, the best result is 5 points lower than the paper's (for i->t, the…
-
![image](https://github.com/WangGodder/deep-cross-modal-hashing/assets/67832925/bb47c724-7f8d-4167-b8db-860140b16421)
I downloaded the code and ran it without changing the configuration — why is there such a large gap between my mAP values and those in the paper?
-
![P-R_curve](https://user-images.githubusercontent.com/27770541/57651216-5304a880-7592-11e9-8947-891e066be541.png)
I have plotted two Precision-Recall curves on the result of Flickr-25K. Some setting…
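For anyone comparing Precision-Recall curves like the ones above, here is a minimal sketch of one common convention for hashing retrieval: sweep the Hamming radius and average precision/recall over queries. The function name and the radius-sweep convention are my own assumptions, not the repo's API (another common convention sweeps the number of retrieved items instead).

```python
import numpy as np

def pr_curve(query_codes, retrieval_codes, query_labels, retrieval_labels):
    """Precision and recall at each Hamming radius r = 0..nbits,
    averaged over queries. Codes are +/-1 matrices; labels are
    multi-hot, and two items count as relevant if they share a label."""
    nbits = query_codes.shape[1]
    # Hamming distance between +/-1 codes via inner product
    dist = 0.5 * (nbits - query_codes @ retrieval_codes.T)
    rel = (query_labels @ retrieval_labels.T > 0).astype(float)
    precisions, recalls = [], []
    for r in range(nbits + 1):
        retrieved = (dist <= r).astype(float)          # items within radius r
        tp = (retrieved * rel).sum(axis=1)             # relevant retrieved
        p = tp / np.maximum(retrieved.sum(axis=1), 1)  # guard empty retrieval
        rc = tp / np.maximum(rel.sum(axis=1), 1)       # guard no relevant items
        precisions.append(p.mean())
        recalls.append(rc.mean())
    return np.array(precisions), np.array(recalls)
```

Differences between two implementations' curves often come down to which of these two conventions is used, so it is worth checking before comparing plots.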
-
Hi,
I also re-implemented DCMH myself, and it reproduces the reported performance on Flickr.
But it fails to reproduce the performance on NUS-WIDE, sticking at about `0.32` and `0.34` mAP.
Can you rep…
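Since several issues here hinge on mAP numbers, a minimal sketch of how mAP over a Hamming ranking is commonly computed may help rule out metric differences. The function name and the `topk` parameter are my own illustrations, not the repo's API; papers sometimes report mAP@topk rather than full mAP, which alone can account for a few points.

```python
import numpy as np

def mean_average_precision(query_codes, retrieval_codes,
                           query_labels, retrieval_labels, topk=None):
    """mAP over a Hamming-distance ranking. Codes are +/-1 matrices;
    labels are multi-hot, and two items count as relevant if they
    share at least one label."""
    nbits = query_codes.shape[1]
    aps = []
    for i in range(query_codes.shape[0]):
        # Hamming distance between +/-1 codes via inner product
        dist = 0.5 * (nbits - query_codes[i] @ retrieval_codes.T)
        order = np.argsort(dist)
        relevant = (query_labels[i] @ retrieval_labels[order].T > 0).astype(float)
        if topk is not None:
            relevant = relevant[:topk]
        num_rel = relevant.sum()
        if num_rel == 0:
            continue  # skip queries with no relevant items
        # precision at each rank, counted only at positions of hits
        precision = np.cumsum(relevant) / np.arange(1, len(relevant) + 1)
        aps.append((precision * relevant).sum() / num_rel)
    return float(np.mean(aps)) if aps else 0.0
```

When comparing against a paper, check both the `topk` cutoff and whether queries with no relevant items are skipped or counted as zero, since both choices shift the final number.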
-
I recently downloaded your code and ran DCMH, but my results are similar to the post below: on both datasets they are about 10% worse than the results you posted, which is clearly not right, but I can't find the reason; changing the batch size to 128 didn't help either. Is there anything I should pay attention to? The objective function looks correct to me, and during training the mAP basically stops improving after epoch 150. I hope to hear back from you.
Also, your code is really well written — it's rare to see a research codebase with such an engineering-quality structure; it's a pleasure to read.
-
Hi, I have run your code for **CMHH** on Flickr with 64-bit hash codes, using the data & CNN-F weights you provided, and it's stuck at:
- `max MAP: MAP(i->t): 0.7970, MAP(t->i): 0.8032, MAP(i->i): 0.7194, MAP(t…