jiunbae / Lab

Lab experiments and issue management

[Task] NIPS Paper (with the Ph.D. researcher) #36

Closed by jiunbae 5 years ago

jiunbae commented 5 years ago

Unsupervised Deep Generative Adversarial Hashing Network

Abstract: Proposes a new deep unsupervised hashing function, called HashGAN, which efficiently obtains binary representations of input images without any supervised pretraining; it consists of a generator, a discriminator, and an encoder.

References

They require costly human-annotated labels to train their large set of parameters.

Supervised hash functions: 16, 30, 33, 52

  1. W. Liu, J. Wang, R. Ji, Y.-G. Jiang, and S.-F. Chang. Supervised hashing with kernels. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2074–2081. IEEE, 2012.

  2. W.-J. Li, S. Wang, and W.-C. Kang. Feature learning based deep supervised hashing with pairwise labels. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 1711–1717. AAAI Press, 2016.

  3. J. Guo, S. Zhang, and J. Li. Hash learning with convolutional neural networks for semantic based image retrieval. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, pages 227–238. Springer, 2016.

  4. R. Xia, Y. Pan, H. Lai, C. Liu, and S. Yan. Supervised hashing for image retrieval via image representation learning. In AAAI, volume 1, pages 2156–2162, 2014.

Deep hash functions, supervised: 8, 27, 55, 59

  1. T.-T. Do, A.-D. Doan, and N.-M. Cheung. Learning to hash with binary deep neural network. In European Conference on Computer Vision (ECCV), pages 219–234. Springer, 2016.

  2. H. Lai, Y. Pan, Y. Liu, and S. Yan. Simultaneous feature learning and hash coding with deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3270–3278, 2015.

  3. H.-F. Yang, K. Lin, and C.-S. Chen. Supervised learning of semantics-preserving hash via deep convolutional neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.

  4. H. Zhu, M. Long, J. Wang, and Y. Cao. Deep hashing network for efficient similarity retrieval. In AAAI, pages 2415–2421, 2016.

Unsupervised hash functions: 18, 19, 46, 50

  1. K. He, F. Wen, and J. Sun. K-means hashing: An affinity-preserving quantization method for learning binary compact codes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2938–2945, 2013.

  2. J.-P. Heo, Y. Lee, J. He, S.-F. Chang, and S.-E. Yoon. Spherical hashing. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2957–2964. IEEE, 2012.

  3. J. Wang, S. Kumar, and S.-F. Chang. Semi-supervised hashing for scalable image retrieval. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 3424–3431. IEEE, 2010.

  4. Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In Advances in neural information processing systems (NIPS), pages 1753–1760, 2009.

Unsupervised hashing methods use hand-crafted features: 2, 3, 29, 39

  1. A. Alahi, R. Ortiz, and P. Vandergheynst. Freak: Fast retina keypoint. In IEEE conference on Computer vision and pattern recognition (CVPR), pages 510–517. IEEE, 2012.

  2. M. Calonder, V. Lepetit, C. Strecha, and P. Fua. Brief: Binary robust independent elementary features. European conference on Computer Vision (ECCV), pages 778–792, 2010.

  3. S. Leutenegger, M. Chli, and R. Y. Siegwart. Brisk: Binary robust invariant scalable keypoints. In IEEE International Conference on Computer Vision (ICCV), pages 2548–2555. IEEE, 2011.

  4. E. Rublee, V. Rabaud, K. Konolige, and G. Bradski. Orb: An efficient alternative to sift or surf. In IEEE international conference on Computer Vision (ICCV), pages 2564–2571. IEEE, 2011.

Reference trace

Image similarity search in big datasets has gained tremendous attention in applications such as information retrieval, data mining, and pattern recognition.

  1. J. Wang, H. T. Shen, J. Song, and J. Ji. Hashing for similarity search: A survey. arXiv preprint arXiv:1408.2927, 2014.

Hashing functions assign a binary code to each image, consequently reducing similarity search to calculating the Hamming distance (a minimal sketch follows the references below).

  1. Y. Gong and S. Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. In Proc. of the IEEE Int. Conf. on Computer Vision and Pattern Recognition (CVPR), 2011.

  2. Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In Advances in neural information processing systems (NIPS), pages 1753–1760, 2009.

  3. W. Liu, J. Wang, R. Ji, Y.-G. Jiang, and S.-F. Chang. Supervised hashing with kernels. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2074–2081. IEEE, 2012.

  4. W.-J. Li, S. Wang, and W.-C. Kang. Feature learning based deep supervised hashing with pairwise labels. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 1711–1717. AAAI Press, 2016.
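
The point of binary codes is that retrieval reduces to cheap Hamming comparisons; a tiny numpy illustration (not taken from any of the cited papers):

```python
import numpy as np

def hamming_distance(codes, query):
    """Hamming distances between one query code and a database of binary codes.

    codes: (N, B) array of 0/1 bits; query: (B,) array of 0/1 bits.
    """
    return np.count_nonzero(codes != query, axis=1)

# toy example: rank 4 database codes of 8 bits each against a query
db = np.random.randint(0, 2, size=(4, 8))
q = np.random.randint(0, 2, size=8)
print(np.argsort(hamming_distance(db, q)))  # nearest codes first
```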

Hash functions are designed to extract distinctive patterns from images relevant to their semantic categories.

  1. K. Lin, J. Lu, C.-S. Chen, and J. Zhou. Learning compact binary descriptors with unsupervised deep neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1183–1192, 2016.

  2. H.-F. Yang, K. Lin, and C.-S. Chen. Supervised learning of semantics-preserving hash via deep convolutional neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.

  3. S. Huang, Y. Xiong, Y. Zhang, and J. Wang. Unsupervised triplet hashing for fast image retrieval. arXiv preprint arXiv:1702.08798, 2017.

jiunbae commented 5 years ago

Learning Compact Binary Descriptors with Unsupervised Deep Neural Networks

Abstract: An unsupervised deep learning approach called DeepBit learns compact binary descriptors for efficient visual object matching. It enforces three criteria on the binary codes: minimal quantization loss, evenly distributed codes, and uncorrelated bits.
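
One way to read the three criteria is as penalty terms on relaxed codes in [0, 1]; this is a hedged sketch of my own, and the exact losses and weights in the DeepBit paper may differ:

```python
import numpy as np

def deepbit_style_penalties(h):
    """h: (N, B) relaxed codes in [0, 1]; b: binarized codes."""
    b = (h > 0.5).astype(np.float64)
    # 1) minimal quantization loss: relaxed codes should sit near 0 or 1
    quant = np.mean((h - b) ** 2)
    # 2) evenly distributed codes: each bit should fire about half the time
    balance = np.mean((h.mean(axis=0) - 0.5) ** 2)
    # 3) uncorrelated bits: off-diagonal correlations should vanish
    c = np.corrcoef(h, rowvar=False)
    decorr = np.mean((c - np.eye(c.shape[0])) ** 2)
    return quant, balance, decorr

print(deepbit_style_penalties(np.random.rand(100, 32)))
```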

References

To reduce computational complexity, several lightweight binary descriptors have recently been proposed, such as the following (see the OpenCV usage sketch after the list):

  1. M. Calonder, V. Lepetit, C. Strecha, and P. Fua. Brief: Binary robust independent elementary features. In Proc. ECCV, 2010.

  2. E. Rublee, V. Rabaud, K. Konolige, and G. Bradski. Orb: an efficient alternative to sift or surf. In Proc. ICCV, 2011.

  3. S. Leutenegger, M. Chli, and R. Y. Siegwart. Brisk: Binary robust invariant scalable keypoints. In Proc. ICCV, 2011.

  4. A. Alahi, R. Ortiz, and P. Vandergheynst. Freak: Fast retina keypoint. In Proc. CVPR, 2012.
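
For hands-on reference, ORB and BRISK from the list above ship with OpenCV; below is a minimal usage sketch using standard OpenCV calls (`cv2.ORB_create`, `detectAndCompute`, `cv2.BFMatcher` with `NORM_HAMMING`). The image file names are hypothetical:

```python
import cv2

img1 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input images
img2 = cv2.imread("img2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)  # 256-bit binary descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

# binary descriptors are matched with Hamming distance
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches))
```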

supervised approaches: 3, 9, 38, 39, 41, 50, 53

  1. V. Balntas, L. Tang, and K. Mikolajczyk. Bold-binary online learned descriptor for efficient image matching. In Proc. CVPR, 2015.

  2. B. Fan, Q. Kong, T. Trzcinski, Z. Wang, C. Pan, and P. Fua. Receptive fields selection for binary feature description. IEEE Trans. Image Proc., 23(6):2583–2595, 2014.

  3. C. Strecha, A. M. Bronstein, M. M. Bronstein, and P. Fua. Ldahash: Improved matching with smaller descriptors. IEEE Trans. Pattern Anal. Mach. Intell., 34(1):66–78, 2012.

  4. T. Trzcinski, M. Christoudias, P. Fua, and V. Lepetit. Boosting binary keypoint descriptors. In Proc. CVPR, 2013.

  5. T. Trzcinski and V. Lepetit. Efficient discriminative projections for compact binary descriptors. In Proc. ECCV, 2012.

  6. X. Yang and K.-T. Cheng. Ldb: An ultra-fast feature for scalable augmented reality on mobile devices. In Proc. ISMAR, 2012

  7. S. Zhang, Q. Tian, Q. Huang, W. Gao, and Y. Rui. Usb: ultrashort binary descriptor for fast visual matching and retrieval. IEEE Trans. Image Proc., 23(8):3671–3683, 2014.

Encodes the desired similarity relationships and learns a projection matrix to compute discriminative binary features.

  1. T. Trzcinski and V. Lepetit. Efficient discriminative projections for compact binary descriptors. In Proc. ECCV, 2012.

Local Difference Binary (LDB) applies AdaBoost to select optimal sampling pairs: 50, 51

  1. X. Yang and K.-T. Cheng. Ldb: An ultra-fast feature for scalable augmented reality on mobile devices. In Proc. ISMAR, 2012

  2. X. Yang and K.-T. Cheng. Local difference binary for ultrafast and distinctive feature description. IEEE Trans. Pattern Anal. Mach. Intell., 36(1):188–194, 2014.

Linear Discriminant Analysis (LDA) is also applied to learn binary descriptors: 14, 38

  1. Y. Gong, S. Lazebnik, A. Gordo, and F. Perronnin. Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval. IEEE Trans. Pattern Anal. Mach. Intell., 35(12):2916–2929, 2013.

  2. C. Strecha, A. M. Bronstein, M. M. Bronstein, and P. Fua. Ldahash: Improved matching with smaller descriptors. IEEE Trans. Pattern Anal. Mach. Intell., 34(1):66–78, 2012.

The recently proposed BinBoost learns a set of projection matrices using a boosting algorithm. Their success is mainly attributed to pairwise learning with similarity labels, which is unfavorable when transferring the binary descriptor to a new task: 39, 40

  1. T. Trzcinski, M. Christoudias, P. Fua, and V. Lepetit. Boosting binary keypoint descriptors. In Proc. CVPR, 2013.
  2. T. Trzcinski, M. Christoudias, and V. Lepetit. Learning image descriptors with boosting. IEEE Trans. Pattern Anal. Mach. Intell., 37(3):597–610, 2015.

Unsupervised hashing algorithms learn compact binary descriptors whose distance is correlated to the similarity relationship of the original input data.: 2, 14, 34, 46

  1. A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. In Proc. FOCS, 2006.

  2. Y. Gong, S. Lazebnik, A. Gordo, and F. Perronnin. Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval. IEEE Trans. Pattern Anal. Mach. Intell., 35(12):2916–2929, 2013.

  3. R. Salakhutdinov and G. E. Hinton. Semantic hashing. Int. J. Approx. Reasoning, 50(7):969–978, 2009.

  4. Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In Proc. NIPS, 2008.

Locality Sensitive Hashing (LSH) applies random projections to map the original data into a low-dimensional feature space and then binarizes the projections (see the sketch after the reference).

  1. A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. In Proc. FOCS, 2006.
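
A minimal numpy sketch of the random-projection scheme described above: project onto random Gaussian hyperplanes and keep the sign. The 32-bit width and Gaussian projections are illustrative choices, not taken from the cited paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_hash(x, projections):
    """x: (N, D) data; projections: (D, B) random Gaussian matrix -> (N, B) bits."""
    return (x @ projections > 0).astype(np.uint8)

data = rng.standard_normal((1000, 128))  # e.g. 128-d descriptors
proj = rng.standard_normal((128, 32))    # 32 random hyperplanes
codes = lsh_hash(data, proj)
print(codes.shape)  # (1000, 32)
```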

Semantic hashing (SH) builds multi-layer Restricted Boltzmann Machines (RBMs) to learn compact binary codes for text and documents.

  1. R. Salakhutdinov and G. E. Hinton. Semantic hashing. Int. J. Approx. Reasoning, 50(7):969–978, 2009.

Spectral hashing (SpeH) generates efficient binary codes by spectral graph partitioning.

  1. Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In Proc. NIPS, 2008.

Iterative quantization (ITQ) uses an iterative optimization strategy to find projections with minimal binarization loss (a short sketch follows the reference).

  1. Y. Gong, S. Lazebnik, A. Gordo, and F. Perronnin. Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval. IEEE Trans. Pattern Anal. Mach. Intell., 35(12):2916–2929, 2013.
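
A compact numpy sketch of the ITQ alternation: binarize with the current rotation, then solve an orthogonal Procrustes problem to update the rotation. This is my reading of the cited paper; the PCA preprocessing and stopping criterion are omitted, and the 50-iteration count is an arbitrary choice:

```python
import numpy as np

def itq_rotation(v, n_iter=50, seed=0):
    """v: (N, C) zero-centered, PCA-projected data; returns rotation R and codes B."""
    rng = np.random.default_rng(seed)
    # random orthogonal initialization of the rotation
    r, _ = np.linalg.qr(rng.standard_normal((v.shape[1], v.shape[1])))
    for _ in range(n_iter):
        b = np.sign(v @ r)                 # fix R, update the binary codes
        u, _, wt = np.linalg.svd(v.T @ b)  # fix B, solve Procrustes for R
        r = u @ wt
    return r, np.sign(v @ r)

v = np.random.randn(500, 16)
v -= v.mean(axis=0)
r, codes = itq_rotation(v)
```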

Deep CNNs are used to learn a set of hash functions, but they require pairwise similarity labels or triplet training data.

  1. R. Xia, Y. Pan, H. Lai, C. Liu, and S. Yan. Supervised hashing for image retrieval via image representation learning. In Proc. AAAI, 2014.

  2. H. Lai, Y. Pan, Y. Liu, and S. Yan. Simultaneous feature learning and hash coding with deep neural networks. In Proc. CVPR, 2015.

Constructs hash functions as a latent layer in a deep CNN, but their method belongs to supervised learning.

  1. H.-F. Yang, K. Lin, and C.-S. Chen. Supervised learning of semantics-preserving hashing via deep neural networks for large-scale image search. arXiv preprint arXiv:1507.00101, 2015.

Builds a three-layer hierarchical neural network to learn a discriminative projection matrix, but their method does not take advantage of deep transfer learning, which makes the binary codes less effective.

  1. V. E. Liong, J. Lu, G. Wang, P. Moulin, and J. Zhou. Deep hashing for compact binary codes learning. In Proc. CVPR, 2015.
jiunbae commented 5 years ago

Unsupervised Triplet Hashing for Fast Image Retrieval

Abstract: CNN-based unsupervised hashing designed under three principles (a rough triplet-loss sketch follows the list):

  1. discriminative representations for image retrieval
  2. minimum quantization loss between the original feature descriptors and the learned hash codes
  3. maximum information entropy for the learned hash codes
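
As a concrete reading of principle 1, here is a minimal numpy sketch of a label-free triplet margin loss on relaxed hash features. The triplet construction (an augmented view as the positive, a random other image as the negative) is an assumption for illustration, not necessarily the paper's recipe:

```python
import numpy as np

def triplet_hash_loss(anchor, positive, negative, margin=1.0):
    """Relaxed hash features (N, B); pull anchor toward positive, push from negative."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.mean(np.maximum(0.0, d_pos - d_neg + margin))

rng = np.random.default_rng(1)
a = rng.random((8, 32))
p = a + 0.05 * rng.standard_normal((8, 32))  # assumed positive: an augmented view
n = rng.random((8, 32))                      # assumed negative: a random other image
print(triplet_hash_loss(a, p, n))
```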

References

CNN-based hashing methods: 4, 5, 6, 7, 8, 9, 10

  1. Kevin Lin, Huei-Fang Yang, Jen-Hao Hsiao, and ChuSong Chen, “Deep learning of binary hash codes for fast image retrieval,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2015, pp. 27–35.
  2. Rongkai Xia, Yan Pan, Hanjiang Lai, Cong Liu, and Shuicheng Yan, “Supervised hashing for image retrieval via image representation learning.,” in AAAI, 2014, vol. 1, p. 2.
  3. Jie Lin, Olivier Morere, Vijay Chandrasekhar, Antoine Veillard, and Hanlin Goh, “Deephash: Getting regularization, depth and fine-tuning right,” arXiv preprint arXiv:1501.04711, 2015.
  4. Jun Wang, Sanjiv Kumar, and Shih-Fu Chang, “Semisupervised hashing for large-scale search,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 12, pp. 2393–2406, 2012.
  5. Ruslan Salakhutdinov and Geoffrey Hinton, “Semantic hashing,” International Journal of Approximate Reasoning, vol. 50, no. 7, pp. 969–978, 2009.
  6. Jie Lin, Olivier Morere, Julie Petta, Vijay Chandrasekhar, and Antoine Veillard, “Tiny descriptors for image retrieval with unsupervised triplet hashing,” arXiv preprint arXiv:1511.03055, 2015.
  7. Kevin Lin, Jiwen Lu, Chu-Song Chen, and Jie Zhou, “Learning compact binary descriptors with unsupervised deep neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1183–1192.

stacked Restricted Boltzmann Machines (RBMs) to encode binary codes

  1. Ruslan Salakhutdinov and Geoffrey Hinton, “Semantic hashing,” International Journal of Approximate Reasoning, vol. 50, no. 7, pp. 969–978, 2009.
  2. Jie Lin, Olivier Morere, Julie Petta, Vijay Chandrasekhar, and Antoine Veillard, “Tiny descriptors for image retrieval with unsupervised triplet hashing,” arXiv preprint arXiv:1511.03055, 2015.

replace rotation invariance loss in DeepBit

  1. Kevin Lin, Jiwen Lu, Chu-Song Chen, and Jie Zhou, “Learning compact binary descriptors with unsupervised deep neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1183–1192.

Supervised hashing: 4, 5, 6, 11, 12

  1. Adds a hidden layer to learn binary hash codes with a softmax layer on top. Kevin Lin, Huei-Fang Yang, Jen-Hao Hsiao, and ChuSong Chen, “Deep learning of binary hash codes for fast image retrieval,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2015, pp. 27–35.

  2. First learns binary codes for all training data, then learns hash functions on the basis of the learned codes. Rongkai Xia, Yan Pan, Hanjiang Lai, Cong Liu, and Shuicheng Yan, “Supervised hashing for image retrieval via image representation learning,” in AAAI, 2014, vol. 1, p. 2.

  3. Introduces a hashing scheme based on stacked RBMs and a Siamese network: the stacked RBMs learn the initial parameters, which are then fine-tuned through a Siamese network. Jie Lin, Olivier Morere, Vijay Chandrasekhar, Antoine Veillard, and Hanlin Goh, “Deephash: Getting regularization, depth and fine-tuning right,” arXiv preprint arXiv:1501.04711, 2015.

  4. Uses a triplet loss function to minimize the Hamming distance between neighbor pairs while preserving relative similarity for non-neighbor pairs with a relaxed empirical penalty. Viet-Anh Nguyen and Minh N Do, “Deep learning based supervised hashing for efficient image retrieval,” in Multimedia and Expo (ICME), 2016 IEEE International Conference on. IEEE, 2016, pp. 1–6.

  5. Uses a divide-and-encode module to split intermediate image features into multiple branches, each encoded into one hash bit, and fine-tunes with a triplet loss. Hanjiang Lai, Yan Pan, Ye Liu, and Shuicheng Yan, “Simultaneous feature learning and hash coding with deep neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3270–3278.

Semi-Supervised hashing: 7

  1. Learns hash codes by minimizing empirical error on the labeled data while maximizing the variance and independence of hash codes over the labeled and unlabeled data. Jun Wang, Sanjiv Kumar, and Shih-Fu Chang, “Semisupervised hashing for large-scale search,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 12, pp. 2393–2406, 2012.

Unsupervised hashing: 1, 2, 3, 8, 9, 10

  1. Hand-crafted image features; proposes locality-sensitive hashing, which uses random projections to construct hash functions. Sariel Har-Peled, Piotr Indyk, and Rajeev Motwani, “Approximate nearest neighbor: Towards removing the curse of dimensionality,” Theory of Computing, vol. 8, no. 1, pp. 321–350, 2012.

  2. Hand-crafted image features; Iterative Quantization performs PCA and then learns a rotation that minimizes the quantization error of mapping the transformed data to binary codes. Yunchao Gong and Svetlana Lazebnik, “Iterative quantization: A procrustean approach to learning binary codes,” in Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. IEEE, 2011, pp. 817–824.

  3. Hand-crafted image features. Yair Weiss, Antonio Torralba, and Rob Fergus, “Spectral hashing,” in Advances in neural information processing systems, 2009, pp. 1753–1760.

  4. Semantic hashing uses RBMs as an auto-encoder to generate binary codes. Ruslan Salakhutdinov and Geoffrey Hinton, “Semantic hashing,” International Journal of Approximate Reasoning, vol. 50, no. 7, pp. 969–978, 2009.

  5. Jie Lin, Olivier Morere, Julie Petta, Vijay Chandrasekhar, and Antoine Veillard, “Tiny descriptors for image retrieval with unsupervised triplet hashing,” arXiv preprint arXiv:1511.03055, 2015.

  6. DeepBit learns a nonlinear mapping function by inserting a latent layer and constructs pairwise training data from the original images. Kevin Lin, Jiwen Lu, Chu-Song Chen, and Jie Zhou, “Learning compact binary descriptors with unsupervised deep neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1183–1192.

Deep Learning

  1. A straightforward CNN-based hashing method: quantizes the activations of an FC layer with threshold 0 and takes the binary result as hash codes. Jinma Guo and Jianmin Li, “Cnn based hashing for image retrieval,” arXiv preprint arXiv:1509.01354, 2015.
  2. A framework to learn binary codes by seeking multiple hierarchical non-linear transformations. Venice Erin Liong, Jiwen Lu, Gang Wang, Pierre Moulin, and Jie Zhou, “Deep hashing for compact binary codes learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 2475–2483.
  3. Co-trains a hashing network by jointly learning projections from image representations to hash codes and classification. Ting Yao, Fuchen Long, Tao Mei, and Yong Rui, “Deep semantic-preserving and ranking-based hashing for image retrieval,” CVPR, 2015.
jiunbae commented 5 years ago

Unsupervised Learning of Discriminative Attributes and Visual Representations

Abstract: Trains a CNN coupled with unsupervised discriminative clustering and uses cluster membership as soft supervision to discover shared attributes from the clusters while maximizing their separability.
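
One simple way to realize "cluster membership as soft supervision" is to turn distances to cluster centroids into soft pseudo-labels; this is an illustrative assumption, not the paper's discriminative-clustering formulation:

```python
import numpy as np

def soft_cluster_labels(features, centroids, temperature=1.0):
    """Soft assignments from distances to cluster centroids (softmax over -distance)."""
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    logits = -d / temperature
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)       # (N, K) soft labels

rng = np.random.default_rng(3)
feats = rng.standard_normal((200, 128))
cents = rng.standard_normal((10, 128))            # assumed k-means centroids
soft = soft_cluster_labels(feats, cents)
print(soft.shape)
```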

References

Supervised attribute learning methods: 7, 16, 28, 48; they require large amounts of labels.

  1. S. Branson, C. Wah, F. Schroff, B. Babenko, P. Welinder, P. Perona, and S. Belongie. Visual recognition with humans in the loop. In ECCV, 2010.

  2. A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. Describing objects by their attributes. In CVPR, 2009.

  3. C. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In CVPR, 2009.

  4. Y. Wang and G. Mori. A discriminative latent model of object classes and attributes. In ECCV, 2010.

Address this drawback by mining attributes from image features to reduce inter-category confusion: 33, 35, 37

  1. S. Ma, S. Sclaroff, and N. Ikizler-Cinbis. Unsupervised learning of discriminative relative visual attributes. In ECCVW, 2012.

  2. D. Parikh and K. Grauman. Interactively building a discriminative vocabulary of nameable attributes. In CVPR, 2011.

  3. M. Rastegari, A. Farhadi, and D. Forsyth. Attribute discovery via predictable discriminative binary codes. In ECCV, 2012.

Supervised attribute learning: 7, 16, 28, 48

  1. S. Branson, C. Wah, F. Schroff, B. Babenko, P. Welinder, P. Perona, and S. Belongie. Visual recognition with humans in the loop. In ECCV, 2010.

  2. A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. Describing objects by their attributes. In CVPR, 2009.

  3. C. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In CVPR, 2009.

  4. Y. Wang and G. Mori. A discriminative latent model of object classes and attributes. In ECCV, 2010.

Unsupervised learning: 11, 29, 44, 4, 5, 6, 24, 53

  1. C. Doersch, A. Gupta, and A. A. Efros. Mid-level visual element discovery as discriminative mode seeking. In NIPS, 2013.

  2. Y. Li, L. Liu, C. Shen, and A. van den Hengel. Mid-level deep pattern mining. In CVPR, 2015.

  3. S. Singh, A. Gupta, and A. A. Efros. Unsupervised discovery of mid-level discriminative patches. In ECCV, 2012.

  4. L. Bo, X. Ren, and D. Fox. Unsupervised feature learning for RGB-D based object recognition. In ISER, 2012.

  5. L. Bo, X. Ren, and D. Fox. Multipath sparse coding using hierarchical matching pursuit. In CVPR, 2013.

  6. Y.-L. Boureau, N. Le Roux, F. Bach, J. Ponce, and Y. LeCun. Ask the locals: Multi-way local pooling for image recognition. In ICCV, 2011.

  7. K. Y. Hui. Direct modeling of complex invariances for visual object features. In ICML, 2013.

  8. W. Zou, S. Zhu, K. Yu, and A. Y. Ng. Deep learning of invariant features via simulated fixations in video. In NIPS, 2012.

Deep hashing methods: 14, (27, 31, 51, 52, supervised)

  1. V. Erin Liong, J. Lu, G. Wang, P. Moulin, and J. Zhou. Deep hashing for compact binary codes learning. In CVPR, 2015.

  2. H. Lai, Y. Pan, and S. Yan. Simultaneous feature learning and hash coding with deep neural networks. In CVPR, 2015.

  3. J. Lin, O. Morère, J. Petta, V. Chandrasekhar, and A. Veillard. Tiny descriptors for image retrieval with unsupervised triplet hashing. arXiv preprint arXiv:1511.03055, 2015.

  4. R. Xia, Y. Pan, H. Lai, C. Liu, and S. Yan. Supervised hashing for image retrieval via image representation learning. In AAAI, 2014.

  5. F. Zhao, Y. Huang, L. Wang, and T. Tan. Deep semantic ranking based hashing for multi-label image retrieval. In CVPR, 2015.

jiunbae commented 5 years ago

Stochastic Generative Hashing

Abstract: Learns hash functions through the Minimum Description Length principle, such that the learned hash codes maximally compress the dataset and can be used to regenerate the inputs. Also develops a learning algorithm based on a stochastic distributional gradient, which avoids the notorious difficulty caused by binary output constraints, to jointly optimize the parameters of the hash function and the associated generative model.

References

Hamming search over binary codes

Semi-supervised hashing

optimize some objective function that captures the preferred properties of the hash function in a supervised or unsupervised fashion

jiunbae commented 5 years ago

Spherical Hashing

Abstract: A hypersphere-based hashing function, spherical hashing, maps spatially coherent data points into binary codes, in contrast to hyperplane-based hashing functions.
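
A small numpy sketch of the hypersphere-based bit assignment: each bit records whether a point falls inside one hypersphere (pivot plus radius). The pivots and radii below are arbitrary placeholders; the paper learns them, which is not shown here:

```python
import numpy as np

def spherical_hash(x, pivots, radii):
    """x: (N, D); pivots: (B, D); radii: (B,). Bit b is 1 iff x lies inside sphere b."""
    dists = np.linalg.norm(x[:, None, :] - pivots[None, :, :], axis=2)  # (N, B)
    return (dists <= radii).astype(np.uint8)

rng = np.random.default_rng(2)
x = rng.standard_normal((100, 64))
pivots = rng.standard_normal((16, 64))  # assumed pivots, not learned here
radii = np.full(16, np.sqrt(64.0))      # assumed radii
codes = spherical_hash(x, pivots, radii)
print(codes.shape)  # (100, 16)
```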

References

Hashing techniques for high-dimensional data points: 11, 23, 26

  1. P. Indyk and R. Motwani. Approximate nearest neighbors: toward removing the curse of dimensionality. In STOC, 1998.

  2. A. Torralba, R. Fergus, and Y. Weiss. Small codes and large image databases for recognition. In CVPR, 2008.

  3. Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In NIPS, 2008.

Semi-supervised hashing

  1. J. Wang, S. Kumar, and S.-F. Chang. Semi-supervised hashing for scalable image retrieval. In CVPR, 2010.

spectral hashing

  1. Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In NIPS, 2008.

iterative quantization

  1. Y. Gong and S. Lazebnik. Iterative quantization: a procrustean approach to learning binary codes. In CVPR, 2011.

joint optimization

  1. J. He, R. Radhakrishnan, S.-F. Chang, and C. Bauer. Compact hashing with joint optimization of search accuracy and time. In CVPR, 2011.

random maximum margin hashing

  1. A. Joly and O. Buisson. Random maximum margin hashing. In CVPR, 2011.

bag of visual words representation

  1. J. Sivic and A. Zisserman. Video google: A text retrieval approach to object matching in videos. In ICCV, 2003.

GIST descriptor

  1. A. Oliva and A. Torralba. Modeling the shape of the scene: a holistic representation of the spatial envelope. IJCV, 2001.

finding nearest neighbor points in high dimensional image descriptor spaces.

  1. B. Kulis and K. Grauman. Kernelized locality-sensitive hashing for scalable image search. In ICCV, 2009.

tree based methods: 6, 16, 18

  1. J. H. Friedman, J. L. Bentley, and R. A. Finkel. An algorithm for finding best matches in logarithmic expected time. ACM TOMS, 3(3):209–226, 1977.

  2. K. Kim, M. K. Hasan, J.-P. Heo, Y.-W. Tai, and S.-E. Yoon. Probabilistic cost model for nearest neighbor search in image retrieval. Technical report, KAIST, 2012.

  3. D. Nistér and H. Stewénius. Scalable recognition with a vocabulary tree. In CVPR, 2006.

Binary hashing methods: 11, 13, 25, 7, 10, 15, 22

  1. Y. Gong and S. Lazebnik. Iterative quantization: a procrustean approach to learning binary codes. In CVPR, 2011.

  2. J. He, R. Radhakrishnan, S.-F. Chang, and C. Bauer. Compact hashing with joint optimization of search accuracy and time. In CVPR, 2011.

  3. P. Indyk and R. Motwani. Approximate nearest neighbors: toward removing the curse of dimensionality. In STOC, 1998.

  4. P. Jain, B. Kulis, and K. Grauman. Fast image search for learned metrics. In CVPR, 2008.

  5. A. Joly and O. Buisson. Random maximum margin hashing. In CVPR, 2011.

  6. K. Terasawa and Y. Tanaka. Spherical lsh for approximate nearest neighbor search on unit hypersphere. In Algorithms and Data Structures, volume 4619, pages 27–38, 2007.

  7. J. Wang, S. Kumar, and S.-F. Chang. Semi-supervised hashing for scalable image retrieval. In CVPR, 2010.

Distance-based indexing methods: 5, 12, 24

  1. R. F. S. Filho, A. Traina, C. J. Traina, and C. Faloutsos. Similarity search without tears: the omni-family of all-purpose access methods. In Int’l Conf. on Data Engineering, 2001.

  2. H. V. Jagadish, B. C. Ooi, K.-L. Tan, C. Yu, and R. Zhang. iDistance: An adaptive B+-tree based indexing method for nearest neighbor search. ACM Trans. on Database Systems, 2005.

  3. J. Venkateswaran, D. Lachwani, T. Kahveci, and C. Jermaine. Reference-based indexing of sequence databases. In VLDB, 2006.

jiunbae commented 5 years ago

Semantic Structure-based Unsupervised Deep Hashing

Abstract: Semantic Structure-based unsupervised Deep Hashing (SSDH) constructs a semantic structure by treating point pairs with distances obviously smaller than others as semantically similar, and pairs with distances obviously larger than others as dissimilar.
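
A rough numpy sketch of the semantic-structure idea as read from the abstract: threshold pairwise distances so that clearly small distances mark similar pairs and clearly large distances mark dissimilar pairs. The percentile thresholds and Euclidean distance are placeholder assumptions, not the paper's choices:

```python
import numpy as np

def semantic_structure(features, low_pct=10, high_pct=90):
    """Label pairs: +1 similar (small distance), -1 dissimilar (large), 0 uncertain."""
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    lo, hi = np.percentile(d, [low_pct, high_pct])
    s = np.zeros_like(d, dtype=np.int8)
    s[d <= lo] = 1
    s[d >= hi] = -1
    np.fill_diagonal(s, 0)  # ignore self-pairs
    return s

feats = np.random.randn(50, 64)
print(np.unique(semantic_structure(feats), return_counts=True))
```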

References

Locality Sensitive Hashing

learn hash functions from the data distribution; these usually perform well with shorter binary codes

learning hash codes under supervised settings

Unsupervised hashing methods

Deep learning