Closed: bokveizen closed this issue 2 years ago
Hi, thank you so much for paying attention to our work. We have gone through the outstanding concurrent paper Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions by Karalias et al. (Neural SFE) [1], and we greatly appreciate its inspiring ideas. As a response, we summarize the similarities and the essential differences between Neural SFE and our work:
Similarities: Both Neural SFE and our work aim to relax functions from a discrete space to a continuous space. In both works, the purpose of the relaxation is to give neural networks tractable gradient computation on discrete problems.
Essential Differences: Neural SFE targets a much broader and more general relaxation than our work: it relaxes essentially arbitrary set functions (functions that take discrete sets as input) via linear programming and semi-definite programming (SDP), so that these set functions can be used with deep learning. Due to this broad scope of application, it works not only on binary unsupervised combinatorial optimization (CO) but also on many other tasks, as long as they involve set functions. It could potentially handle non-binary CO and even supervised learning tasks (e.g., taking the training error as the objective in supervised image classification; see Sec. 5.3 of the paper).
Our relaxation-and-rounding principle is greatly inspired by another outstanding work by Karalias & Loukas [2] and focuses on relaxing the optimization objectives (functions that take the assignment of the optimization variables as input) of CO problems with binary optimization variables. To some extent, we focus on a sub-problem of their relaxation target. Because we work on this more specific relaxation problem (CO with binary optimization variables), our entry-wise concave / affine principle and O(n) deterministic rounding process can further provide a performance guarantee for neural networks trained with the relaxed objectives. Our principle also extends naturally to binary proxy-CO problems.
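To make the rounding step concrete, here is a minimal illustrative sketch of an O(n) sequential rounding (not the exact implementation in our paper; `relaxed_objective` is a placeholder for any entry-wise concave or affine relaxation of the CO objective):

```python
import torch

def sequential_rounding(x_soft: torch.Tensor, relaxed_objective) -> torch.Tensor:
    """Round a soft assignment x_soft in [0, 1]^n to a binary assignment.

    Sketch: each entry is fixed, one at a time, to whichever value (0 or 1)
    gives the smaller relaxed objective while the other entries keep their
    current values. When the relaxation is entry-wise concave or affine,
    no step increases the objective, which underlies the guarantee.
    """
    x = x_soft.detach().clone()
    for i in range(x.numel()):
        x0, x1 = x.clone(), x.clone()
        x0[i], x1[i] = 0.0, 1.0
        # Keep the binary choice that does not increase the relaxed objective.
        x = x0 if relaxed_objective(x0) <= relaxed_objective(x1) else x1
    return x
```

Since each entry is decided by two evaluations of the relaxed objective, the whole procedure is deterministic and takes O(n) evaluations.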
[1] Karalias, N., Robinson, J., Loukas, A., et al. Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions. arXiv preprint arXiv:2208.04055, 2022.
[2] Karalias, N., Loukas, A. Erdős Goes Neural: An Unsupervised Learning Framework for Combinatorial Optimization on Graphs. Advances in Neural Information Processing Systems, 2020, 33: 6659-6672.
Thanks a lot for your reply! Best wishes on your research!
Dear authors, could you please check this work, https://arxiv.org/abs/2208.04055, which looks quite similar to yours? Karalias, Nikolaos, et al. "Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions." arXiv preprint arXiv:2208.04055 (2022). If there are indeed essential differences, and if possible, could you please explain them?