minh-nguyenhoang opened this issue 3 weeks ago
The idea is really cool, I must admit, but it will make hypergraph convolution (HC) infeasible for high-resolution inpainting, since it exhibits quadratic memory complexity. Have you thought of any ideas for improving the performance of this layer?

If I'm not wrong, HC looks a lot like the self-attention mechanism (or, earlier, the non-local network), just with a different score function and a different aggregation function, right? Basically, the score computation in HC is done in a transformed space (through the diagonal covariance matrix), and the aggregation is done with a different scaling function (not softmax).
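For concreteness, here is a minimal sketch of where the quadratic memory comes from, under my own assumptions about shapes (the names `phi` and `lam` and the exact projection are illustrative, not the repo's code): the H x W feature map becomes N = H*W nodes, and HC, like self-attention, materializes an N x N score matrix, only scored through a learned diagonal matrix rather than plain dot products.

```python
import torch

# Sketch only: treat the H x W feature map as N = H*W nodes.
H, W, C, D = 64, 64, 256, 64
N = H * W
x = torch.randn(N, C)                    # flattened feature map
phi = torch.nn.Linear(C, D, bias=False)  # assumed learned projection
lam = torch.ones(D)                      # learned diagonal ("covariance") matrix

z = phi(x)                               # scores are computed in a transformed space
scores = (z * lam) @ z.T                 # (N, D) @ (D, N) -> (N, N)
print(scores.shape)                      # torch.Size([4096, 4096])

# The (N, N) matrix is the bottleneck: at 256 x 256 feature resolution,
# N = 65536, and this single fp32 tensor alone needs N*N*4 bytes ~ 16 GiB.
```

Self-attention would now apply a row-wise softmax before aggregating; HC instead rescales with its non-softmax normalization, but both pay for the same (N, N) intermediate.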
Yes, it turns out to be a variation of the attention layer with normalized convolution. The biggest difference is probably how we obtain the matrices.

Currently there is no plan to improve this layer, but we can discuss it if you have some ideas.
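For instance, one direction that could be discussed: since the scaling here is not a softmax, a score matrix of the plain bilinear form S = Z diag(lam) Z^T never has to be formed explicitly; reassociating the matrix products makes time and memory linear in N. Below is a hedged sketch (function and variable names are illustrative, not the repo's code), and it only applies if no elementwise nonlinearity is taken over S itself:

```python
import torch

def hc_aggregate_linear(z, lam, v):
    """Aggregate values without materializing the (N, N) score matrix.

    z: (N, D) projected nodes, lam: (D,) learned diagonal, v: (N, C) values.
    """
    zl = z * lam                    # (N, D): scale columns by the diagonal
    out = zl @ (z.T @ v)            # (N,D) @ ((D,N) @ (N,C)): linear in N
    deg = zl @ z.sum(dim=0)         # row sums of S, i.e. S @ 1, again without S
    return out / deg.unsqueeze(-1)  # non-softmax rescaling by the row sums

# Sanity check against the naive quadratic version on a small problem.
N, D, C = 128, 16, 32
z, lam, v = torch.rand(N, D), torch.rand(D), torch.randn(N, C)
S = (z * lam) @ z.T                 # explicit (N, N) matrix, affordable here
ref = (S @ v) / S.sum(dim=-1, keepdim=True)
assert torch.allclose(hc_aggregate_linear(z, lam, v), ref, atol=1e-4)
```

This is the same reassociation that linear-attention methods exploit; it breaks down the moment a softmax (or any other elementwise function of S) sits between the two matrix products.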