I'm just curious, and this is coming from someone with a naive understanding of your methodology. I was reading through the STGCN paper and came across your method. It seems that parameter reduction and filter localization are achieved by restricting the kernel to a polynomial.
Is it possible, in any way, to replace this filter with (and again, I am a bit naive here) a neural ordinary differential equation? That is, representing the filter as a differential equation rather than a polynomial, and learning that set of ODE parameters? Would such a parameterization help reduce the complexity of the model?
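To make my question concrete, here is a rough sketch of the contrast I have in mind. It is my own naive illustration, not code from the paper: a K-order Chebyshev polynomial filter next to an "ODE-style" filter obtained by Euler-integrating graph diffusion dx/dt = -Lx (roughly expm(-tL)x). I use a plain combinatorial Laplacian of a toy graph for simplicity, where the paper would use the rescaled Laplacian.

```python
import numpy as np

def chebyshev_filter(L, x, theta):
    """Polynomial filter: sum_k theta[k] * T_k(L) @ x, with the
    Chebyshev recurrence T_0 = I, T_1 = L, T_k = 2 L T_{k-1} - T_{k-2}.
    The learnable parameters are the K coefficients theta."""
    t_prev, t_curr = x, L @ x
    out = theta[0] * t_prev
    if len(theta) > 1:
        out += theta[1] * t_curr
    for k in range(2, len(theta)):
        t_prev, t_curr = t_curr, 2 * (L @ t_curr) - t_prev
        out += theta[k] * t_curr
    return out

def ode_filter(L, x, t=1.0, steps=10):
    """Continuous-time analogue: explicit Euler steps of the graph
    diffusion dx/dt = -L x, approximating expm(-t * L) @ x. In a
    neural-ODE version, the learnable parameters would govern the
    dynamics (e.g. the integration time t or a parameterized vector
    field) rather than K polynomial coefficients."""
    h = t / steps
    for _ in range(steps):
        x = x - h * (L @ x)
    return x

# Toy example: combinatorial Laplacian of a 3-node path graph.
L = np.array([[1., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 1.]])
x = np.array([1.0, 0.0, 0.0])
print(chebyshev_filter(L, x, theta=[0.5, 0.3, 0.2]))
print(ode_filter(L, x))
```

With theta = [1.0] the polynomial filter reduces to the identity, and the diffusion filter conserves the total signal mass, since the Laplacian's rows and columns sum to zero.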
This is a point of curiosity; I'm just looking for enlightenment. If this is possible, then it seems the neural ODE framework could be extended to GCNs.