Hi,
I have been enjoying the series of papers your group has published on the Gaussian activation function applied to implicit neural representations. Thank you for all the hard work. I really appreciate that this line of work may open a new horizon for INR research; my only difficulty is with the hyperparameter $\sigma$.
The hyperparameter $\sigma$, which controls the bandwidth of the signals output by the Gaussian activation function, has been the main obstacle to applying your idea in my own work. Since finding a proper $\sigma$ is important, I wonder whether it is feasible to make $\sigma$ a learnable parameter; that might make the Gaussian activation function easier to use. Is there any ablation study on a learnable $\sigma$ (e.g., does it fail to converge easily)? Could I ask about any experiments along those lines? Thank you for your time!
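To make the question concrete, here is a minimal pure-Python sketch of what I mean by a learnable $\sigma$: the Gaussian activation $g(x) = \exp(-x^2 / 2\sigma^2)$, its analytic gradient with respect to $\sigma$, and a toy gradient-descent loop that fits $\sigma$ to match a target activation value. The function names, toy target, and learning rate are my own illustration, not anything from your papers.

```python
import math

def gaussian_act(x, sigma):
    """Gaussian activation: g(x) = exp(-x^2 / (2 * sigma^2))."""
    return math.exp(-x * x / (2.0 * sigma * sigma))

def dact_dsigma(x, sigma):
    """Analytic derivative of g w.r.t. sigma: g(x) * x^2 / sigma^3."""
    return gaussian_act(x, sigma) * x * x / sigma ** 3

def fit_sigma(x=1.0, target=0.8, sigma=0.5, lr=0.1, steps=200):
    """Toy example: gradient descent on sigma alone for a squared-error loss."""
    for _ in range(steps):
        y = gaussian_act(x, sigma)
        grad = 2.0 * (y - target) * dact_dsigma(x, sigma)  # chain rule through g
        sigma -= lr * grad
    return sigma, gaussian_act(x, sigma)

sigma, y = fit_sigma()
print(f"fitted sigma = {sigma:.3f}, activation = {y:.3f}")
```

In an actual INR training setup I would of course not write the gradient by hand; I imagine registering $\sigma$ (or perhaps $1/\sigma$, for numerical stability) as a trainable parameter, e.g. an `nn.Parameter` in PyTorch, and letting autograd handle it. My question is whether you have observed this to train stably in practice.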