Hi, I am interested in how you perform the nuConv without explicitly embedding the kernel and the grid into a larger matrix, as is done in FIt-SNE. I've gone through your manuscript, and the only thing I've been able to find is this sentence:
Instead of explicit embedding G2G into a circulant one as in FIt-SNE, we use an implicit approach without augmenting the grid size and its memory usage by a factor of 2^d, while maintaining the same arithmetic complexity.
I've also looked through the code as much as I could, but I'm not that familiar with C++ and can't really see what's going on.
I wasn't aware it was possible to do a convolution with a kernel of broad support without the additional 2^d zero padding. From my understanding, if we don't apply this zero padding, the kernel "wraps around" the grid boundary and contaminates the overall result. Could you please point me to some literature where I could read up on this?
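For concreteness, here is a minimal 1-D NumPy sketch of the wrap-around effect I mean (this is purely illustrative and not the package's actual code; the signal and kernel are made up): an FFT product on the original grid computes a circular convolution, while zero-padding to twice the grid size (the 2^d factor, with d = 1 here) recovers the linear convolution.

```python
import numpy as np

# Hypothetical 1-D illustration: convolve a grid signal with a
# broad-support kernel via the FFT.
n = 8
rng = np.random.default_rng(0)
signal = rng.standard_normal(n)
kernel = np.exp(-0.5 * np.arange(n) ** 2 / 4.0)  # slowly decaying kernel

# FFT product on the original grid: this is a *circular* convolution,
# so the kernel tail wraps around the boundary and contaminates the result.
circular = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

# Zero-padding both arrays to length 2n before the FFT makes the circular
# convolution coincide with the linear one on the first n entries.
padded = np.real(
    np.fft.ifft(np.fft.fft(signal, 2 * n) * np.fft.fft(kernel, 2 * n))
)[:n]

# Ground truth: direct linear convolution, truncated to the grid.
linear = np.convolve(signal, kernel)[:n]

print(np.allclose(padded, linear))    # True: padding recovers linear conv
print(np.allclose(circular, linear))  # False: wrap-around contamination
```

This is exactly the behavior that made me expect the explicit embedding to be necessary, so I'm curious how the implicit approach avoids it.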