scaomath / galerkin-transformer

[NeurIPS 2021] Galerkin Transformer: a linear attention without softmax for Partial Differential Equations
MIT License
214 stars 28 forks

More than one channel #6

Closed NicolaiLassen closed 2 years ago

NicolaiLassen commented 2 years ago

Hi,

Thank you for this great contribution!

I was just wondering what your thoughts are on expanding the input channels so that the models can accept multiple fields (x, y, z). Also, the new FNO implementation can accommodate different heights and widths; could those changes be merged into this repo?

scaomath commented 2 years ago

Yes. It is definitely okay to add more channels (but the relative position of each channel in the depth dimension matters). I will have some new code coming up soon.
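A minimal sketch of the channel-expansion idea, under my own assumptions (the class name `MultiChannelLift` and its signature are hypothetical, not part of this repo): multiple field channels and the (x, y, z) coordinate channels are concatenated along the depth dimension in a fixed order, then lifted to the model width with a linear layer. The fixed concatenation order reflects the point above that the relative position of each channel in the depth dimension matters.

```python
import torch
import torch.nn as nn

class MultiChannelLift(nn.Module):
    """Hypothetical lifting layer: concatenate field channels with
    coordinate channels (x, y, z) in a FIXED order, then project to
    the model's hidden width. Not the repo's actual implementation."""

    def __init__(self, n_field_channels: int, n_coord_channels: int, d_model: int):
        super().__init__()
        self.lift = nn.Linear(n_field_channels + n_coord_channels, d_model)

    def forward(self, u: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        # u:      (batch, n_points, n_field_channels)
        # coords: (batch, n_points, n_coord_channels), e.g. (x, y, z)
        # Always concatenate in the same order: fields first, then coords,
        # since the depth position of each channel carries meaning.
        x = torch.cat([u, coords], dim=-1)
        return self.lift(x)

# Example: 3 field channels + 3 coordinate channels lifted to width 96.
u = torch.randn(2, 64, 3)
coords = torch.randn(2, 64, 3)
out = MultiChannelLift(3, 3, 96)(u, coords)
print(out.shape)  # torch.Size([2, 64, 96])
```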

As for the second question, I will find time to clean up some leftover issues in this repo. I am moving this summer and have not had a chance to double-check some local changes.
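On the rectangular-grid question above, a simplified sketch of how an FNO-style spectral layer can accept different heights and widths (this is an illustrative layer I wrote for this thread, not code from this repo or the official FNO implementation; it also keeps only the non-negative modes along the height axis for brevity): `torch.fft.rfft2` works on non-square inputs, so the layer only needs independent mode counts `modes1` and `modes2` for the two spatial axes.

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    """Simplified FNO-style spectral convolution accepting rectangular
    inputs. modes1/modes2 bound the retained Fourier modes independently
    in height and width, so H need not equal W. Illustrative only."""

    def __init__(self, in_ch: int, out_ch: int, modes1: int, modes2: int):
        super().__init__()
        self.modes1, self.modes2 = modes1, modes2
        scale = 1.0 / (in_ch * out_ch)
        self.weight = nn.Parameter(
            scale * torch.randn(in_ch, out_ch, modes1, modes2, dtype=torch.cfloat))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_ch, H, W), with H and W possibly different.
        b, _, h, w = x.shape
        x_ft = torch.fft.rfft2(x)  # (b, in_ch, H, W // 2 + 1)
        out_ft = torch.zeros(b, self.weight.shape[1], h, w // 2 + 1,
                             dtype=torch.cfloat, device=x.device)
        # Multiply the retained low modes by the learned complex weights.
        out_ft[:, :, :self.modes1, :self.modes2] = torch.einsum(
            "bixy,ioxy->boxy",
            x_ft[:, :, :self.modes1, :self.modes2], self.weight)
        # Inverse transform back to the original (rectangular) grid.
        return torch.fft.irfft2(out_ft, s=(h, w))

# Example: a 32 x 48 grid (height != width) passes through unchanged in size.
y = SpectralConv2d(1, 4, 8, 8)(torch.randn(2, 1, 32, 48))
print(y.shape)  # torch.Size([2, 4, 32, 48])
```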