gokhalen opened 7 months ago
Hi Nachiket, thank you for the question and sorry for the delayed response. Yes, your understanding is correct. For your questions:
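For reference, the pipeline described in the original question below can be sketched in a minimal 1-D toy form. This is not the actual Geo-FNO implementation: the analytic warp standing in for the learned map $\phi^{-1}_a$, the `np.interp` transfer between mesh and grid, and the spectral low-pass standing in for an FNO layer are all made-up illustrations of the structure, not the paper's components.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. An irregular physical mesh (stand-in for one sampled input geometry),
#    with an input function evaluated on its points.
x_mesh = np.sort(rng.uniform(0.0, 1.0, 64))
f_mesh = np.sin(2 * np.pi * x_mesh)

# 2. Stand-in for the deformation map phi^{-1}_a: physical -> latent coords.
#    In Geo-FNO this is learned; here it is a fixed analytic warp.
def phi_inv(x):
    return x ** 2  # monotone map of [0, 1] onto [0, 1]

# 3. A uniform grid in the latent space, with the mesh data transferred
#    onto it (here by simple 1-D interpolation, purely for illustration).
n = 128
x_latent = np.linspace(0.0, 1.0, n, endpoint=False)
f_latent = np.interp(x_latent, phi_inv(x_mesh), f_mesh)

# 4. Stand-in for an FNO layer: a low-pass spectral filter via the FFT,
#    which is only well-defined because the latent grid is uniform.
F = np.fft.rfft(f_latent)
F[8:] = 0.0                      # keep only the lowest 8 Fourier modes
g_latent = np.fft.irfft(F, n)

# 5. Map the result back to the original physical mesh points.
g_mesh = np.interp(phi_inv(x_mesh), x_latent, g_latent)

print(g_mesh.shape)              # one output value per original mesh point
```

The key structural point the toy preserves: the FFT step happens only on the uniform latent grid, and everything mesh-dependent is pushed into the (here fake, in the paper learned) change of coordinates.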
@zongyi-li
Thank you very much for your response. I'm still not clear about point 2, but let me think about it. Perhaps it will be useful for me to take a look at the code.
@zongyi-li
I'm reading your paper on learned deformations.
Could you please check if my following understanding is correct?
In Geo-FNO, the input mesh is regarded as coming from some probability distribution. By sampling this distribution, we generate training data on different meshes. The neural network $\phi^{-1}_a$ learns to map these sampled meshes into a latent uniform space. When we encounter a new mesh, we use the learned network $\phi^{-1}_a$ to approximately map it onto a uniform grid in the latent space, where the standard FNO operates, and then we map the solution back to the physical domain. Since the mapping to the latent space $\phi^{-1}_a$ is not perfect, it may be a source of (small) error.

Also, I have the following questions:
1) In equation (12), is $|\mathcal{T}^i|$ the volume/area of the mesh? Why is it in the denominator? Why is it necessary when going from (11) to (12) by approximating the integral? A simple approximation of the integral wouldn't have it in the denominator...
2) What exactly is $\rho_a(x)$?

3) I'm looking at the definition of $\phi^{-1}_a$ here, and it doesn't seem that anything special is done to make sure that the output of $\phi^{-1}_a$ is uniform. It seems to learn to produce uniform output as a result of training. Is this correct?

Thanks,
-Nachiket
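As background for question 1 (a generic numerical illustration, not necessarily the paper's exact convention for $|\mathcal{T}^i|$): when an integral is approximated by an average over nonuniformly distributed sample points, each term has to be divided by the local sampling density (equivalently, multiplied by the local cell size), otherwise the estimate is biased toward densely sampled regions. A quick check with a made-up density:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Sample points with a nonuniform (made-up) density rho(x) = 2x on [0, 1],
# via inverse-CDF sampling: x = sqrt(u) for u ~ Uniform(0, 1).
x = np.sqrt(rng.uniform(size=N))
rho = 2.0 * x

f = x ** 2                    # integrand; exact integral over [0, 1] is 1/3

naive = f.mean()              # ignores the sampling density -> biased
weighted = (f / rho).mean()   # density in the denominator -> unbiased

print(naive, weighted)
```

Here the naive average converges to $\int_0^1 x^2 \cdot 2x\,dx = 1/2$, while the density-weighted average converges to the true value $1/3$: the denominator term is exactly what cancels the nonuniform point distribution.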