gfkri opened this issue 1 month ago
Thank you for your interest.
In the paper, $\mathcal G(\cdot)$ is indeed represented as `dynamical_F` in the code; we adjusted the notation in the final version of the paper, which led to some discrepancies between the symbols in the code and those in the paper.
However, I'm sorry, I didn't quite understand what your question was.
What I meant is that you pass `u_t` to `dynamical_F`, and the result is then subtracted from `u_t` to compute your PDE loss (`f` in the snippet below).
```python
F = self.dynamical_F(torch.cat([xt, u, u_x, u_t], dim=1))
f = u_t - F
```
Without any other loss preventing it from doing so, the MLP could simply learn to pass `u_t` through (since you use a sine activation in your MLP, it only needs to learn to invert that, but that's it). This would render the PDE loss entirely ineffective.
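To make the concern concrete, here is a toy sketch (not the repository's code, just a minimal PyTorch reproduction of the shortcut, under the assumption that the residual is the only training signal):

```python
import torch
import torch.nn as nn

# Toy reproduction of the shortcut: u_t is among the network inputs and
# the residual u_t - F is the only loss being minimized.
mlp = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

for step in range(5000):
    other = torch.randn(256, 3)     # stand-ins for (x, t), u, u_x
    u_t = torch.randn(256, 1)       # stand-in for u_t
    F = mlp(torch.cat([other, u_t], dim=1))
    loss = ((u_t - F) ** 2).mean()  # the PDE residual loss alone
    opt.zero_grad()
    loss.backward()
    opt.step()

# The loss decays toward zero as the MLP approximates an identity map on
# its u_t input; the residual then says nothing about the dynamics.
```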
Is that prevented somewhere?
Thank you very much. Best regards, Georg
I have the same question as @gfkri; it would be very helpful if someone could explain it. Should we remove `u_t` from the input features to `dynamical_F`?
Best, Chen
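For concreteness, removing `u_t` from the inputs would presumably look like the sketch below (this reuses the tensor names from the snippet above; note that `dynamical_F`'s input width would have to shrink by one as well):

```python
# Sketch: u_t is no longer fed to the network, so the residual u_t - F can
# only vanish if dynamical_F actually models the dynamics from (x, t), u, u_x.
F = self.dynamical_F(torch.cat([xt, u, u_x], dim=1))
f = u_t - F
```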
Hi :)
I have a question about your PDE loss. In your model, you pass `u_t` to $\mathcal G$ (`dynamical_F` in your code, right?), and then your loss is computed as a difference to `u_t`. `dynamical_F`/$\mathcal G$ does not have any other supervision, right? What hinders it from simply passing `u_t` through, invalidating the loss altogether?
Thank you very much in advance. Best regards, Georg