Open hzhangSDU opened 6 months ago
Hello,
Thank you for raising this. It is true that we can normalize all coordinates this way. However, if the original coordinates or PDE system are not non-dimensionalized first, the normalization effectively introduces an ill-conditioned Jacobian that multiplies the gradients during backpropagation, which ultimately hurts training.
In contrast, after non-dimensionalization we can still constrain all input coordinates to the range [0, 1]; this also affects the gradients, but only mildly.
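To make the gradient-scaling point concrete, here is a minimal sketch (not the repository's actual code) of an affine map from physical coordinates to [0, 1]. The bounds `lo` and `hi` are assumed characteristic ranges chosen for illustration; the chain-rule comment shows why the map rescales every backpropagated gradient by the inverse of the domain length.

```python
import numpy as np

def normalize(coords, lo, hi):
    """Affinely map physical coordinates in [lo, hi] to [0, 1] per dimension."""
    lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
    return (coords - lo) / (hi - lo)

# Chain rule: if x_hat = (x - lo) / (hi - lo), then
#   du/dx = (1 / (hi - lo)) * du/dx_hat,
# so each input dimension's gradient is scaled by 1 / (hi - lo).
# If the physical ranges differ by orders of magnitude (e.g. metres vs.
# seconds), these per-dimension factors differ wildly, which is the
# ill-conditioning discussed above; non-dimensionalizing first brings the
# ranges to comparable magnitudes before this map is applied.
lo, hi = np.array([0.0, -2.0]), np.array([10.0, 2.0])   # illustrative bounds
coords = np.array([[5.0, 0.0],
                   [10.0, 2.0]])
print(normalize(coords, lo, hi))  # each row now lies in [0, 1]^2
```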
Hope this helps.
Hello, I have a question about non-dimensionalization.
The following is quoted from your paper:
It is well-known that data normalization is an important pre-processing step in traditional deep learning, which typically involves scaling the input features of a dataset so that they have similar magnitudes and ranges [55, 56]. However, this process may not be generally applicable for PINNs, as the target solutions are typically not available when solving forward PDE problems. In such cases, it is important to ensure that the target output variables vary within a reasonable range. One way to achieve this is through non-dimensionalization.
However, while studying the sample code of `ns_unsteady_cylinder`, I found that the coordinate data (x, y) is still normalized before entering the network. So what is the significance of the earlier non-dimensionalization of the coordinates? In fact, after normalization, (x, y, t) all fall within the interval [0, 1]. Looking forward to your reply, thanks.