Closed shuheng-liu closed 4 years ago
Better reparameterization seems to do the trick.
A better solution can be obtained by using a Dirichlet condition that fixes one end of the boundary and penalizes the boundary value at the other end (by adding an additional term to the loss).
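For context, the trick can be sketched as follows. This is a minimal illustration, not the actual code from this issue; the ansatz `y0 + (1 - exp(-(x - x0))) * net(x)` and the penalty weight are assumptions:

```python
import math

# Sketch of the reparameterization: satisfy y(x0) = y0 exactly by
# construction, and penalize deviation from y(x1) = y1 in the loss.
x0, y0 = 0.1, 10.1   # fixed end, enforced exactly
x1, y1 = 10.0, 10.1  # other end, enforced via an extra loss term

def reparameterized(net, x):
    # y(x) = y0 + (1 - exp(-(x - x0))) * net(x); the factor vanishes at
    # x = x0, so y(x0) == y0 no matter what the network outputs.
    return y0 + (1.0 - math.exp(-(x - x0))) * net(x)

def boundary_penalty(net, weight=1.0):
    # additional loss term pushing the free boundary toward y1
    return weight * (reparameterized(net, x1) - y1) ** 2

dummy_net = math.sin  # stand-in for the trained network
print(reparameterized(dummy_net, x0))  # exactly 10.1 by construction
print(boundary_penalty(dummy_net))     # non-negative penalty term
```

During training, `boundary_penalty` would simply be added to the usual ODE-residual loss, so the left boundary holds exactly while the right one is learned.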
I was solving the Taylor-Couette equation, which, under mild assumptions, simplifies to this 2nd-order linear ODE:

Equation Statement

`x^2 y'' + x y' - y = 0`

The solution to the above equation should be `y = A*x + B/x`, where A and B are arbitrary constants.
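Assuming the reduced equation is the Cauchy-Euler form `x^2 y'' + x y' - y = 0` (the standard Taylor-Couette reduction, consistent with that general solution), a quick numerical sanity check that `y = A*x + B/x` makes the residual vanish for any A and B:

```python
import random

# For y = A*x + B/x:  y' = A - B/x**2,  y'' = 2*B/x**3, so
# x^2*y'' + x*y' - y = 2B/x + (Ax - B/x) - (Ax + B/x) = 0 identically.
def residual(A, B, x):
    y, dy, d2y = A * x + B / x, A - B / x**2, 2 * B / x**3
    return x**2 * d2y + x * dy - y

checks = [abs(residual(random.uniform(-5, 5), random.uniform(-5, 5),
                       random.uniform(0.1, 10.0))) < 1e-8
          for _ in range(100)]
print(all(checks))  # True: the residual vanishes up to rounding
```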
Boundary Condition and Analytical Solution
Under the Dirichlet condition `y(0.1) == y(10) == 10.1`, the solution can be uniquely determined as `y = x + 1/x`, which dips to its minimum `y == 2` at `x == 1` and rises on both sides.
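The uniqueness is easy to verify: plugging the general solution `y = A*x + B/x` into the two boundary conditions gives a 2x2 linear system whose solution is A = B = 1, i.e. `y = x + 1/x`:

```python
# Boundary conditions:  A*x0 + B/x0 = 10.1  and  A*x1 + B/x1 = 10.1,
# solved with Cramer's rule; the unique solution is A = B = 1.
x0, x1, ybc = 0.1, 10.0, 10.1
det = x0 / x1 - x1 / x0
A = (ybc / x1 - ybc / x0) / det
B = (x0 * ybc - x1 * ybc) / det
print(round(A, 9), round(B, 9))  # 1.0 1.0
# sanity check: both boundary values are recovered
print(round(A * x0 + B / x0, 9), round(A * x1 + B / x1, 9))  # 10.1 10.1
```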
Expected Behaviour
As the training proceeds, the net should return a solution that first goes down until `x == 1` and then goes back up again when `x > 1`.
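This expected shape follows from the analytic solution `y = x + 1/x`, whose derivative `1 - 1/x^2` changes sign at `x = 1`; a quick grid check of the minimum:

```python
# The analytic solution y = x + 1/x should dip to its minimum (y = 2)
# at x = 1 and rise on either side, matching the expected behaviour.
xs = [0.1 + 0.001 * i for i in range(9901)]  # grid over [0.1, 10]
ys = [x + 1.0 / x for x in xs]
i_min = ys.index(min(ys))
print(round(xs[i_min], 3), round(ys[i_min], 6))  # minimum near (1, 2)
```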
Actual Behaviour
However, using the following code, the network gives a solution that keeps straying away from the analytical solution.
Here is a gif file that shows how the model is performing. Note that not only does the network give a solution that looks drastically different from the analytic one, but the solution is also being scaled in the `y`-direction. The latter can be deduced from the change of the maximum value of the `y`-axis over time.