Closed: Arup-nit closed this issue 3 years ago
It's a known problem, likely because PyTorch initializes NN weights in [-1, 1] by default. One thing to try is to rescale the independent variable: let t = x/50 and rewrite your ODE in terms of y and t.
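Concretely, with t = x/50 the chain rule gives dy/dx = (1/50) dy/dt and d^2y/dx^2 = (1/2500) d^2y/dt^2, so on t in [0, 1] the equation becomes

0.0004 d^2y/dt^2 + 0.004 (y^2 - 1) dy/dt - y + y^3 = 0.53 cos(50t),

which is where the coefficients 0.0004 = 1/50^2 and 0.004 = 0.2/50 in the code below come from. Note that the initial slope rescales as well: dy/dt(0) = 50 * dy/dx(0) = -10, not -0.2.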
Dear Liu, I am getting a better result, but still not a good one, for the above problem. Here is the code; please correct it if possible.
```python
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

# Imports follow the (older) neurodiffeq API used in this thread
from neurodiffeq import diff
from neurodiffeq.networks import FCNN
from neurodiffeq.ode import solve, IVP, ExampleGenerator, Monitor

# Rescaled ODE with t = x/50: 0.0004 = 1/50**2, 0.004 = 0.2/50
ode = lambda x, t: (0.0004 * diff(x, t, order=2)
                    + 0.004 * (x**2 - 1) * diff(x, t)
                    - x + x**3
                    - 0.53 * torch.cos(50 * t))

t_min, t_max = 0.0, 1.0
N = 300

fcnn = FCNN(hidden_units=(50,), actv=nn.Tanh)
adam = optim.Adam(fcnn.parameters(), lr=0.001)
init_ode = IVP(t_0=t_min, x_0=0.1, x_0_prime=-0.2)
train_gen = ExampleGenerator(N, t_min=t_min, t_max=t_max, method="equally-spaced-noisy")

solution, loss_history = solve(
    ode=ode, condition=init_ode, train_generator=train_gen,
    t_min=t_min, t_max=t_max, net=fcnn, batch_size=N,
    max_epochs=5000, optimizer=adam,
    monitor=Monitor(t_min=t_min, t_max=t_max, check_every=100),
)

ts = np.linspace(0, 1.0, 11)
x_ANN = solution(ts, as_type='np')
```
x_ANN
array([ 0.1 , 0.06203723, 0.01820176, -0.00698296, -0.0123547 , -0.00632258, 0.00297451, 0.0102482 , 0.01257595, 0.00855986, -0.00222801])
Can you plot the loss history and see if the loss converges after 5000 epochs? If it has converged, I'd recommend using a more complex network. Since there's a nonlinear term `x**3` in the equation, a single hidden layer with 50 hidden units might be insufficient.
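For instance, a deeper network is just a longer `hidden_units` tuple. A minimal sketch based on the `FCNN` constructor already used in the snippet above (the layer sizes here are only illustrative):

```python
# Two hidden layers of 50 tanh units each (sizes are illustrative)
fcnn = FCNN(hidden_units=(50, 50), actv=nn.Tanh)
adam = optim.Adam(fcnn.parameters(), lr=0.001)
```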
I built one network with 2 hidden layers and another with 3 hidden layers (50 hidden units each), but that was not sufficient. Sorry, it may be easy, but I don't know how to plot the loss history. Is there any other way to fix it?
For starters, if you are working with Jupyter notebooks, try:

```python
%matplotlib notebook
import matplotlib.pyplot as plt
...
solution, loss_history = solve(...)

plt.figure()
for key, values in loss_history.items():
    plt.plot(values, label=key)
plt.yscale('log')
plt.legend()
```
A better way would be to use monitors; you can find more instructions on this documentation page. Search for "monitor" with your browser and you'll see its usage.
Note that you must use `%matplotlib notebook` (not `%matplotlib inline`) if you are working with Jupyter notebooks.
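As a minimal sketch (reusing the `Monitor` call that already appears in your snippet, with the same arguments), the monitor re-plots the current solution and loss every `check_every` epochs while `solve` is running:

```python
%matplotlib notebook
from neurodiffeq.ode import Monitor

# Live-plots the approximate solution and the loss every 100 epochs
monitor = Monitor(t_min=0.0, t_max=1.0, check_every=100)
solution, loss_history = solve(
    ode=ode, condition=init_ode,
    t_min=0.0, t_max=1.0,
    monitor=monitor,
)
```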
Another thing you can try is to rewrite this 2nd-order ODE as a system of first-order ODEs by introducing a new variable z = y'. I'm not sure if it helps, but it's worth trying.
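If it helps, here is a sketch of that reduction for the rescaled equation above. This assumes the `solve_system` API from `neurodiffeq.ode` (analogous to `solve`, but taking a list of residuals and a list of conditions); the hyperparameters are placeholders:

```python
import torch
from neurodiffeq import diff
from neurodiffeq.ode import solve_system, IVP

# With z = y', the rescaled 2nd-order ODE splits into two 1st-order residuals
ode_system = lambda y, z, t: [
    diff(y, t) - z,
    0.0004 * diff(z, t) + 0.004 * (y**2 - 1) * z - y + y**3 - 0.53 * torch.cos(50 * t),
]

# Initial values as in the snippet above: y(0) = 0.1, z(0) = y'(0) = -0.2
conditions = [IVP(t_0=0.0, x_0=0.1), IVP(t_0=0.0, x_0=-0.2)]

solutions, loss_history = solve_system(
    ode_system=ode_system, conditions=conditions,
    t_min=0.0, t_max=1.0, max_epochs=5000,
)
```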
> Can you plot the loss history and see if the loss converges after 5000 epochs? If it has converged, I'd recommend using a more complex network. Since there's a nonlinear term `x**3` in the equation, a single hidden layer with 50 hidden units might be insufficient.

> Another way you can try is to rewrite this 2nd order ODE as a system of first-order ODEs by introducing a new variable `z = y'`, I'm not sure if it helps but it's worth trying.
Not working.
From the loss plot, it looks like it's not converging yet. If you keep training, the loss should continue to go down.
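For example (a sketch only; the epoch count is an arbitrary guess, and everything else is the same call as in the snippet above):

```python
# Same setup as before, just trained longer
solution, loss_history = solve(
    ode=ode, condition=init_ode, train_generator=train_gen,
    t_min=t_min, t_max=t_max, net=fcnn, batch_size=N,
    max_epochs=20000, optimizer=adam,
)
```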
Dear Liu, I am solving an oscillator equation on a large domain, but I am not getting a proper result. In particular, I am trying to solve

y'' + 0.2 (y^2 - 1) y' - y + y^3 = 0.53 cos x,  y(0) = 0.1,  y'(0) = -0.2,

with x_min = 0.0, x_max = 50.0.
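For reference, a direct transcription of this problem into the API used in the snippets above might look like the following sketch (solving on the full domain [0, 50], before any rescaling):

```python
import torch
from neurodiffeq import diff
from neurodiffeq.ode import IVP

# y'' + 0.2*(y**2 - 1)*y' - y + y**3 = 0.53*cos(x) on x in [0, 50]
ode = lambda y, x: (diff(y, x, order=2)
                    + 0.2 * (y**2 - 1) * diff(y, x)
                    - y + y**3
                    - 0.53 * torch.cos(x))

# y(0) = 0.1, y'(0) = -0.2
init = IVP(t_0=0.0, x_0=0.1, x_0_prime=-0.2)
```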