NeuroDiffGym / neurodiffeq

A library for solving differential equations using neural networks based on PyTorch, used by multiple research groups around the world, including at Harvard IACS.
http://pypi.org/project/neurodiffeq/
MIT License

Oscillator equation in large domain #90

Closed: Arup-nit closed this 3 years ago

Arup-nit commented 3 years ago

Dear Liu, I am solving an oscillator equation on a large domain, but I am not getting the proper result. In particular, I am trying to solve

y'' + 0.2 (y^2 - 1) y' - y + y^3 = 0.53 cos x,    y(0) = 0.1,    y'(0) = -0.2

with x_min = 0.0, x_max = 50.0.

shuheng-liu commented 3 years ago

It's a known problem, likely because PyTorch initializes NN weights in [-1, 1] by default. There are two things you can try:

  1. initialize the NN weights in [-50, 50] instead of [-1, 1] (see the sketch below); or
  2. use a variable substitution t = x/50 and rewrite your ODE w.r.t. y and t.
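
A minimal sketch of a literal reading of option 1, assuming the network is neurodiffeq's FCNN (a stack of nn.Linear layers); reinit_weights is a hypothetical helper, and the [-50, 50] range is just the value suggested above:

```python
import torch.nn as nn
from neurodiffeq.networks import FCNN

def reinit_weights(module, lo=-50.0, hi=50.0):
    # Re-draw each linear layer's weights and biases uniformly in [lo, hi]
    if isinstance(module, nn.Linear):
        nn.init.uniform_(module.weight, lo, hi)
        nn.init.uniform_(module.bias, lo, hi)

fcnn = FCNN(hidden_units=(50,), actv=nn.Tanh)
fcnn.apply(reinit_weights)  # then pass this net to solve(..., net=fcnn)
```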
Arup-nit commented 3 years ago

Dear Liu, I am getting a better result now, but still not a good one for the above problem. Here is the code; please correct it if possible.

```python
import numpy as np
import torch
import torch.nn as nn
from torch import optim

# imports for the (older) neurodiffeq API used in this thread
from neurodiffeq import diff
from neurodiffeq.networks import FCNN
from neurodiffeq.ode import solve, IVP, ExampleGenerator, Monitor

# After the substitution t = x/50:
#   0.0004 x'' + 0.004 (x^2 - 1) x' - x + x^3 = 0.53 cos(50 t)
ode = lambda x, t: (
    0.0004 * diff(x, t, order=2)
    + 0.004 * (x**2 - 1) * diff(x, t)
    - x + x**3
    - 0.53 * torch.cos(50 * t)
)

t_min, t_max = 0.0, 1.0
N = 300

fcnn = FCNN(hidden_units=(50,), actv=nn.Tanh)
adam = optim.Adam(fcnn.parameters(), lr=0.001)
init_ode = IVP(t_0=t_min, x_0=0.1, x_0_prime=-0.2)
train_gen = ExampleGenerator(N, t_min=t_min, t_max=t_max, method="equally-spaced-noisy")

solution, loss_history = solve(
    ode=ode,
    condition=init_ode,
    train_generator=train_gen,
    t_min=t_min,
    t_max=t_max,
    net=fcnn,
    batch_size=N,
    max_epochs=5000,
    optimizer=adam,
    monitor=Monitor(t_min=t_min, t_max=t_max, check_every=100),
)

ts = np.linspace(0, 1.0, 11)
x_ANN = solution(ts, as_type='np')
```

```
>>> x_ANN
array([ 0.1       ,  0.06203723,  0.01820176, -0.00698296, -0.0123547 ,
       -0.00632258,  0.00297451,  0.0102482 ,  0.01257595,  0.00855986,
       -0.00222801])
```

shuheng-liu commented 3 years ago

Can you plot the loss history and see if the loss converges after 5000 epochs? If it has converged, I'd recommend using a more complex network. Since there's a nonlinear term x**3 in the equation, a single hidden layer with 50 hidden units might be insufficient.
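
For instance, a deeper network can be built with the same constructor (a sketch; the layer widths here are just an illustration):

```python
import torch.nn as nn
from neurodiffeq.networks import FCNN

# Two hidden layers of 50 units each instead of one
fcnn = FCNN(hidden_units=(50, 50), actv=nn.Tanh)
```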

Arup-nit commented 3 years ago

I built a network with 2 hidden layers and another with 3, each with 50 hidden units, but that is still not sufficient. Sorry, it may be easy, but I don't know how to plot the loss history. Is there any other way to fix it?

shuheng-liu commented 3 years ago

For starters, if you are working with jupyter notebooks, try

```python
%matplotlib notebook
import matplotlib.pyplot as plt

...
solution, loss_history = solve(...)

plt.figure()
for key, values in loss_history.items():
    plt.plot(values, label=key)
plt.yscale('log')
plt.legend()
```

A better way would be to use monitors; you can find more instructions on this documentation page. Search for "monitor" with your browser and you'll see its usage.

Note that you must use %matplotlib notebook (not %matplotlib inline) if you are working with jupyter notebooks.

shuheng-liu commented 3 years ago

Another way you can try is to rewrite this 2nd-order ODE as a system of first-order ODEs by introducing a new variable z = y'. I'm not sure if it helps, but it's worth trying.
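
A sketch of that reformulation, keeping the same t = x/50 substitution and initial values as the code above, and using the solve_system / ode_system API with one network per unknown (treat the details as illustrative):

```python
import torch
import torch.nn as nn
from neurodiffeq import diff
from neurodiffeq.networks import FCNN
from neurodiffeq.ode import solve_system, IVP

# With z = y', the 2nd-order equation becomes the system
#   dy/dt - z = 0
#   0.0004 dz/dt + 0.004 (y^2 - 1) z - y + y^3 - 0.53 cos(50 t) = 0
ode_system = lambda y, z, t: [
    diff(y, t) - z,
    0.0004 * diff(z, t) + 0.004 * (y**2 - 1) * z - y + y**3 - 0.53 * torch.cos(50 * t),
]

# One initial condition per unknown, matching the values used above
conditions = [IVP(t_0=0.0, x_0=0.1), IVP(t_0=0.0, x_0=-0.2)]
nets = [FCNN(hidden_units=(50,), actv=nn.Tanh) for _ in range(2)]

solution, loss_history = solve_system(
    ode_system=ode_system, conditions=conditions,
    t_min=0.0, t_max=1.0, nets=nets, max_epochs=5000,
)
```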

Arup-nit commented 3 years ago

> Can you plot the loss history and see if the loss converges after 5000 epochs? If it has converged, I'd recommend using a more complex network. Since there's a nonlinear term x**3 in the equation, a single hidden layer with 50 hidden units might be insufficient.

(attached: loss history plot)

Arup-nit commented 3 years ago

> Another way you can try is to rewrite this 2nd order ODE as a system of first-order ODEs by introducing a new variable z=y', I'm not sure if it helps but it's worth trying.

not working

shuheng-liu commented 3 years ago

From the loss plot, it looks like the loss hasn't converged yet. If you keep training, it should continue to go down.
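
A sketch of continuing training, reusing the objects from the code above and simply raising max_epochs (this assumes solve() trains a user-supplied net in place rather than re-initializing it):

```python
# Resume training the same network for more epochs; `fcnn` already holds
# the weights from the first run, so this continues rather than restarts
# (assuming solve() does not re-initialize a user-supplied net).
solution, loss_history = solve(
    ode=ode, condition=init_ode, train_generator=train_gen,
    t_min=t_min, t_max=t_max, net=fcnn, batch_size=N,
    max_epochs=20000, optimizer=adam,
)
```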