Closed asdasdqwdqwfsdf closed 2 years ago
What do you mean by it's not correct? How does the loss convergence look? Did you try other neural networks? That's a small neural network. Etc.
1.) What do you mean by it's not correct?
Answer: Run this model using "NeuralPDE.jl" and check it against the analytical/numerical results from [https://arxiv.org/pdf/2007.04542.pdf]; then you will see that the result from "NeuralPDE.jl" is totally wrong.
2.) How does the loss convergence look?
Answer: The loss convergence looks very good, but the result is totally wrong. Again, you can just check the analytical/numerical results from [https://arxiv.org/pdf/2007.04542.pdf].
3.) Did you try other neural networks? That's a small neural network. Etc.
Answer: I have extended it as follows: FastChain(FastDense(2,16,Flux.σ),FastDense(16,16,Flux.σ),FastDense(16,16,Flux.σ),FastDense(16,16,Flux.σ),FastDense(16,16,Flux.σ),FastDense(16,16,Flux.σ),FastDense(16,1))
Its loss convergence looks very good, but the result is totally wrong. Again, we can check the analytical/numerical results from [https://arxiv.org/pdf/2007.04542.pdf].
As mentioned in the paper [https://arxiv.org/pdf/2007.04542.pdf], we have to use certain strategies to improve the accuracy of physics-informed neural networks, and this problem is exactly such a case!
Now, coming back to the original question: how can we choose the deep learning parameters for a stable & correct solution to this problem using "NeuralPDE.jl"?
First of all, what is the validation you used to ensure you implemented the equation you thought you did? How did you test it?
Answer: Run this model using "NeuralPDE.jl" and check it against the analytical/numerical results from [https://arxiv.org/pdf/2007.04542.pdf]; then you will see that the result from "NeuralPDE.jl" is totally wrong.
I haven't run the code, and I'd like you (or someone else) to share what some of the results look like; otherwise I won't be able to get to debugging it for a long time.
Answer: the loss convergence looks very good
That's not an answer. An answer to that would be a plot.
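For reference, this is the shape of comparison plot that would answer the question — a minimal sketch, assuming the trained prediction function `phi` and optimization result `res` from a standard NeuralPDE.jl discretization, and a hypothetical `analytic(x, t)` reference solution taken from the paper (all three names are assumptions, not part of the code posted above):

```julia
using Plots

# Hypothetical names: `phi` is the trained network returned by the
# NeuralPDE.jl discretization, `res.minimizer` its trained parameters,
# and `analytic(x, t)` a reference solution implemented from the paper.
xs = range(0, 1, length = 100)
t  = 0.5
pred = [first(phi([x, t], res.minimizer)) for x in xs]
ref  = [analytic(x, t) for x in xs]

# Overlaying the PINN prediction on the reference makes the
# "totally wrong" claim concrete and debuggable.
plot(xs, pred, label = "PINN prediction")
plot!(xs, ref, label = "reference solution", linestyle = :dash)
```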
Answer: I have extended it as follows: FastChain(FastDense(2,16,Flux.σ),FastDense(16,16,Flux.σ),FastDense(16,16,Flux.σ),FastDense(16,16,Flux.σ),FastDense(16,16,Flux.σ),FastDense(16,16,Flux.σ),FastDense(16,1))
That's a bad idea. Deep nets with small intermediate layers both don't parallelize well and do not satisfy universal approximation.
opt = Optim.BFGS()
You used pure BFGS. That's a bad idea without setting allow_f_increases=true: you will hit a local optimum. Did you try running ADAM first and then finishing with BFGS?
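The two-stage training suggested here can be sketched as follows, against the GalacticOptim-based interface that this thread's NeuralPDE.jl version used (`prob` is the discretized optimization problem; exact keyword passing varies across GalacticOptim versions, so treat this as a sketch rather than a definitive recipe):

```julia
using GalacticOptim, Optim, Flux

# Stage 1: ADAM to move into a good basin of attraction.
res = GalacticOptim.solve(prob, ADAM(0.01); maxiters = 2000)

# Stage 2: refine with BFGS from the ADAM solution, allowing
# temporary objective increases so the line search isn't trapped.
prob2 = remake(prob; u0 = res.minimizer)
res2  = GalacticOptim.solve(prob2, BFGS();
                            allow_f_increases = true, maxiters = 500)
```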
Wait, why was this closed?
This is most likely a user issue, but I think it's fine to keep it open and turn it into a tutorial. @KirillZubov do you want to investigate this model over the next few weeks?
yeah, sure
This works now with the correct training setup.
Hi @ChrisRackauckas and @KirillZubov
I am trying to solve a nonlinear second-order boundary value problem in "NeuralPDE.jl", but the numerical result is still NOT correct.
The nonlinear second-order model in "NeuralPDE.jl" looks like:
Therefore, for nonlinear second-order boundary value problems, how can we choose the parameters for a stable & correct solution using "NeuralPDE.jl"?
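Putting the advice from earlier in the thread together, a training setup that tends to be stable for problems like this has roughly this shape. This is a sketch against the older NeuralPDE.jl/DiffEqFlux/GalacticOptim API used in this thread (constructor names and signatures differ across NeuralPDE versions), and `pde_system` is a placeholder, since the actual model definition was not included above:

```julia
using NeuralPDE, DiffEqFlux, Flux, GalacticOptim, Optim

# Placeholder: build `pde_system` from your @parameters/@variables,
# equations, boundary conditions, and domains as usual.

# A wider, shallower network, per the discussion above.
chain = FastChain(FastDense(2, 32, Flux.σ),
                  FastDense(32, 32, Flux.σ),
                  FastDense(32, 1))

# Quadrature-based training is often more accurate than a coarse grid
# for stiff/nonlinear problems (one of the strategies the paper alludes to).
strategy       = NeuralPDE.QuadratureTraining()
discretization = NeuralPDE.PhysicsInformedNN(chain, strategy)
prob           = NeuralPDE.discretize(pde_system, discretization)

# ADAM first, then BFGS, as suggested earlier in the thread.
res  = GalacticOptim.solve(prob, ADAM(0.01); maxiters = 2000)
prob = remake(prob; u0 = res.minimizer)
res  = GalacticOptim.solve(prob, BFGS(); maxiters = 500)
```

Whatever setup you land on, validate it the same way as above: plot the trained prediction against the paper's analytical/numerical solution, not just the loss curve.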