ViktorC closed this issue 4 years ago.
As you can see, the first two losses, i.e., the ODE losses, are very large. Could you try using a smaller time domain first, e.g., [0, 2]?
Thanks for the quick response.
Ah, I was under the impression that an MSE on the order of 10^-3 or 10^-4 is a small loss. What would be good values to aim for generally?
As per your suggestion, I tried it using a smaller domain with fairly good results. I then tried to expand the domain gradually while also increasing the number of training points. To be able to minimise the loss, I had to increase the depth of the network as well. Finally, using 4000 training points and 5 hidden layers, I managed to get really good results for the [0, 10] interval. Interestingly, the magnitude of the MSE was roughly the same as before (10^-4 to 10^-3) but the solution looked perfect this time around.
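For readers landing on this thread, a minimal DeepXDE sketch of such a setup might look like the following. This is not ViktorC's actual script: the Lotka-Volterra coefficients, the initial values, and the layer width are placeholders (the thread does not state them), and some API names (e.g., `iterations` vs. `epochs`, `dde.icbc.IC` vs. `dde.IC`) vary between DeepXDE versions.

```python
import numpy as np
import deepxde as dde

# Placeholder Lotka-Volterra coefficients -- the actual values are not given in the thread.
alpha, beta, gamma, delta = 1.0, 0.1, 1.5, 0.075

def lotka_volterra(t, y):
    # y[:, 0:1] is the prey population u, y[:, 1:2] the predator population v.
    u, v = y[:, 0:1], y[:, 1:2]
    du_t = dde.grad.jacobian(y, t, i=0)
    dv_t = dde.grad.jacobian(y, t, i=1)
    # ODE residuals: du/dt = alpha*u - beta*u*v, dv/dt = delta*u*v - gamma*v
    return [du_t - (alpha * u - beta * u * v),
            dv_t - (delta * u * v - gamma * v)]

geom = dde.geometry.TimeDomain(0, 10)

# Placeholder initial conditions u(0) = 10, v(0) = 5 -- not stated in the thread.
ic_u = dde.icbc.IC(geom, lambda X: np.full_like(X, 10.0),
                   lambda _, on_initial: on_initial, component=0)
ic_v = dde.icbc.IC(geom, lambda X: np.full_like(X, 5.0),
                   lambda _, on_initial: on_initial, component=1)

# 4000 collocation points, as described above.
data = dde.data.PDE(geom, lotka_volterra, [ic_u, ic_v], num_domain=4000, num_boundary=2)

# 5 hidden layers; the width of 50 is an assumption, not mentioned in the thread.
net = dde.nn.FNN([1] + [50] * 5 + [2], "tanh", "Glorot normal")

model = dde.Model(data, net)
model.compile("adam", lr=1e-3)
# On older DeepXDE versions, use model.train(epochs=...) instead of iterations=...
losshistory, train_state = model.train(iterations=50000)
dde.saveplot(losshistory, train_state, issave=False, isplot=True)
```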
Usually I aim for MSE smaller than 10^-4 to achieve good accuracy.
Great, I'll use that as the target in the future. 🙂 Thank you for your help!
Hi ViktorC,
I am not an expert in this area and am trying to explore this field. It would be a great help if you could explain the initial conditions you used for this problem.
Hi Lu,
Why would a smaller domain help? Is this related to feature scaling, or to the higher frequency content of the solution?
Thanks, Haochen
That is one reason. It is also related to how hard the network is to optimize with SGD.
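To make the feature-scaling side of this concrete, one common workaround (a sketch only, not something prescribed in this thread) is to rescale time so the network input stays in a well-scaled range, absorbing the domain length into the ODE right-hand side via the chain rule. The coefficient names below are the same placeholders as in the earlier sketch.

```python
import deepxde as dde

# Same placeholder coefficients as in the sketch above (not from the thread).
alpha, beta, gamma, delta = 1.0, 0.1, 1.5, 0.075
T = 10.0  # length of the original time domain

def lotka_volterra_rescaled(tau, y):
    # Work in rescaled time tau = t / T so the network input stays in [0, 1].
    # Chain rule: du/dt = (1/T) du/dtau, so the right-hand side gains a factor of T.
    u, v = y[:, 0:1], y[:, 1:2]
    du_tau = dde.grad.jacobian(y, tau, i=0)
    dv_tau = dde.grad.jacobian(y, tau, i=1)
    return [du_tau - T * (alpha * u - beta * u * v),
            dv_tau - T * (delta * u * v - gamma * v)]

geom = dde.geometry.TimeDomain(0, 1)  # rescaled time domain
```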
Hi Lulu, first of all I want to thank you for your awesome library! Could you please explain what each column of the train loss output means? Thank you so much!
@camolina2 One column is the value of one loss term.
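To make that concrete with the earlier Lotka-Volterra sketch: with two ODE residuals and two initial conditions, the training output has four loss columns. As noted earlier in the thread, the first columns are the ODE residual losses and the remaining ones are the IC/BC losses, in the order they were passed to `dde.data.PDE`. The same values are also stored on the returned history object (a sketch, assuming the `model` from the earlier example):

```python
# Assuming `model` from the Lotka-Volterra sketch earlier in the thread.
losshistory, train_state = model.train(iterations=10000)  # or epochs= on older versions

# Each logged step stores one value per loss term, in the same order as the
# printed columns: here 2 ODE residuals followed by 2 initial conditions.
print(losshistory.steps[-1], losshistory.loss_train[-1])
```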
Hi ViktorC, many thanks for sharing the code. I did the same as you, 4000 training points and 5 hidden layers, but I still obtain the second graph from your post. I would be thankful if you could share the updated code.
Hi @lululxvi,
Thanks for the great library.
I have been trying to solve the Lotka-Volterra equations using DeepXDE, but I can't seem to get very good results. I have tried various numbers of layers and layer sizes, different numbers of collocation points, different learning rates, etc., but none of these seemed to help. Could you please help me identify what I might be doing wrong?
As you can see below, the test loss of the optimised PINN indicates good performance. However, the plotted prediction seems off.
Solution: [plot]
PINN prediction: [plot]
Do you know what the problem could be? Any pointers would be much appreciated.
Many thanks, Viktor