whtwu closed this issue 2 years ago.
Have you checked FAQ "Q: I failed to train the network or get the right solution, e.g., large training loss, unbalanced losses."
Thanks for your reply. Absolutely, yes, I have checked the FAQ and the other issues. The time domain is (0, 1), and the result gets better the closer t is to 1.
@lululxvi, looking forward to your suggestion.
You may try hard constraints for the BC.
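For reference, a hard constraint is usually imposed through an output transform on the network, so the corresponding Dirichlet/initial-condition terms drop out of the loss entirely. A minimal sketch, assuming the same (0, 1) space and time domains discussed in this thread (the layer sizes are placeholders):

```python
import deepxde as dde

# Assumed setup for illustration: x in [0, 1] and t in [0, 1].
geom = dde.geometry.Interval(0, 1)
timedomain = dde.geometry.TimeDomain(0, 1)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

net = dde.nn.FNN([2] + [32] * 3 + [1], "tanh", "Glorot normal")

# Output transform: u = 0 at x = 0, x = 1, and t = 0 is satisfied exactly,
# so those DirichletBC/IC objects can be removed from the data definition.
net.apply_output_transform(lambda x, y: x[:, 0:1] * (1 - x[:, 0:1]) * x[:, 1:2] * y)
```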
Is there any update on this issue? I want to know if the results improved when using hard constraints for BC.
I'm having the same problem. I did not end up using hard constraints for the BC, because I found it hard to represent my initial and boundary conditions as hard constraints without negatively impacting my results.
Looking forward to seeing this issue resolved!
This is my BC:
In the above code, I used a hard constraint to satisfy the Dirichlet BC:
net.apply_output_transform(lambda x, y: x[:,0:1] * (1 - x[:,0:1]) * x[:,1:] * y)
However, I don't know how to satisfy the OperatorBC at the same time.
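One common option is to keep only the OperatorBC as a soft constraint in the data object, while the Dirichlet BC and IC stay hard through the output transform. A sketch under those assumptions, reusing `geomtime` and `net` from the sketch above; the operator (zero bending moment, d²u/dx² = 0 at both ends) and the PDE residual below are placeholders, not the actual equations from this issue:

```python
import numpy as np
import deepxde as dde


def boundary_x(x, on_boundary):
    # Both ends of the beam, x = 0 and x = 1.
    return on_boundary and (np.isclose(x[0], 0) or np.isclose(x[0], 1))


# OperatorBC kept as a soft constraint, e.g. zero bending moment d2u/dx2 = 0.
bc_moment = dde.icbc.OperatorBC(
    geomtime,
    lambda x, y, _: dde.grad.hessian(y, x, i=0, j=0),
    boundary_x,
)


# Placeholder residual; the real beam equation goes here.
def pde(x, y):
    return dde.grad.hessian(y, x, i=1, j=1)


data = dde.data.TimePDE(
    geomtime,
    pde,
    [bc_moment],  # only the soft (operator) conditions are listed here
    num_domain=2000,
    num_boundary=200,
    num_initial=100,
)
```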
Hello @lululxvi, I have tried many ways to optimize the model, but unfortunately I can't get a more accurate result. I would be grateful for any suggestions. Thanks a lot!
```python
import deepxde as dde
import numpy as np
from deepxde.backend import tf
import matplotlib.pyplot as plt


def main():
    # Beam parameters
    b = 0.06              # width
    h = 0.002             # thickness
    e = 2e+11             # Young's modulus
    i = 4 * pow(10, -11)  # second moment of area, b * h**3 / 12
    rho = 7850            # density
    s = 12e-05            # cross-sectional area, b * h
    l = 1                 # beam length
    f = 10
    omiga = 1             # i.e., omega
    # ... (remaining PDE and model setup not shown)


if __name__ == "__main__":
    main()
```

```
Step      Train loss                                           Test loss                                            Test metric
0 [1.14e+01, 0.00e+00, 3.05e-02, 0.00e+00, 1.34e+00] [8.76e+00, 0.00e+00, 3.05e-02, 0.00e+00, 1.34e+00] [7.24e+00]
2000 [2.36e-04, 0.00e+00, 1.88e-04, 0.00e+00, 2.12e-04] [1.68e-04, 0.00e+00, 1.88e-04, 0.00e+00, 2.12e-04] [6.91e-02]
4000 [1.05e-04, 0.00e+00, 5.39e-05, 0.00e+00, 2.02e-05] [8.56e-05, 0.00e+00, 5.39e-05, 0.00e+00, 2.02e-05] [6.07e-02]
6000 [4.67e-04, 0.00e+00, 3.83e-05, 0.00e+00, 2.52e-05] [3.57e-04, 0.00e+00, 3.83e-05, 0.00e+00, 2.52e-05] [6.91e-02]
8000 [5.68e-05, 0.00e+00, 1.58e-05, 0.00e+00, 1.69e-06] [4.85e-05, 0.00e+00, 1.58e-05, 0.00e+00, 1.69e-06] [6.67e-02]
10000 [6.12e-05, 0.00e+00, 1.81e-05, 0.00e+00, 1.54e-06] [4.97e-05, 0.00e+00, 1.81e-05, 0.00e+00, 1.54e-06] [6.67e-02]
12000 [2.41e-04, 0.00e+00, 2.88e-05, 0.00e+00, 1.84e-05] [2.42e-04, 0.00e+00, 2.88e-05, 0.00e+00, 1.84e-05] [9.14e-02]
14000 [4.24e-05, 0.00e+00, 1.62e-05, 0.00e+00, 3.11e-06] [3.57e-05, 0.00e+00, 1.62e-05, 0.00e+00, 3.11e-06] [6.79e-02]
16000 [3.89e-05, 0.00e+00, 1.60e-05, 0.00e+00, 3.17e-06] [3.30e-05, 0.00e+00, 1.60e-05, 0.00e+00, 3.17e-06] [6.78e-02]
18000 [2.43e-04, 0.00e+00, 3.28e-05, 0.00e+00, 9.17e-05] [2.15e-04, 0.00e+00, 3.28e-05, 0.00e+00, 9.17e-05] [1.21e-01]
20000 [3.43e-05, 0.00e+00, 1.44e-05, 0.00e+00, 3.45e-06] [2.92e-05, 0.00e+00, 1.44e-05, 0.00e+00, 3.45e-06] [6.75e-02]
Best model at step 20000: train loss: 5.21e-05 test loss: 4.70e-05 test metric: [6.75e-02]
'train' took 786.180204 s
Compiling model... 'compile' took 4.721393 s
Training model...
Step Train loss Test loss Test metric
20000 [3.43e-05, 0.00e+00, 1.44e-05, 0.00e+00, 3.45e-06] [2.92e-05, 0.00e+00, 1.44e-05, 0.00e+00, 3.45e-06] [6.75e-02]
21000 [5.80e-06, 0.00e+00, 3.88e-06, 0.00e+00, 2.78e-06]
21915 [4.72e-06, 0.00e+00, 2.21e-06, 0.00e+00, 1.93e-06] [3.49e-06, 0.00e+00, 2.21e-06, 0.00e+00, 1.93e-06] [5.94e-02]
Best model at step 21915: train loss: 8.86e-06 test loss: 7.63e-06 test metric: [5.94e-02]
```
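For what it's worth, the second compile/train stage in the log above looks like the usual Adam-then-L-BFGS schedule. A sketch of that pattern, assuming `data` and `net` as in the sketches earlier in the thread (the argument is `iterations` in recent DeepXDE versions and `epochs` in older ones):

```python
model = dde.Model(data, net)

# Stage 1: Adam for 20000 iterations.
model.compile("adam", lr=1e-3)
losshistory, train_state = model.train(iterations=20000)

# Stage 2: refine with L-BFGS; it stops when its own convergence
# criteria are met rather than at a fixed step count.
model.compile("L-BFGS")
losshistory, train_state = model.train()

dde.saveplot(losshistory, train_state, issave=False, isplot=True)
```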