LLNL / pyMMAopt

GNU General Public License v2.0

test_compliance.py fails due to tolerance assertion. assert np.allclose(final_cost_func, result, rtol=1e-5) #2

Open b17jps1 opened 2 years ago

b17jps1 commented 2 years ago

I have installed Firedrake, PETSc, etc. The pyMMAopt test_compliance.py fails because `np` is undefined (the Python code is missing `import numpy as np`). After adding the import, I discovered that the tolerance in test_compliance.py is too tight for the solution the test computes. My question: how can I make the compliance test pass with rtol=1e-5? What is causing the divergence? Is it hardware or software? Is there a user parameter I can adjust to affect this behavior? Could it be a compiler optimization flag, or some other setting outside of pyMMAopt, perhaps in PETSc or Firedrake itself?

assert np.allclose(final_cost_func, result, rtol=1e-5)

The failure:

```
(firedrake) [l1057678@rdhpc-n1 tests]$ python3 test_compliance.py
firedrake:WARNING OMP_NUM_THREADS is not set or is set to a value greater than 1, we suggest setting OMP_NUM_THREADS=1 to improve performance
DOFS: 6262
Volume for MMA is: 29.99999999999996
Value: 3.0000000000000036, Constraint 14.999999999999797
rho0: 315.40116948173716, rhoi: [0.00666667]
Value: 7.872595385768886, Constraint 14.999999999999797
condition: fapp -8755.864348772528, new_fval 131.2644884640787
Recalculating rho ...
It: 21, obj: 7.3906379652341245 g[0]: -0.0007441687698033217 kkt: 0.123831 change: 0.304477 rel obj change: 0.005939
Time per iteration: 0.4173157215118408
Optimization finished with change: 0.30448 and iterations: 21
Traceback (most recent call last):
  File "test_compliance.py", line 134, in <module>
    test_compliance("L2", 7.420380654729631)
  File "test_compliance.py", line 126, in test_compliance
    assert np.allclose(final_cost_func, result, rtol=1e-5)
NameError: name 'np' is not defined
```

After adding the missing `import numpy as np`, we then find:

```
(firedrake) [l1057678@rdhpc-n1 tests]$ python3 test_compliance.py
firedrake:WARNING OMP_NUM_THREADS is not set or is set to a value greater than 1, we suggest setting OMP_NUM_THREADS=1 to improve performance
DOFS: 6262
Volume for MMA is: 29.99999999999996
Value: 3.0000000000000036, Constraint 14.999999999999797
rho0: 315.40116948173716, rhoi: [0.00666667]
Value: 7.872595385768886, Constraint 14.999999999999797
condition: fapp -8755.864348772528, new_fval 131.2644884640787
Recalculating rho ...
It: 21, obj: 7.3906379652341245 g[0]: -0.0007441687698033217 kkt: 0.123831 change: 0.304477 rel obj change: 0.005939
Time per iteration: 0.5018701553344727
Optimization finished with change: 0.30448 and iterations: 21
Traceback (most recent call last):
  File "test_compliance.py", line 134, in <module>
    test_compliance("L2", 7.420380654729631)
  File "test_compliance.py", line 127, in test_compliance
    assert np.allclose(final_cost_func, result, rtol=1e-5)
AssertionError
```

I then experimented with loosening the tolerance from 1e-5 to 1e-2:

```python
#assert np.allclose(final_cost_func, result, rtol=1e-5)
assert np.allclose(final_cost_func, result, rtol=1e-2)
```
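For context, `np.allclose(a, b, rtol=..., atol=...)` tests `|a - b| <= atol + rtol * |b|` (with `atol` defaulting to 1e-8). Plugging in the final objective from the run above (7.3906379652341245) and the reference value hard-coded in the test (7.420380654729631) gives a relative error of about 4e-3, which is exactly why rtol=1e-2 passes and rtol=1e-5 fails. A pure-Python sketch of the same check:

```python
# Reproduce np.allclose's scalar tolerance check without NumPy:
# np.allclose(a, b, rtol, atol) tests |a - b| <= atol + rtol * |b|.
final_cost_func = 7.3906379652341245  # obj reported in the run above
result = 7.420380654729631            # reference value passed to test_compliance

def allclose_scalar(a, b, rtol, atol=1e-8):
    return abs(a - b) <= atol + rtol * abs(b)

rel_err = abs(final_cost_func - result) / abs(result)
print(f"relative error: {rel_err:.3e}")                     # ~4.0e-3
print(allclose_scalar(final_cost_func, result, rtol=1e-5))  # False
print(allclose_scalar(final_cost_func, result, rtol=1e-2))  # True
```

So the observed run is off from the reference by roughly 0.4%, three orders of magnitude beyond the original tolerance.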

```
(firedrake) [l1057678@rdhpc-n1 tests]$ python3 test_compliance.py
firedrake:WARNING OMP_NUM_THREADS is not set or is set to a value greater than 1, we suggest setting OMP_NUM_THREADS=1 to improve performance
DOFS: 6262
Volume for MMA is: 29.99999999999996
Value: 3.0000000000000036, Constraint 14.999999999999797
rho0: 315.40116948173716, rhoi: [0.00666667]
Value: 7.872595385768886, Constraint 14.999999999999797
condition: fapp -8755.864348772528, new_fval 131.2644884640787
Recalculating rho ...
It: 21, obj: 7.3906379652341245 g[0]: -0.0007441687698033217 kkt: 0.123831 change: 0.304477 rel obj change: 0.005939
Time per iteration: 0.438732385635376
Optimization finished with change: 0.30448 and iterations: 21
```

salazardetroya commented 2 years ago

Yes, the tolerance might be very tight. This test is too general and was the first thing I came up with to test the overall algorithm. Maybe I should change it to test the change in a single iteration, where the divergence in the obtained values will be lower. The problem is very nonlinear and any little difference in the initial digits can lead to larger errors as the iterations progress.
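The error-amplification effect described above (this is an illustrative toy, not pyMMAopt code) can be seen with any sufficiently nonlinear iteration: two trajectories whose starting points agree to 12 digits drift apart by many orders of magnitude within a few dozen steps, so a test on the final value is far more sensitive to platform-level rounding differences than a test on a single iteration would be.

```python
# Toy illustration: a chaotic logistic map stands in for a very nonlinear
# optimization update. Two trajectories starting 1e-12 apart separate
# exponentially as the iterations progress.
def step(x):
    return 3.9 * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-12
for it in range(1, 41):
    a, b = step(a), step(b)
    if it in (1, 10, 25, 40):
        print(f"it {it:2d}: |a - b| = {abs(a - b):.3e}")
```

By iteration 40 the gap has grown far beyond the initial 1e-12 perturbation, which mirrors why a tight rtol on the 21st-iteration objective is fragile across machines and library versions.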