odow closed this 2 years ago
I think this answers our questions.

The Python callbacks are faster (106 seconds in Ipopt instead of 180), and they also have much lower overhead: ~50 seconds compared with ~145 for the NL file. Some of that overhead is the cost of writing and reading the NL file relative to JuMP, but it can't account for an extra 100 seconds, or we would have noticed it when comparing JuMP against NL files using Ipopt_jll.
Python callbacks:

```
Total seconds in IPOPT = 106.723

EXIT: Optimal Solution Found.
      model  :   t_proc      (avg)   t_wall      (avg)    n_eval
      nlp_f  |  22.99ms (270.48us)   3.17ms ( 37.25us)        85
      nlp_g  |   3.40 s ( 39.98ms) 480.38ms (  5.65ms)        85
 nlp_grad_f  | 266.91ms (  3.07ms)  36.94ms (424.61us)        87
 nlp_hess_l  |  10.56 s (125.74ms)   1.49 s ( 17.77ms)        84
  nlp_jac_g  |   8.89 s (103.37ms)   1.25 s ( 14.56ms)        86
      total  | 760.02 s (760.02 s) 106.73 s (106.73 s)         1

Summary
  case........: /Users/oscar/Documents/lanl-ansi/rosetta-opf/variants/../data/pglib_opf_case10000_goc.m
  variables...: 76804
  constraints.: 112352
  feasible....: true
  cost........: 1354031
  total time..: 153.1173529624939
    data time.: 9.699323892593384
    build time: 35.126672983169556
    solve time: 108.29135513305664
```
NL file:

```
Total seconds in IPOPT = 181.199

EXIT: Optimal Solution Found.

Summary
  case........: /Users/oscar/Documents/lanl-ansi/rosetta-opf/variants/../data/pglib_opf_case10000_goc.m
  variables...: 76804
  constraints.: 112352
  feasible....: true
  cost........: 1354031
  total time..: 325.1633548736572
    data time.: 9.590620040893555
    build time: 8.191709041595459
    solve time: 307.3810248374939
```
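For reference, the ~50 s and ~145 s overhead figures quoted above can be reproduced from these logs; a minimal sketch, taking overhead to be the reported total time minus the seconds Ipopt says it spent solving (the run names here are labels for this snippet, not part of the logs):

```python
# Overhead = total wall-clock time - seconds Ipopt reports for the solve.
# Numbers copied from the two log dumps above.
runs = {
    "Python callbacks": {"total": 153.117, "ipopt": 106.723},
    "NL file": {"total": 325.163, "ipopt": 181.199},
}
for name, t in runs.items():
    overhead = t["total"] - t["ipopt"]
    print(f"{name}: overhead = {overhead:.1f} s")
# Python callbacks: overhead = 46.4 s
# NL file: overhead = 144.0 s
```

The ~46 s and ~144 s results line up with the "~50 seconds compared with ~145" claim in the comment.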
Closes #21
Alternative to #24