Closed by miguelriemoliveira 2 years ago
Implemented using the following strategy: before starting the optimization procedure, call a prior optimization with the same parameters and objective function, but with the maximum number of iterations set to 1. Counting the function calls in that probe run gives the number of objective function calls per optimization iteration.
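The probe-run idea can be sketched as follows. This is a minimal sketch with a toy objective and `scipy.optimize.minimize`; ATOM's actual optimizer interface differs, so treat names here as illustrative only:

```python
import numpy as np
from scipy.optimize import minimize

call_count = {'n': 0}

def objective(x):
    # Every call is counted, including the ones triggered by
    # finite-difference gradient estimation inside the optimizer.
    call_count['n'] += 1
    return float(np.sum((x - 3.0) ** 2))

x0 = np.zeros(4)
# Probe run: same objective and parameters, but capped at one iteration.
minimize(objective, x0, method='BFGS', options={'maxiter': 1})
num_function_calls_per_iteration = call_count['n']
```

Since BFGS estimates the gradient numerically here, a single iteration costs more objective calls than there are parameters.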
This information is passed on to the objective function in the data dictionary, under the 'status' key, which contains a dictionary like this:
```python
self.data_models['status'] = {'is_iteration': False, 'num_iterations': 0, 'num_function_calls': 0,
                              'num_function_calls_per_iteration': None, }
```
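With that dictionary in place, the objective function can flag core iterations by counting its own calls. A hypothetical sketch (the helper name is mine; the keys mirror the status dictionary above):

```python
def update_status(status):
    """Hypothetical helper: marks the objective call that opens a new core iteration."""
    status['num_function_calls'] += 1
    n = status['num_function_calls_per_iteration']
    status['is_iteration'] = (n is not None and
                              (status['num_function_calls'] - 1) % n == 0)
    if status['is_iteration']:
        status['num_iterations'] += 1
    return status['is_iteration']

status = {'is_iteration': False, 'num_iterations': 0,
          'num_function_calls': 0, 'num_function_calls_per_iteration': 3}
flags = [update_status(status) for _ in range(6)]
# flags -> [True, False, False, True, False, False]; two core iterations counted
```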
Tested by printing in the objective function only when is_iteration is true. The results are:
```
Iteration Total nfev Cost Cost reduction Step norm Optimality
0 1 1.4967e+05 6.54e+05
Errors per sensor:
camera_2 24.857510083418767
camera_3 38.44974792075143
camera_4 0.9464414121276835
depth_camera_1 0.1163564076307508
lidar_1 0.15269560404876178
lidar_2 0.031541680536465035
lidar_3 0.08604138052442725
Collection 0 camera_2 has 25.94944022976728
Collection 0 camera_3 has nan
Collection 0 camera_4 has 0.517467937552238
Collection 0 depth_camera_1 has 0.11155734440920555
Collection 0 lidar_1 has 0.16693799780454174
Collection 0 lidar_2 has 0.03384938430815021
Collection 0 lidar_3 has 0.103798840936462
Collection 1 camera_2 has 23.787864225771244
Collection 1 camera_3 has 38.44974792075143
Collection 1 camera_4 has 1.4068031897208444
Collection 1 depth_camera_1 has 0.12169884718280402
Collection 1 lidar_1 has 0.1346339157983671
Collection 1 lidar_2 has 0.029288813290047632
Collection 1 lidar_3 has 0.06967292391762837
1 2 3.8180e+03 1.46e+05 1.27e+00 1.11e+05
Errors per sensor:
camera_2 9.170110934344107
camera_3 5.598572032758783
camera_4 0.48612529731443577
depth_camera_1 0.04274358945558603
lidar_1 0.01410315486603775
lidar_2 0.006420170376279852
lidar_3 0.009010069051147936
Collection 0 camera_2 has 9.294043939020403
Collection 0 camera_3 has nan
Collection 0 camera_4 has 0.36149993509087025
Collection 0 depth_camera_1 has 0.04053683184459292
Collection 0 lidar_1 has 0.014899420996272384
Collection 0 lidar_2 has 0.006601975184441967
Collection 0 lidar_3 has 0.00871258756913754
Collection 1 camera_2 has 9.048707174661203
Collection 1 camera_3 has 5.598572032758783
Collection 1 camera_4 has 0.619869588481189
Collection 1 depth_camera_1 has 0.04520020829941281
Collection 1 lidar_1 has 0.013093358980931743
Collection 1 lidar_2 has 0.006242685682371094
Collection 1 lidar_3 has 0.009284281276129909
2 3 2.6433e+02 3.55e+03 6.29e-01 2.60e+04
```
Also changed the behavior of setVisualizationFunction so that its counter now uses the real core iterations instead of just the function calls.
```python
def setVisualizationFunction(self, handle, always_visualize, niterations=0, figures=None):
    """ Sets up the visualization function to be called to plot the data during the optimization procedure.

    :param handle: handle to the function
    :param always_visualize: call the visualization function during the optimization or only at the end
    :param niterations: multiple of core iterations at which the visualization function is called
    :param figures:
    """
```
Also tested on the larcc calibration. Everything seems ok.
The problem is that the objective function is called not only to update the parameters (core iterations), but also to estimate the gradient (auxiliary iterations).
The goal is to find a way to know whether the current call corresponds to a core iteration, which would be useful for visualization purposes, among others.
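The mismatch is easy to observe on a toy `scipy.optimize.least_squares` problem: a counter wrapped around the residual function records every evaluation, including the ones used to finite-difference the Jacobian, so it tends to exceed the `nfev` the result reports (a minimal sketch, not ATOM code):

```python
import numpy as np
from scipy.optimize import least_squares

calls = {'n': 0}

def residuals(x):
    # Counts every evaluation, including the ones triggered
    # by finite-difference Jacobian estimation.
    calls['n'] += 1
    return np.array([x[0] - 1.0, 10.0 * (x[1] - x[0] ** 2)])

result = least_squares(residuals, np.array([2.0, 2.0]))
print('total objective calls:', calls['n'])
print('reported nfev:', result.nfev)
```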
@FYI @tiagomfmadeira @danifpdra @eupedrosa