Background and Context
At the moment, model calibration with negatives uses SciPy's default optimizer (BFGS). This works as expected for small graphs, but as the graph size increases the optimization takes much longer and requires much more memory, making calibration slow and resource-heavy. This could be solved by switching to a gradient-based optimizer.
Description
The fix would be either to add an extra option to the calibrate function, e.g. gradient_optimizer=False, providing an alternative for when the default optimization does not scale while keeping the existing behaviour as the default, or to remove the SciPy optimization altogether and substitute it with a gradient-based one.