Description

The PR solves #587 by adding support for `effect="constant"`.
Additionally, as discussed in the issue, it removes the need for autograd by implementing the derivative by hand (😂) and by using `scipy.optimize.minimize` for the optimization.
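For illustration, here is a minimal sketch of that general pattern, i.e. passing a hand-written gradient to `scipy.optimize.minimize` via `jac`. The loss, gradient, and data below are placeholders, not the PR's actual implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in data; the PR's actual loss operates on the estimator's X and y.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

def loss(w):
    # Placeholder loss (mean squared error), not the deadzone loss from the PR.
    return np.mean((X @ w - y) ** 2)

def grad(w):
    # The matching derivative, written out by hand instead of via autograd.
    return 2.0 * X.T @ (X @ w - y) / len(y)

# Passing the analytic gradient through `jac` lets scipy skip
# numerical differentiation entirely.
result = minimize(loss, x0=np.zeros(X.shape[1]), jac=grad, method="L-BFGS-B")
print(result.x)  # close to [1.0, -2.0, 0.5]
```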
This leads to a few potentially breaking changes in the API, as the following arguments are no longer needed (see the usage sketch after these lists):
- `n_iter`
- `stepsize`
- `check_grad`
and the following attributes are no longer available:
- `losslog`
- `wtslog`
- `derivlog`
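For concreteness, a hedged sketch of what the constructor change means for user code. The class name and import path here are an assumption (the estimator is not named in this description); scikit-lego's `DeadZoneRegressor` is the likely candidate given the `effect` parameter:

```python
import numpy as np
# Assumption: the affected estimator is sklego's DeadZoneRegressor.
from sklego.linear_model import DeadZoneRegressor

X = np.random.normal(size=(50, 2))
y = X @ np.array([1.0, 2.0])

# Before this PR, the gradient-descent knobs sat in the constructor, e.g.:
# DeadZoneRegressor(effect="linear", n_iter=2000, stepsize=0.01, check_grad=False)

# After: those arguments are gone and the new effect is available.
model = DeadZoneRegressor(effect="constant")
model.fit(X, y)
```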
Type of change
- [X] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [X] Breaking change (fix or feature that would cause existing functionality to not work as expected)
Checklist:
- [ ] My code follows the style guidelines (flake8)
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation (also to the readme.md)
- [X] I have added tests that prove my fix is effective or that my feature works
- [ ] I have added tests to check whether the new feature adheres to the sklearn convention
- [X] New and existing unit tests pass locally with my changes
If you feel your PR is ready for a review, ping @koaning or @mbrouns.