shankarpandala / lazypredict

Lazy Predict helps build many basic models without much code and helps you understand which models work better without any parameter tuning. A minimal usage sketch is shown below.
MIT License
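For context, here is a minimal usage sketch of what the description above refers to, based on lazypredict's documented `LazyClassifier` interface; the dataset and split are illustrative, not taken from this repository:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from lazypredict.Supervised import LazyClassifier

# Illustrative data: any tabular classification dataset works the same way.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Fit many baseline classifiers with default parameters and rank them by score.
clf = LazyClassifier(verbose=0, ignore_warnings=True)
models, predictions = clf.fit(X_train, X_test, y_train, y_test)
print(models)
```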

Update optuna to 2.4.0 #310

Closed - pyup-bot closed this 3 years ago

pyup-bot commented 3 years ago

This PR updates optuna from 1.5.0 to 2.4.0.
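Since this jump crosses a major version (1.x to 2.x), a small runtime guard can catch a stale environment before any 2.x-only API (such as multi-objective studies) is used. A minimal sketch, assuming `packaging` is importable (it is listed in optuna's install requirements as of 2.x):

```python
import optuna
from packaging import version  # packaging is an optuna dependency as of 2.x

# Fail fast if an environment still has the previously pinned optuna 1.5.0.
assert version.parse(optuna.__version__) >= version.parse("2.4.0"), optuna.__version__
```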

Changelog

### 2.4.0

```
This is the release note of [v2.4.0](https://github.com/optuna/optuna/milestone/30?closed=1).

Highlights

Python 3.9 Support

This is the first version to officially support Python 3.9. Everything is tested with the exception of certain integration modules under `optuna.integration`. We will continue to extend the support in the coming releases.

Multi-objective Optimization

Multi-objective optimization in Optuna is now a stable first-class citizen. Multi-objective optimization allows optimizing multiple objectives at the same time, such as maximizing model accuracy while minimizing model inference time. Single-objective optimization can be extended to multi-objective optimization by:

1. specifying a sequence (e.g. a tuple) of `directions` instead of a single `direction` in `optuna.create_study`; both parameters are supported for backwards compatibility,
2. (optionally) specifying a sampler that supports multi-objective optimization in `optuna.create_study`; if skipped, it defaults to the `NSGAIISampler`,
3. returning a sequence of values instead of a single value from the objective function.

Multi-objective Samplers

Samplers that currently support multi-objective optimization are the `NSGAIISampler`, the `MOTPESampler`, the `BoTorchSampler`, and the `RandomSampler`.

Example

    import optuna

    def objective(trial):
        # The Binh and Korn function. It has two objectives to minimize.
        x = trial.suggest_float("x", 0, 5)
        y = trial.suggest_float("y", 0, 3)
        v0 = 4 * x ** 2 + 4 * y ** 2
        v1 = (x - 5) ** 2 + (y - 5) ** 2
        return v0, v1

    sampler = optuna.samplers.NSGAIISampler()
    study = optuna.create_study(directions=["minimize", "minimize"], sampler=sampler)
    study.optimize(objective, n_trials=100)

    # Get a list of the best trials.
    best_trials = study.best_trials

    # Visualize the best trials (i.e. the Pareto front) in blue.
    fig = optuna.visualization.plot_pareto_front(study, target_names=["v0", "v1"])
    fig.show()

![v240_pareto_front](https://user-images.githubusercontent.com/5983694/104276451-3992cd00-54e8-11eb-8489-5480faaaefe0.png)

Migrating from the Experimental `optuna.multi_objective`

`optuna.multi_objective` used to be an experimental submodule for multi-objective optimization. This submodule is now deprecated. The changes required to migrate to the new interfaces are subtle, as described by the steps in the previous section.

Database Storage Schema Upgrade

With the introduction of multi-objective optimization, the database storage schema for the `RDBStorage` has been changed. To continue to use databases from v2.3, run the following command to upgrade your tables. Please create a backup of the database beforehand.

    optuna storage upgrade --storage <URL to the storage, e.g. sqlite:///example.db>

BoTorch Sampler

`BoTorchSampler` is an experimental sampler based on BoTorch. BoTorch is a library for Bayesian optimization using PyTorch. See the [example](https://github.com/optuna/optuna/blob/release-v2.4.0/examples/botorch_simple.py) for example usage.

Constrained Optimization

For the first time in Optuna, `BoTorchSampler` allows constrained optimization. Users can impose constraints on hyperparameters or objective function values as follows.

    import optuna

    def objective(trial):
        x = trial.suggest_float("x", -15, 30)
        y = trial.suggest_float("y", -15, 30)

        # Constraints which are considered feasible if less than or equal to zero.
        # The feasible region is basically the intersection of a circle centered at
        # (x=5, y=0) and the complement to a circle centered at (x=8, y=-3).
        c0 = (x - 5) ** 2 + y ** 2 - 25
        c1 = -((x - 8) ** 2) - (y + 3) ** 2 + 7.7

        # Store the constraints as user attributes so that they can be restored after optimization.
        trial.set_user_attr("constraint", (c0, c1))

        return x ** 2 + y ** 2

    def constraints(trial):
        return trial.user_attrs["constraint"]

    # Specify the constraint function when instantiating the `BoTorchSampler`.
    sampler = optuna.integration.BoTorchSampler(constraints_func=constraints)
    study = optuna.create_study(sampler=sampler)
    study.optimize(objective, n_trials=32)

Multi-objective Optimization

`BoTorchSampler` supports both single- and multi-objective optimization. By default, the sampler selects the appropriate sampling algorithm with respect to the number of objectives.

Customizability

`BoTorchSampler` is customizable via the `candidates_func` callback parameter. Users familiar with BoTorch can change the surrogate model, acquisition function, and its optimizer in this callback to utilize any of the algorithms provided by BoTorch.

Visualization with Callback-Specified Target Values

Visualization functions can now plot values other than objective values, such as inference time or evaluation by other metrics. Users can specify the values to be plotted via the `target` argument. Even in multi-objective optimization, the visualization functions are available by using the `target` argument to select a specific objective. (A short usage sketch appears after the changelog below.)

New Tutorials

[The tutorial](https://optuna.readthedocs.io/en/v2.4.0/tutorial/index.html) has been improved, and new content for each of Optuna's key features has been added. More content will be added in the future. Please look forward to it!

Breaking Changes

- Allow filtering trials from `Study` and `BaseStorage` based on `TrialState` (1943)
- Stop storing error stack traces in `fail_reason` in trial `system_attr` (1964)
- Importance with target values other than objective value (2109)

New Features

- Implement `plot_contour` and `_get_contour_plot` with Matplotlib backend (1782, thanks ytknzw!)
- Implement `plot_param_importances` and `_get_param_importance_plot` with Matplotlib backend (1787, thanks ytknzw!)
- Implement `plot_slice` and `_get_slice_plot` with Matplotlib backend (1823, thanks ytknzw!)
- Add `PartialFixedSampler` (1892, thanks norihitoishida!)
- Allow filtering trials from `Study` and `BaseStorage` based on `TrialState` (1943)
- Add rung promotion limitation in ASHA/Hyperband to enable arbitrary unknown length runs (1945, thanks alexrobomind!)
- Add Fastai V2 pruner callback (1954, thanks hal-314!)
- Support options available on AllenNLP except to `node_rank` and `dry_run` (1959)
- Universal data transformer (1987)
- Introduce `BoTorchSampler` (1989)
- Add axis order for `plot_pareto_front` (2000, thanks okdshin!)
- `plot_optimization_history` with target values other than objective value (2064) - `plot_contour` with target values other than objective value (2075) - `plot_parallel_coordinate` with target values other than objective value (2089) - `plot_slice` with target values other than objective value (2093) - `plot_edf` with target values other than objective value (2103) - Importance with target values other than objective value (2109) - Migrate `optuna.multi_objective.visualization.plot_pareto_front` (2110) - Raise `ValueError` if `target` is `None` and `study` is for multi-objective optimization for `plot_contour` (2112) - Raise `ValueError` if `target` is `None` and `study` is for multi-objective optimization for `plot_edf` (2117) - Raise `ValueError` if `target` is `None` and `study` is for multi-objective optimization for `plot_optimization_history` (2118) - `plot_param_importances` with target values other than objective value (2119) - Raise `ValueError` if `target` is `None` and `study` is for multi-objective optimization for `plot_parallel_coordinate` (2120) - Raise `ValueError` if `target` is `None` and `study` is for multi-objective optimization for `plot_slice` (2121) - Trial post processing (2134) - Raise `NotImplementedError` for `trial.report` and `trial.should_prune` during multi-objective optimization (2135) - Raise `ValueError` in TPE and CMA-ES if `study` is being used for multi-objective optimization (2136) - Raise `ValueError` if `target` is `None` and `study` is for multi-objective optimization for `get_param_importances`, `BaseImportanceEvaluator.evaluate`, and `plot_param_importances` (2137) - Raise `ValueError` in integration samplers if `study` is being used for multi-objective optimization (2145) - Migrate NSGA2 sampler (2150) - Migrate MOTPE sampler (2167) - Storages to query trial IDs from numbers (2168) Enhancements - Use context manager to treat session correctly (1628) - Integrate multi-objective optimization module for the storages, study, and frozen trial (1994) - Pass `include_package` to AllenNLP for distributed setting (2018) - Change the RDB schema for multi-objective integration (2030) - Update pruning callback for xgboost 1.3 (2078, thanks trivialfis!) - Fix log format for single objective optimization to include best trial (2128) - Implement `Study._is_multi_objective()` to check whether study has multiple objectives (2142, thanks nyanhi!) - `TFKerasPruningCallback` to warn when an evaluation metric does not exist (2156, thanks bigbird555!) - Warn default target name when target is specified (2170) - `Study.trials_dataframe` for multi-objective optimization (2181) Bug Fixes - Make always compute `weights_below` in `MOTPEMultiObjectiveSampler` (1979) - Fix the range of categorical values (1983) - Remove circular reference of study (2079) - Fix flipped colormap in `matplotlib` backend `plot_parallel_coordinate` (2090) - Replace builtin `isnumerical` to capture float values in `plot_contour` (2096, thanks nzw0301!) - Drop unnecessary constraint from upgraded `trial_values` table (2180) Installation - Ignore `tests` directory on install (2015, thanks 130ndim!) - Clean up `setup.py` requirements (2051) - Pin `xgboost<1.3` (2084) - Bump up PyTorch version (2094) Documentation - Update tutorial (1722) - Introduce plotly directive (1944, thanks harupy!) - Check everything by `blackdoc` (1982) - Remove `codecov` from `CONTRIBUTING.md` (2005) - Make the visualization examples deterministic (2022, thanks harupy!) 
- Use plotly directive in `plot_pareto_front` (2025) - Remove plotly scripts and unused generated files (2026) - Add mandarin link to ReadTheDocs layout (2028) - Document about possible duplicate parameter configurations in `GridSampler` (2040) - Fix `MOTPEMultiObjectiveSampler`'s example (2045, thanks norihitoishida!) - Fix Read the Docs build failure caused by `pip install --find-links` (2065) - Fix `lt` symbol (2068, thanks KoyamaSohei!) - Fix parameter section of `RandomSampler` in docs (2071, thanks akihironitta!) - Add note on the behavior of `suggest_float` with `step` argument (2087) - Tune build time of 2076 (2088) - Add `matplotlib.plot_parallel_coordinate` example (2097, thanks nzw0301!) - Add `matplotlib.plot_param_importances` example (2098, thanks nzw0301!) - Add `matplotlib.plot_slice` example (2099, thanks nzw0301!) - Add `matplotlib.plot_contour` example (2100, thanks nzw0301!) - Bump Sphinx up to 3.4.0 (2127) - Additional docs about `optuna.multi_objective` deprecation (2132) - Move type hints to description from signature (2147) - Add copy button to all the code examples (2148) - Fix wrong wording in distributed execution tutorial (2152) Examples - Add MXNet Gluon example (1985) - Update logging in PyTorch Lightning example (2037, thanks pbmstrk!) - Change return type of `training_step` of PyTorch Lightning example (2043) - Fix dead links in `examples/README.md` (2056, thanks nai62!) - Add `enqueue_trial` example (2059) - Skip FastAI v2 example in examples job (2108) - Move `examples/multi_objective/plot_pareto_front.py` to `examples/visualization/plot_pareto_front.py` (2122) - Use latest multi-objective functionality in multi-objective example (2123) - Add haiku and jax simple example (2155, thanks nzw0301!) Tests - Update `parametrize_sampler` of `test_samplers.py` (2020, thanks norihitoishida!) - Change `trail_id + 123` -> `trial_id` (2052) - Fix `scipy==1.6.0` test failure with `LogisticRegression` (2166) Code Fixes - Introduce plotly directive (1944, thanks harupy!) - Stop storing error stack traces in `fail_reason` in trial `system_attr` (1964) - Check everything by blackdoc (1982) - HPI with `_SearchSpaceTransform` (1988) - Fix TODO comment about orders of `dict`s (2007) - Add `__all__` to reexport modules explicitly (2013) - Update `CmaEsSampler`'s warning message (2019, thanks norihitoishida!) - Put up an alias for `structs.StudySummary` against `study.StudySummary` (2029) - Deprecate `optuna.type_checking` module (2032) - Remove `py35` from black config in `pyproject.toml` (2035) - Use model methods instead of `session.query()` (2060) - Use `find_or_raise_by_id` instead of `find_by_id` to raise if a study does not exist (2061) - Organize and remove unused model methods (2062) - Leave a comment about RTD compromise (2066) - Fix ideographic space (2067, thanks KoyamaSohei!) - Make new visualization parameters keyword only (2082) - Use latest APIs in `LightGBMTuner` (2083) - Add `matplotlib.plot_slice` example (2099, thanks nzw0301!) 
- Deprecate previous multi-objective module (2124) - `_run_trial` refactoring (2133) - Cosmetic fix of `xgboost` integration (2143) Continuous Integration - Partial support of python 3.9 (1908) - Check everything by blackdoc (1982) - Avoid `set-env` in GitHub Actions (1992) - PyTorch and AllenNLP (1998) - Remove `checks` from circleci (2004) - Migrate tests and coverage to GitHub Actions (2027) - Enable blackdoc `--diff` option (2031) - Unpin mypy version (2069) - Skip FastAI v2 example in examples job (2108) - Fix CI examples for Py3.6 (2129) Other - Add `tox.ini` (2024) - Allow passing additional arguments when running tox (2054, thanks harupy!) - Add Python 3.9 to README badge (2063) - Clarify that generally pull requests need two or more approvals (2104) - Release wheel package via PyPI (2105) - Adds news entry about the Python 3.9 support (2114) - Add description for tox to `CONTRIBUTING.md` (2159) - Bump up version number to 2.4.0 (2183) - [Backport] Fix the syntax of `pypi-publish.yml` (2188) Thanks to All the Contributors! This release was made possible by authors, and everyone who participated in reviews and discussions. 130ndim, Crissman, HideakiImamura, KoyamaSohei, akihironitta, alexrobomind, bigbird555, c-bata, crcrpar, eytan, g-votte, hal-314, harupy, himkt, hvy, keisuke-umezawa, nai62, norihitoishida, not522, nyanhi, nzw0301, okdshin, pbmstrk, sdaulton, sile, toshihikoyanase, trivialfis, ytknzw, ytsmiling ``` ### 2.3.0 ``` This is the release note of [v2.3.0](https://github.com/optuna/optuna/milestone/29?closed=1). Highlights Multi-objective TPE sampler TPE sampler now supports multi-objective optimization. This new algorithm is implemented in `optuna.multi_objective` and used via`optuna.multi_objective.samplers.MOTPEMultiObjectiveSampler`. See 1530 for the details. ![87849998-c7ba3c00-c927-11ea-8d5b-c7712f77abbe](https://user-images.githubusercontent.com/38826298/98068220-cdb83680-1e9e-11eb-9c6c-90a5a2859804.gif) `LightGBMTunerCV` returns the best booster The best booster of `LightGBMTunerCV` can now be obtained in the same way as the `LightGBMTuner`. See 1609 and 1702 for details. PyTorch Lightning v1.0 support The integration with PyTorch Lightning v1.0 is available. The pruning feature of Optuna can be used with the new version of PyTorch Lightning using `optuna.integration.PyTorchLightningPruningCallback`. See 597 and 1926 for details. RAPIDS + Optuna example An example to illustrate how to use [RAPIDS](https://rapids.ai/) with Optuna is available. You can use this example to harness the computational power of the GPU along with Optuna. New Features - Introduce Multi-objective TPE to `optuna.multi_objective.samplers` (1530, thanks y0z!) - Return `LGBMTunerCV` booster (1702, thanks nyanhi!) - Implement `plot_intermediate_values` and `_get_intermediate_plot` with Matplotlib backend (1762, thanks ytknzw!) - Implement `plot_optimization_history` and `_get_optimization_history_plot` with Matplotlib backend (1763, thanks ytknzw!) - Implement `plot_parallel_coordinate` and `_get_parallel_coordinate_plot` with Matplotlib backend (1764, thanks ytknzw!) - Improve MLflow callback functionality: allow nesting, and attached study attrs (1918, thanks drobison00!) Enhancements - Copy datasets before objective evaluation (1805) - Fix 'Mean of empty slice' warning (1927, thanks carefree0910!) 
- Add `reseed_rng` to `NSGAIIMultiObjectiveSampler` (1938) - Add RDB support to `MoTPEMultiObjectiveSampler` (1978) Bug Fixes - Add some jitters in `_MultivariateParzenEstimators` (1923, thanks kstoneriv3!) - Fix `plot_contour` (1929, thanks carefree0910!) - Fix return type of the multivariate TPE samplers (1955, thanks nzw0301!) - Fix `StudyDirection` of `mape` in `LightGBMTuner` (1966) Documentation - Add explanation for most module-level reference pages (1850, thanks tktran!) - Revert module directives (1873) - Remove `with_trace` method from docs (1882, thanks i-am-jeetu!) - Add CuPy to projects using Optuna (1889) - Add more sphinx doc comments (1894, thanks yuk1ty!) - Fix a broken link in `matplotlib.plot_edf` (1899) - Fix broken links in `README.md` (1901) - Show module paths in `optuna.visualization` and `optuna.multi_objective.visualization` (1902) - Add a short description to the example in FAQ (1903) - Embed `plot_edf` figure in documentation by using matplotlib plot directive (1905, thanks harupy!) - Fix plotly figure iframe paths (1906, thanks harupy!) - Update docstring of `CmaEsSampler` (1909) - Add `matplotlib.plot_intermediate_values` figure to doc (1933, thanks harupy!) - Add `matplotlib.plot_optimization_history` figure to doc (1934, thanks harupy!) - Make code example of `MOTPEMultiObjectiveSampler` executable (1953) - Add `Raises` comments to samplers (1965, thanks yuk1ty!) Examples - Make src comments more descriptive in `examples/pytorch_lightning_simple.py` (1878, thanks iamshnoo!) - Add an external project in Optuna examples (1888, thanks resnant!) - Add RAPIDS + Optuna simple example (1924, thanks Nanthini10!) - Apply follow-up of 1924 (1960) Tests - Fix RDB test to avoid deadlock when creating study (1919) - Add a test to verify `nest_trials` for `MLflowCallback` works properly (1932, thanks harupy!) - Add a test to verify `tag_study_user_attrs` for `MLflowCallback` works properly (1935, thanks harupy!) Code Fixes - Fix typo (1900) - Refactor `Study.optimize` (1904) - Refactor `Study.trials_dataframe` (1907) - Add variable annotation to `optuna/logging.py` (1920, thanks akihironitta!) - Fix duplicate stack traces (1921, thanks akihironitta!) - Remove `_log_normal_cdf` (1922, thanks kstoneriv3!) - Convert comment style type hints (1950, thanks akihironitta!) - Align the usage of type hints and instantiation of dictionaries (1956, thanks akihironitta!) Continuous Integration - Run documentation build and doctest in GitHub Actions (1891) - Resolve conflict of `job-id` of GitHub Actions workflows (1898) - Pin `mypy==0.782` (1913) - Run `allennlp_jsonnet.py` on GitHub Actions (1915) - Fix for PyTorch Lightning 1.0 (1926) - Check blackdoc in CI (1958) - Fix path for `store_artifacts` step in `document` CircleCI job (1962, thanks harupy!) Other - Fix how to check the format, coding style, and type hints (1755) - Fix typo (1968, thanks nzw0301!) Thanks to All the Contributors! This release was made possible by authors, and everyone who participated in reviews and discussions. Crissman, HideakiImamura, Nanthini10, akihironitta, c-bata, carefree0910, crcrpar, drobison00, harupy, himkt, hvy, i-am-jeetu, iamshnoo, keisuke-umezawa, kstoneriv3, nyanhi, nzw0301, resnant, sile, smly, tktran, toshihikoyanase, y0z, ytknzw, yuk1ty ``` ### 2.2.0 ``` This is the release note of [v2.2.0](https://github.com/optuna/optuna/milestone/28?closed=1). In this release, we drop support for Python 3.5. 
If you are using Python 3.5, please consider upgrading your Python environment to Python 3.6 or newer, or install older versions of Optuna. Highlights Multivariate TPE sampler `TPESampler` is updated with an experimental option to enable multivariate sampling. This algorithm captures dependencies among hyperparameters better than the previous algorithm. See 1767 for more details. <img src="https://user-images.githubusercontent.com/3255979/95030825-40da5b80-06ed-11eb-84b1-fcc24dc1b70a.gif" width="480"> <!-- ![density_ratio](https://user-images.githubusercontent.com/3255979/95030825-40da5b80-06ed-11eb-84b1-fcc24dc1b70a.gif) --> <img src="https://user-images.githubusercontent.com/3255979/95030841-58b1df80-06ed-11eb-8e3c-a74e3687c78f.png" width="480"> <!-- ![92350529-3f306e80-f114-11ea-8782-36e463c19320](https://user-images.githubusercontent.com/3255979/95030841-58b1df80-06ed-11eb-8e3c-a74e3687c78f.png) --> Improved AllenNLP support `AllenNLPExecutor` supports pruning. It is introduced in the official [hyperparameter search guide](https://guide.allennlp.org/hyperparameter-optimization) by AllenNLP. Both `AllenNLPExecutor` and the guide were written by himkt. See #1772. ![allennlp-executor-jsonnet4](https://user-images.githubusercontent.com/3255979/95030850-66676500-06ed-11eb-9f95-1f8862510c8e.png) New Features - Create `optuna.visualization.matplotlib` (1756, thanks ytknzw!) - Add multivariate TPE sampler (1767, thanks kstoneriv3!) - Support `AllenNLPPruningCallback` for `AllenNLPExecutor` (1772) Enhancements - `KerasPruningCallback` to warn when an evaluation metric does not exist (1759, thanks bigbird555!) - Implement `plot_edf` and `_get_edf_plot` with Matplotlib backend (1760, thanks ytknzw!) - Fix exception chaining all over the codebase (1781, thanks akihironitta!) - Add metric alias of rmse for `LightGBMTuner` (1807, thanks upura!) - Update PyTorch-Lighting minor version (1813, thanks nzw0301!) - Improve `TensorBoardCallback` (1814, thanks sfujiwara!) - Add metric alias for `LightGBMTuner` (1822, thanks nyanhi!) - Introduce a new argument to plot all evaluation points by `optuna.multi_objective.visualization.plot_pareto_front` (1824, thanks nzw0301!) - Add `reseed_rng` to `RandomMultiobjectiveSampler` (1831, thanks y0z!) Bug Fixes - Fix fANOVA for `IntLogUniformDistribution` (1788) - Fix `mypy` in an environment where some dependencies are installed (1804) - Fix `WFG._compute()` (1812, thanks y0z!) - Fix contour plot error for categorical distributions (1819, thanks zchenry!) - Store CMAES optimizer after splitting into substrings (1833) - Add maximize support on `CmaEsSampler` (1849) - Add `matplotlib` directory to `optuna.visualization.__init__.py` (1867) Installation - Update `setup.py` to drop Python 3.5 support (1818, thanks harupy!) - Add Matplotlib to `setup.py` (1829, thanks ytknzw!) Documentation - Fix `plot_pareto_front` preview path (1808) - Fix indents of the example of `multi_objective.visualization.plot_pareto_front` (1815, thanks nzw0301!) - Hide `__init__` from docs (1820, thanks upura!) - Explicitly omit Python 3.5 from `README.md` (1825) - Follow-up 1832: alphabetical naming and fixes (1841) - Mention `isort` in the contribution guidelines (1842) - Add news sections about introduction of `isort` (1843) - Add `visualization.matpltlib` to docs (1847) - Add sphinx doc comments regarding exceptions in the optimize method (1857, thanks yuk1ty!) 
- Avoid global study in `Study.stop` testcode (1861) - Fix documents of `visualization.is_available` (1869) - Improve `ThresholdPruner` example (1876, thanks fsmosca!) - Add logging levels to `optuna.logging.set_verbosity` (1884, thanks nzw0301!) Examples - Add XGBoost cross-validation example (1836, thanks sskarkhanis!) - Minor code fix of XGBoost examples (1844) Code Fixes - Add default implementation of `get_n_trials` (1568) - Introduce `isort` to automatically sort import statements (1695, thanks harupy!) - Avoid using experimental decorator on `CmaEsSampler` (1777) - Remove `logger` member attributes from `PyCmaSampler` and `CmaEsSampler` (1784) - Apply `blackdoc` (1817) - Remove TODO (1821, thanks sfujiwara!) - Fix Redis example code (1826) - Apply `isort` to `visualization/matplotlib/` and `multi_objective/visualization` (1830) - Move away from `.scoring` imports (1864, thanks norihitoishida!) - Add experimental decorator to `matplotlib.*` (1868) Continuous Integration - Disable `--cache-from` if trigger of docker image build is `release` (1791) - Remove Python 3.5 from CI checks (1810, thanks harupy!) - Update python version in docs (1816, thanks harupy!) - Migrate `checks` to GitHub Actions (1838) - Add option `--diff` to black (1840) Thanks to All the Contributors! This release was made possible by authors, and everyone who participated in reviews and discussions. HideakiImamura, akihironitta, bigbird555, c-bata, crcrpar, fsmosca, g-votte, harupy, himkt, hvy, keisuke-umezawa, kstoneriv3, norihitoishida, nyanhi, nzw0301, sfujiwara, sile, sskarkhanis, toshihikoyanase, upura, y0z, ytknzw, yuk1ty, zchenry ``` ### 2.1.0 ``` This is the release note of [v2.1.0](https://github.com/optuna/optuna/milestone/27?closed=1). *Optuna v2.1.0 will be the last version to support Python 3.5. See 1067.* Highlights Allowing `objective(study.best_trial)` `FrozenTrial` used to subclass `object` but now implements `BaseTrial`. It can be used in places where a `Trial` is expected, including user-defined objective functions. Re-evaluating the objective functions with the best parameter configuration is now straight forward. See 1503 for more details. python study.optimize(objective, n_trials=100) best_trial = study.best_trial best_value = objective(best_trial) Did not work prior to v2.1.0. IPOP-CMA-ES Sampling Algorithm `CmaEsSampler` comes with an experimental option to switch to IPOP-CMA-ES. This algorithm restarts the strategy with an increased population size after premature convergence, allowing a more explorative search. See 1548 for more details. ![image](https://user-images.githubusercontent.com/38826298/92208970-2aab6680-eec7-11ea-863e-a30e18a79f25.png) *Comparing the new option with the previous `CmaEsSampler` and `RandomSampler`.* Optuna & MLFlow on Kubernetes Example Optuna can be easily integrated with MLFlow on Kubernetes clusters. The example contained [here](https://github.com/optuna/optuna/tree/master/examples/kubernetes) is a great introduction to get you started with a few lines of commands. See #1464 for more details. Providing Type Hinting to Applications Type hint information is packaged following [PEP 561](https://www.python.org/dev/peps/pep-0561/). Users of Optuna can now run style checkers against the framework. Note that the applications which ignore missing imports may raise new type-check errors due to this change. See #1720 for more details. Breaking Changes Configuration files for `AllenNLPExecutor` may need to be updated. See 1544 for more details. 
- Remove `allennlp.common.params.infer_and_cast` from AllenNLP integrations (1544) - Deprecate `optuna.integration.KerasPruningCallback` (1670, thanks VamshiTeja!) - Make Optuna PEP 561 Compliant (1720, thanks MarioIshac!) New Features - Add sampling functions to `FrozenTrial` (1503, thanks nzw0301!) - Add modules to compute hypervolume (1537) - Add IPOP-CMA-ES support in `CmaEsSampler` (1548) - Implement skorch pruning callback (1668) Enhancements - Make sampling from trunc-norm efficient in `TPESampler` (1562) - Add trials to cache when awaking `WAITING` trials in `_CachedStorage` (1570) - Add log in `create_new_study` method of storage classes (1629, thanks tohmae!) - Add border to markers in contour plot (1691, thanks zchenry!) - Implement hypervolume calculator for two-dimensional space (1771) Bug Fixes - Avoid to sample the value which equals to upper bound (1558) - Exit thread after session is destroyed (1676, thanks KoyamaSohei!) - Disable `feature_pre_filter` in `LightGBMTuner` (1774) - Fix fANOVA for `IntLogUniformDistribution` (1790) Installation - Add `packaging` in `install_requires` (1551) - Fix failure of Keras integration due to TF2.3 (1563) - Install `fsspec<0.8.0` for Python 3.5 (1596) - Specify the version of `packaging` to `>= 20.0` (1599, thanks Isa-rentacs!) - Install `lightgbm<3.0.0` to circumvent error with `feature_pre_filter` (1773) Documentation - Fix link to the definition of `StudySummary` (1533, thanks nzw0301!) - Update log format in docs (1538) - Integrate Sphinx Gallery to make tutorials easily downloadable (1543) - Add AllenNLP pruner to list of pruners in tutorial (1545) - Refine the help of `study-name` (1565, thanks belldandyxtq!) - Simplify contribution guidelines by removing rule about PR title naming (1567) - Remove license section from `README.md` (1573) - Update key features (1582) - Simplify documentation of `BaseDistribution.single` (1593) - Add navigation links for contributors to `README.md` (1597) - Apply minor changes to `CONTRIBUTING.md` (1601) - Add list of projects using Optuna to `examples/README.md` (1605) - Add a news section to `README.md` (1606) - Avoid the latest stable `sphinx` (1613) - Add link to examples in tutorial (1625) - Add the description of default pruner (`MedianPruner`) to the documentation (1657, thanks Chillee!) - Remove generated directories with `make clean` (1658) - Delete a useless auto generated directory (1708) - Divide a section for each integration repository (1709) - Add example to `optuna.study.create_study` (1711, thanks Ruketa!) - Add example to `optuna.study.load_study` (1712, thanks bigbird555!) - Fix broken doctest example code (1713) - Add some notes and usage example for the hypervolume computing module (1715) - Fix issue where doctests are not executed (1723, thanks harupy!) - Add example to `optuna.study.Study.optimize` (1726, thanks norihitoishida!) - Add target for doctest to `Makefile` (1732, thanks harupy!) - Add example to `optuna.study.delete_study` (1741, thanks norihitoishida!) - Add example to `optuna.study.get_all_study_summaries` (1742, thanks norihitoishida!) - Add example to `optuna.study.Study.set_user_attr` (1744, thanks norihitoishida!) - Add example to `optuna.study.Study.user_attrs` (1745, thanks norihitoishida!) - Add example to `optuna.study.Study.get_trials` (1746, thanks norihitoishida!) - Add example to `optuna.multi_objective.study.MultiObjectiveStudy.optimize` (1747, thanks norihitoishida!) 
- Add explanation for `optuna.trial` (1748) - Add example to `optuna.multi_objective.study.create_study` (1749, thanks norihitoishida!) - Add example to `optuna.multi_objective.study.load_study` (1750, thanks norihitoishida!) - Add example to `optuna.study.Study.stop` (1752, thanks Ruketa!) - Re-generate contour plot example with padding (1758) Examples - Add an example of Kubernetes, PyTorchLightning, and MLflow (1464) - Create study before multiple workers are launched in Kubernetes MLflow example (1536) - Fix typo in `examples/kubernetes/mlflow/README.md` (1540) - Reduce search space for AllenNLP example (1542) - Introduce `plot_param_importances` in example (1555) - Removing references to deprecated `optuna study optimize` commands from examples (1566, thanks ritvik1512!) - Add scripts to run `examples/kubernetes/*` (1584, thanks VamshiTeja!) - Update Kubernetes example of "simple" to avoid potential errors (1600, thanks Nishikoh!) - Implement `skorch` pruning callback (1668) - Add a `tf.keras` example (1681, thanks sfujiwara!) - Update `examples/pytorch_simple.py` (1725, thanks wangxin0716!) - Fix Binh and Korn function in MO example (1757) Tests - Test `_CachedStorage` in `test_study.py` (1575) - Rename `tests/multi_objective` as `tests/multi_objective_tests` (1586) - Do not use deprecated `pytorch_lightning.data_loader` decorator (1667) - Add test for hypervolume computation for solution sets with duplicate points (1731) Code Fixes - Match the order of definition in `trial` (1528, thanks nzw0301!) - Add type hints to storage (1556) - Add trials to cache when awaking `WAITING` trials in `_CachedStorage` (1570) - Use `packaging` to check the library version (1610, thanks VamshiTeja!) - Fix import order of `packaging.version` (1623) - Refactor TPE's `sample_from_categorical_dist` (1630) - Fix error messages in `TPESampler` (1631, thanks kstoneriv3!) - Add code comment about `n_ei_candidates` for categorical parameters (1637) - Add type hints into `optuna/integration/keras.py` (1642, thanks airyou!) - Fix how to use `black` in `CONTRIBUTING.md` (1646) 1- Add type hints into `optuna/cli.py` (1648, thanks airyou!) - Add type hints into `optuna/dashboard.py`, `optuna/integration/__init__.py` (1653, thanks airyou!) - Add type hints `optuna/integration/_lightgbm_tuner` (1655, thanks upura!) - Fix LightGBM Tuner import code (1659) - Add type hints to `optuna/storages/__init__.py` (1661, thanks akihironitta!) - Add type hints to `optuna/trial` (1662, thanks upura!) - Enable flake8 E231 (1663, thanks harupy!) - Add type hints to `optuna/testing` (1665, thanks upura!) - Add type hints to `tests/storages_tests/rdb_tests` (1666, thanks akihironitta!) - Add type hints to `optuna/samplers` (1673, thanks akihironitta!) - Fix type hint of `optuna.samplers._random` (1678, thanks nyanhi!) - Add type hints into `optuna/integration/mxnet.py` (1679, thanks norihitoishida!) - Fix type hint of `optuna/pruners/_nop.py` (1680, thanks Ruketa!) - Update Type Hints: `prunes/_percentile.py` and `prunes/_median.py` (1682, thanks ytknzw!) - Fix incorrect type annotations for `args` and `kwargs` (1684, thanks harupy!) - Update type hints in `optuna/pruners/_base.py` and `optuna/pruners/_successive_halving.py` (1685, thanks ytknzw!) - Add type hints to `test_optimization_history.py` (1686, thanks yosupo06!) - Fix type hint of `tests/pruners_tests/test_median.py` (1687, thanks polyomino-24!) - Type hint and reformat of files under `visualization_tests` (1689, thanks gasin!) 
- Remove unused argument `trial` from `optuna.samplers._tpe.sampler._get_observation_pairs` (1692, thanks ytknzw!) - Add type hints into `optuna/integration/chainer.py` (1693, thanks norihitoishida!) - Add type hints to `optuna/integration/tensorflow.py` (1698, thanks uenoku!) - Add type hints into `optuna/integration/chainermn.py` (1699, thanks norihitoishida!) - Add type hints to `optuna/integration/xgboost.py` (1700, thanks Ruketa!) - Add type hints to files under `tests/integration_tests` (1701, thanks gasin!) - Use `Optional` for keyword arguments that default to `None` (1703, thanks harupy!) - Fix type hint of all the rest files under `tests/` (1704, thanks gasin!) - Fix type hint of `optuna/integration` (1705, thanks akihironitta!) - Add l2 metric aliases to `LightGBMTuner` (1717, thanks thigm85!) - Convert type comments in `optuna/study.py` into type annotations (1724, thanks harupy!) - Apply `black==20.8b1` (1730) - Fix type hint of `optuna/integration/sklearn.py` (1735, thanks akihironitta!) - Add type hints into `optuna/structs.py` (1743, thanks norihitoishida!) - Fix typo in `optuna/samplers/_tpe/parzen_estimator.py` (1754, thanks akihironitta!) Continuous Integration - Temporarily skip `allennlp_jsonnet.py` example in CI (1527) - Run TensorFlow on Python 3.8 (1564) - Bump PyTorch to 1.6 (1572) - Skip entire `allennlp` example directory in CI (1585) - Use `actions/setup-pythonv2` (1594) - Add `cache` to GitHub Actions Workflows (1595) - Run example after docker build to ensure that built image is setup properly (1635, thanks harupy!) - Use cache-from to build docker image faster (1638, thanks harupy!) - Fix issue where doctests are not executed (1723, thanks harupy!) Other - Remove Swig installation from Dockerfile (1462) - Add: How to run examples with our Docker images (1554) - GitHub Action labeler (1591) - Do not trigger labeler on push (1624) - Fix invalid YAML syntax (1626) - Pin `sphinx` version to `3.0.4` (1627, thanks harupy!) - Add `.dockerignore` (1633, thanks harupy!) - Fix how to use `black` in `CONTRIBUTING.md` (1646) - Add `pyproject.toml` for easier use of black (1649) - Fix `docs/Makefile` (1650) - Ignore vscode configs (1660) - Make Optuna PEP 561 Compliant (1720, thanks MarioIshac!) ``` ### 2.0.0 ``` This is the release note of [v2.0.0](https://github.com/optuna/optuna/milestone/26?closed=1). Highlights The second major version of Optuna 2.0 is released. It accommodates a multitude of new features, including Hyperband pruning, hyperparameter importance, built-in CMA-ES support, grid sampler, and LightGBM integration. Storage access is also improved, significantly speeding up optimization. Documentation has been revised and navigation is made easier. See the [blog](https://medium.com/vincent_44453/optuna-v2-3165e3f1fc2) for details. Hyperband Pruner The stable version of `HyperbandPruner` is available with a simpler interface and improved performance. <img src="https://user-images.githubusercontent.com/5983694/88739399-6be88200-d175-11ea-9985-ce8c71d9538f.png" width="540px"> Hyperparameter Importance The stable version of the hyperparameter importance module is available. - Our implementation of fANOVA, `FanovaImportanceEvaluator` is now the default importance evaluator. This replaces the previous requirement for [`fanova`](https://github.com/automl/fanova) with scikit-learn. - A new importance visualization function `visualization.plot_param_importances`. 
![image7](https://user-images.githubusercontent.com/5983694/88739415-7440bd00-d175-11ea-829f-5dc2aee57e9d.png) Built-in CMA-ES Sampler The stable version of `CmaEsSampler` is available. This new `CmaEsSampler` can be used with pruning for major performance improvements. Grid Sampler The stable version of `GridSampler` is available through an intuitive interface for users familiar with Optuna. When the entire grid is exhausted, the optimization stops automatically, so you can specify `n_trials=None`. LightGBM Tuner The stable version of `LightGBMTuner` is available. The behavior regarding `verbosity` option has been improved. The random seed was fixed unexpectedly if the verbosity level is not zero, but now the user given seed is used correctly. Experimental Features - New integration modules: TensorBoard integration, Catalyst integration, and AllenNLP pruning integration are available as experimental. - A new visualization function for multi-objective optimization: `multi_objective.visualization.plot_pareto_front` is available as an experimental feature. - New methods to manually create/add trials: `trial.create_trial` and `study.Study.add_trial` are available as experimental features. Breaking Changes Several deprecated features (e.g., `Study.study_id` and `Trial.trial_id`) are removed. See 1346 for details. - Remove deprecated features in `optuna.trial` (1371) - Remove deprecated arguments from `LightGBMTuner` (1374) - Remove deprecated features in `integration/chainermn.py` (1375) - Remove deprecated features in `optuna/structs.py` (1377) - Remove deprecated features in `optuna/study.py` (1379) Several features are deprecated. - Deprecate `optuna study optimize` command (1384) - Deprecate `step` argument in `IntLogUniformDistribution` (1387, thanks nzw0301!) Other. - `BaseStorage.set_trial_param` to return `None` instead of `bool` (1327) - Match `suggest_float` and `suggest_int` specifications on `step` and `log` arguments (1329) - `BaseStorage.set_trial_intermediate_valute` to return `None` instead of `bool` (1337) - Make `optuna.integration.lightgbm_tuner` private (1378) - Fix pruner index handling to 0-indexing (1430, thanks bigbird555!) - Continue to allow using `IntLogUnioformDistribution.step` during deprecation (1438) - Align `LightGBMTuner` verbosity level to the original LightGBM (1504) New Features - Add snippet of API for integration with Catalyst (1056, thanks VladSkripniuk!) - Add pruned trials to trials being considered in `CmaEsSampler` (1229) - Add pruned trials to trials being considered in `SkoptSampler` (1431) - Add TensorBoard integration (1244, thanks VladSkripniuk!) 
- Add deprecation decorator (1382) - Add `plot_pareto_front` function (1303) - Remove experimental decorator from `HyperbandPruner` (1435) - Remove experimental decorators from hyperparameter importance (HPI) features (1440) - Remove experimental decorator from `Study.stop` (1450) - Remove experimental decorator from `GridSampler` (1451) - Remove experimental decorators from `LightGBMTuner` (1452) - Introducing `optuna.visualization.plot_param_importances` (1299) - Rename `integration/CmaEsSampler` to `integration/PyCmaSampler` (1325) - Match `suggest_float` and `suggest_int` specifications on `step` and `log` arguments (1329) - `optuna.create_trial` and `Study.add_trial` to create custom studies (1335) - Allow omitting the removal version in `deprecated` (1418) - Mark `CatalystPruningCallback` integration as experimental (1465) - Followup TensorBoard integration (1475) - Implement a pruning callback for AllenNLP (1399) - Remove experimental decorator from HPI visualization (1477) - Add `optuna.visualization.plot_edf` function (1482) - `FanovaImportanceEvaluator` as default importance evaluator (1491) - Reduce HPI variance with default args (1492) Enhancements - Support automatic stop of `GridSampler` (1026) - Implement fANOVA using `sklearn` instead of `fanova` (1106) - Add a caching mechanism to make `NSGAIIMultiObjectiveSampler` faster (1257) - Add `log` argument support for `suggest_int` of skopt integration (1277, thanks nzw0301!) - Add `read_trials_from_remote_storage` method to Storage implementations (1298) - Implement `log` argument for `suggest_int` of pycma integration (1302) - Raise `ImportError` if `bokeh` version is 2.0.0 or newer (1326) - Fix the x-axis title of the hyperparameter importances plot (1336, thanks harupy!) - `BaseStorage.set_trial_intermediate_valute` to return `None` instead of `bool` (1337) - Simplify log messages (1345) - Improve layout of `plot_param_importances` figure (1355) - Do not run the GC after every trial by default (1380) - Skip storage access if logging is disabled (1403) - Specify `stacklevel` for `warnings.warn` for more helpful warning message (1419, thanks harupy!) - Replace `DeprecationWarning` with `FutureWarning` in `deprecated` (1428) - Fix pruner index handling to 0-indexing (1430, thanks bigbird555!) - Load environment variables in `AllenNLPExecutor` (1449) - Stop overwriting seed in `LightGBMTuner` (1461) - Suppress progress bar of `LightGBMTuner` if `verbosity` == 1 (1460) - RDB storage to do eager backref "join"s when fetching all trials (1501) - Overwrite intermediate values correctly (1517) - Overwrite parameters correctly (1518) - Always cast choices into tuple in `CategoricalDistribution` (1520) Bug Fixes RDB Storage Bugs on Distributed Optimization are Fixed Several critical bugs are addressed in this release with the RDB storage, most related to distributed optimization. - Fix CMA-ES boundary constraints and initial mean vector of LogUniformDistribution (1243) - Temporary hotfix for `sphinx` update breaking existing type annotations (1342) - Fix for PyTorch Lightning v0.8.0 (1392) - Fix exception handling in `ChainerMNStudy.optimize` (1406) - Use `step` to calculate range of `IntUniformDistribution` in `PyCmaSampler` (1456) - Avoid exploding queries with large exclusion sets (1467) - Temporary fix for problem with length limit of 5000 in MLflow (1481, thanks PhilipMay!) 
- Fix race condition for trial number computation (1490) - Fix `CachedStorage` skipping trial param row insertion on cache miss (1498) - Fix `_CachedStorage` and `RDBStorage` distribution compatibility check race condition (1506) - Fix frequent deadlock caused by conditional locks (1514) Installation - [Backport] Add `packaging` in install_requires (1561) - Set `python_requires` in `setup.py` to clarify supported Python version (1350, thanks harupy!) - Specify `classifiers` in setup.py (1358) - Hotfix to avoid latest `keras` 2.4.0 (1386) - Hotfix to avoid PyTorch Lightning 0.8.0 (1391) - Relax `sphinx` version (1393) - Update version constraints of `cmaes` (1404) - Align `sphinx-rtd-theme` and Python versions used on Read the Docs to CircleCI (1434, thanks harupy!) - Remove checking and alerting installation `pfnopt` (1474) - Avoid latest `sphinx` (1485) - Add `packaging` in install_requires (1561) Documentation - Fix experimental decorator (1248, thanks harupy!) - Create a documentation for the root namespace `optuna` (1278) - Add missing documentation for `BaseStorage.set_trial_param` (1316) - Fix documented exception type in `BaseStorage.get_best_trial` and add unit tests (1317) - Add hyperlinks to key features (1331) - Add `.readthedocs.yml` to use the same document dependencies on the CI and Read the Docs (1354, thanks harupy!) - Use `Colab` to demonstrate a notebook instead of `nbviewer` (1360) - Hotfix to allow building the docs by avoiding latest `sphinx` (1369) - Update layout and color of docs (1370) - Add FAQ section about OOM (1385) - Rename a title of reference to a module name (1390) - Add a list of functions and classes for each module in reference doc (1400) - Use `.. warning::` instead of `.. note::` for the deprecation decorator (1407) - Always use Sphinx RTD theme (1414) - Fix color of version/build in documentation sidebar (1415) - Use a different font color for argument names (1436, thanks harupy!) - Move css from `_templates/footer.html` to `_static/css/custom.css` (1439) - Add missing commas in FAQ (1458) - Apply auto-formatting to `custom.css` to make it pretty and consistent (1463, thanks harupy!) - Update `CONTRIBUTING.md` (1466) - Add missing `CatalystPruningCallback` in the documentation (1468, thanks harupy!) - Fix incorrect type annotations for `catch` (1473, thanks harupy!) - Fix double `FrozenTrial` (1478) - Wider main content container in the documentation (1483) - Add `TensorBoardCallback` to docs (1486) - Add description about zero-based numbering of `step` (1489) - Add links to examples from the integration references (1507) - Fix broken link in `plot_edf` (1510) - Update docs of default importance evaluator (1524) Examples - Set `timeout` for relatively long-running examples (1349) - Fix broken link to example and add README for AllenNLP examples (1397) - Add whitespace before opening parenthesis (1398) - Fix GPU run for PyTorch Ignite and Lightning examples (1444, thanks arisliang!) - Add Stable-Baselines3 RL Example (1420, thanks araffin!) - Replace `suggest_*uniform` in examples with `suggest_(int|float)` (1470) Tests - Fix `plot_param_importances` test (1328) - Fix incorrect test names in `test_experimental.py` (1332, thanks harupy!) 
- Simplify decorator tests (1423) - Add a test for `CmaEsSampler._get_trials()` (1433) - Use argument of `pytorch_lightning.Trainer` to disable `checkpoint_callback` (1453) - Install RDB servers and their bindings for storage tests (1497) - Upgrade versions of `pytorch` and `torchvision` (1502) - Make HPI tests deterministic (1505) Code Fixes - Introduces `optuna._imports.try_import` to DRY optional imports (1315) - Friendlier error message for unsupported `plotly` versions (1338) - Rename private modules in `optuna.visualization` (1359) - Rename private modules in `optuna.pruners` (1361) - Rename private modules in `optuna.samplers` (1362) - Change `logger` to `_trial`'s module variable (1363) - Remove deprecated features in `HyperbandPruner` (1366) - Add missing `__init__.py` files (1367, thanks harupy!) - Fix double quotes from Black formatting (1372) - Rename private modules in `optuna.storages` (1373) - Add a list of functions and classes for each module in reference doc (1400) - Apply deprecation decorator (1413) - Remove unnecessary exception handling for `GridSampler` (1416) - Remove either `warnings.warn()` or `optuna.logging.Logger.warning()` from codes which have both of them (1421) - Simplify usage of `deprecated` by omitting removed version (1422) - Apply experimental decorator (1424) - Fix the experimental warning message for `CmaEsSampler` (1432) - Remove `optuna.structs` from MLflow integration (1437) - Add type hints to `slice.py` (1267, thanks bigbird555!) - Add type hints to `intermediate_values.py` (1268, thanks bigbird555!) - Add type hints to `optimization_history.py` (1269, thanks bigbird555!) - Add type hints to `utils.py` (1270, thanks bigbird555!) - Add type hints to `test_logging.py` (1284, thanks bigbird555!) - Add type hints to `test_chainer.py` (1286, thanks bigbird555!) - Add type hints to `test_keras.py` (1287, thanks bigbird555!) - Add type hints to `test_cma.py` (1288, thanks bigbird555!) - Add type hints to `test_fastai.py` (1289, thanks bigbird555!) - Add type hints to `test_integration.py` (1293, thanks bigbird555!) - Add type hints to `test_mlflow.py` (1322, thanks bigbird555!) - Add type hints to `test_mxnet.py` (1323, thanks bigbird555!) - Add type hints to `optimize.py` (1364, thanks bigbird555!) - Replace `suggest_*uniform` in examples with `suggest_(int|float)` (1470) - Add type hints to `distributions.py` (1513) - Remove unnecessary `FloatingPointDistributionType` (1516) Continuous Integration - Add a step to push images to Docker Hub (1295) - Check code coverage in `tests-python37` on CircleCI (1348) - Stop building Docker images in Pull Requests (1389) - Prevent `doc-link` from running on unrelated status update events (1410, thanks harupy!) - Avoid latest `ConfigSpace` where Python 3.5 is dropped (1471) - Run unit tests on GitHub Actions (1352) - Use `circleci/python` for dev image and install RDB servers (1495) - Install RDB servers and their bindings for storage tests (1497) - Fix `dockerimage.yml` format (1511) - Revert 1495 and 1511 (1512) - Run daily unit tests (1515) Other - Add TestPyPI release to workflow (1291) - Add PyPI release to workflow (1306) - Exempt issues with `no-stale` label from stale bot (1321) - Remove stale labels from Issues or PRs when they are updated or commented on (1409) - Exempt PRs with `no-stale` label from stale bot (1427) - Update the documentation section in `CONTRIBUTING.md` (1469, thanks harupy!) 
- Bump up version to `2.0.0` (1525) ``` ### 2.0.0rc0 ``` A release candidate for the second major version of Optuna [v2.0.0-rc0](https://github.com/optuna/optuna/milestone/25?closed=1) is released! This release includes a lot of new features, cleaned up interfaces, performance improvements, internal refactorings and more. If you find any problems with this release candidate, please feel free to report them via GitHub Issues or Gitter. Highlights Hyperband Pruner The stable version of `HyperbandPruner` is available. It has a more simple interface and has seen performance improvement. Hyperparameter Importance The stable version of the hyperparameter importance module is available. - Our own implemented fANOVA, `FanovaImportanceEvaluator`. While the previous implementation required [`fanova`](https://github.com/automl/fanova), this new `FanovaImportanceEvaluator` can be used with only scikit-learn. - A new importance visualization function `visualization.plot_param_importances`. Built-in CMA-ES Sampler The stable version of `CmaEsSampler` is available. This new `CmaEsSampler` can be used with pruning, one of the Optuna’s important features, for great performance improvements. Grid Sampler The stable version of `GridSampler` is available and can be through an intuitive interface for users familiar with Optuna. When the entire grid is exhausted, the optimization also automatically stops so you can specify `n_trials=None`. LightGBM Tuner The stable version of `LightGBMTuner` is available. The behavior regarding `verbosity` option has been improved. The random seed was fixed unexpectedly if the verbosity level is not 0, but now the user given seed is used correctly. Experimental Features - New integration modules: TensorBoard integration and Catalyst integration are available as experimental. - A new visualization function for multi-objective optimization: `multi_objective.visualization.plot_pareto_front` is available as an experimental feature. - New methods to manually create/add trials: `trial.create_trial` and `study.Study.add_trial` are available as experimental features. Breaking Changes Several deprecated features (e.g., `Study.study_id` and `Trial.trial_id`) are removed. See 1346 for details. - Remove deprecated features in `optuna.trial`. (1371) - Remove deprecated arguments from `LightGBMTuner`. (1374) - Remove deprecated features in `integration/chainermn.py`. (1375) - Remove deprecated features in `optuna/structs.py`. (1377) - Remove deprecated features in `optuna/study.py`. (1379) Several features are deprecated. - Deprecate `optuna study optimize` command. (1384) - Deprecate `step` argument in `IntLogUniformDistribution`. (1387, thanks nzw0301!) Other. - `BaseStorage.set_trial_param` to return `None` instead of `bool`. (1327) - Match `suggest_float` and `suggest_int` specifications on `step` and `log` arguments. (1329) - `BaseStorage.set_trial_intermediate_valute` to return `None` instead of `bool`. (1337) - Make `optuna.integration.lightgbm_tuner` private. (1378) - Fix pruner index handling to 0-indexing. (1430, thanks bigbird555!) - Continue to allow using `IntLogUnioformDistribution.step` during deprecation. (1438) New Features - Add snippet of API for integration with Catalyst. (1056, thanks VladSkripniuk!) - Add pruned trials to trials being considered in `CmaEsSampler`. (1229) - Add pruned trials to trials being considered in `SkoptSampler`. (1431) - Add TensorBoard integration. (1244, thanks VladSkripniuk!) - Add deprecation decorator. 
(1382) - Add `plot_pareto_front` function. (1303) - Remove experimental decorator from `HyperbandPruner`. (1435) - Remove experimental decorators from hyperparameter importance (HPI) features. (1440) - Remove experimental decorator from `Study.stop`. (1450) - Remove experimental decorator from `GridSampler`. (1451) - Remove experimental decorators from `LightGBMTuner`. (1452) - Introducing `optuna.visualization.plot_param_importances`. (1299) - Rename `integration/CmaEsSampler` to `integration/PyCmaSampler`. (1325) - Match `suggest_float` and `suggest_int` specifications on `step` and `log` arguments. (1329) - `optuna.create_trial` and `Study.add_trial` to create custom studies. (1335) - Allow omitting the removal version in `deprecated`. (1418) - Mark `CatalystPruningCallback` integration as experimental. (1465) - Followup TensorBoard integration. (1475) Enhancements - Support automatic stop of `GridSampler`. (1026) - Implement fANOVA using `sklearn` instead of `fanova`. (1106) - Add a caching mechanism to make `NSGAIIMultiObjectiveSampler` faster. (1257) - Add `log` argument support for `suggest_int` of skopt integration. (1277, thanks nzw0301!) - Add `read_trials_from_remote_storage` method to Storage implementations. (1298) - Implement `log` argument for `suggest_int` of pycma integration. (1302) - Raise `ImportError` if `bokeh` version is 2.0.0 or newer. (1326) - Fix the x-axis title of the hyperparameter importances plot. (1336, thanks harupy!) - `BaseStorage.set_trial_intermediate_valute` to return `None` instead of `bool`. (1337) - Simplify log messages. (1345) - Improve layout of `plot_param_importances` figure. (1355) - Do not run the GC after every trial by default. (1380) - Skip storage access if logging is disabled. (1403) - Specify `stacklevel` for `warnings.warn` for more helpful warning message. (1419, thanks harupy!) - Replace `DeprecationWarning` with `FutureWarning` in `deprecated`. (1428) - Fix pruner index handling to 0-indexing. (1430, thanks bigbird555!) - Load environment variables in `AllenNLPExecutor`. (1449) - Stop overwriting seed in `LightGBMTuner`. (1461) Bug Fixes - Fix CMA-ES boundary constraints and initial mean vector of LogUniformDistribution. (1243) - Temporary hotfix for `sphinx` update breaking existing type annotations. (1342) - Fix for PyTorch Lightning v0.8.0. (1392) - Fix exception handling in `ChainerMNStudy.optimize`. (1406) - Use `step` to calculate range of `IntUniformDistribution` in `PyCmaSampler`. (1456) Installation - Set `python_requires` in `setup.py` to clarify supported Python version. (1350, thanks harupy!) - Specify `classifiers` in setup.py. (1358) - Hotfix to avoid latest `keras` 2.4.0. (1386) - Hotfix to avoid PyTorch Lightning 0.8.0. (1391) - Relax `sphinx` version. (1393) - Update version constraints of `cmaes`. (1404) - Align `sphinx-rtd-theme` and Python versions used on Read the Docs to CircleCI. (1434, thanks harupy!) - Remove checking and alerting installation `pfnopt`. (1474) Documentation - Fix experimental decorator. (1248, thanks harupy!) - Create a documentation for the root namespace `optuna`. (1278) - Add missing documentation for `BaseStorage.set_trial_param`. (1316) - Fix documented exception type in `BaseStorage.get_best_trial` and add unit tests. (1317) - Add hyperlinks to key features. (1331) - Add `.readthedocs.yml` to use the same document dependencies on the CI and Read the Docs. (1354, thanks harupy!) - Use `Colab` to demonstrate a notebook instead of `nbviewer`. 
(1360) - Hotfix to allow building the docs by avoiding latest `sphinx`. (1369) - Update layout and color of docs. (1370) - Add FAQ section about OOM. (1385) - Rename a title of reference to a module name. (1390) - Add a list of functions and classes for each module in reference doc. (1400) - Use `.. warning::` instead of `.. note::` for the deprecation decorator. (1407) - Always use Sphinx RTD theme. (1414) - Fix color of version/build in documentation sidebar. (1415) - Use a different font color for argument names. (1436, thanks harupy!) - Move css from `_templates/footer.html` to `_static/css/custom.css`. (1439) - Add missing commas in FAQ. (1458) - Apply auto-formatting to `custom.css` to make it pretty and consistent. (1463, thanks harupy!) - Update `CONTRIBUTING.md`. (1466) - Add missing `CatalystPruningCallback` in the documentation. (1468, thanks harupy!) - Fix incorrect type annotations for `catch`. (1473, thanks harupy!) Examples - Set `timeout` for relatively long-running examples. (1349) - Fix broken link to example and add README for AllenNLP examples. (1397) - Add whitespace before opening parenthesis. (1398) - Fix GPU run for PyTorch Ignite and Lightning examples. (1444, thanks arisliang!) Tests - Fix `plot_param_importances` test. (1328) - Fix incorrect test names in `test_experimental.py`. (1332, thanks harupy!) - Simplify decorator tests. (1423) - Add a test for `CmaEsSampler._get_trials()`. (1433) - Use argument of `pytorch_lightning.Trainer` to disable `checkpoint_callback`. (1453) Code Fixes - Introduces `optuna._imports.try_import` to DRY optional imports. (1315) - Friendlier error message for unsupported `plotly` versions. (1338) - Rename private modules in `optuna.visualization`. (1359) - Rename private modules in `optuna.pruners`. (1361) - Rename private modules in `optuna.samplers`. (1362) - Change `logger` to `_trial`'s module variable. (1363) - Remove deprecated features in `HyperbandPruner`. (1366) - Add missing `__init__.py` files. (1367, thanks harupy!) - Fix double quotes from Black formatting. (1372) - Rename private modules in `optuna.storages`. (1373) - Add a list of functions and classes for each module in reference doc. (1400) - Apply deprecation decorator. (1413) - Remove unnecessary exception handling for `GridSampler`. (1416) - Remove either `warnings.warn()` or `optuna.logging.Logger.warning()` from codes which have both of them. (1421) - Simplify usage of `deprecated` by omitting removed version. (1422) - Apply experimental decorator. (1424) - Fix the experimental warning message for `CmaEsSampler`. (1432) - Remove `optuna.structs` from MLflow integration. (1437) - Add type hints to `slice.py`. (1267, thanks bigbird555!) - Add type hints to `intermediate_values.py`. (1268, thanks bigbird555!) - Add type hints to `optimization_history.py`. (1269, thanks bigbird555!) - Add type hints to `utils.py`. (1270, thanks bigbird555!) - Add type hints to `test_logging.py`. (1284, thanks bigbird555!) - Add type hints to `test_chainer.py`. (1286, thanks bigbird555!) - Add type hints to `test_keras.py`. (1287, thanks bigbird555!) - Add type hints to `test_cma.py`. (1288, thanks bigbird555!) - Add type hints to `test_fastai.py`. (1289, thanks bigbird555!) - Add type hints to `test_integration.py`. (1293, thanks bigbird555!) - Add type hints to `test_mlflow.py`. (1322, thanks bigbird555!) - Add type hints to `test_mxnet.py`. (1323, thanks bigbird555!) - Add type hints to `optimize.py`. (1364, thanks bigbird555!) 
Continuous Integration - Add a step to push images to Docker Hub. (1295) - Check code coverage in `tests-python37` on CircleCI. (1348) - Stop building Docker images in Pull Requests. (1389) - Prevent `doc-link` from running on unrelated status update events. (1410, thanks harupy!) - Avoid latest `ConfigSpace` where Python 3.5 is dropped. (1471) Other - Add TestPyPI release to workflow. (1291) - Add PyPI release to workflow. (1306) - Exempt issues with `no-stale` label from stale bot. (1321) - Remove stale labels from Issues or PRs when they are updated or commented on. (1409) - Exempt PRs with `no-stale` label from stale bot. (1427) ```
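As a concrete illustration of the `target` argument for visualization functions introduced in 2.4.0 (described in the highlights above), here is a hedged sketch. The toy objective is invented purely for illustration; per the 2.4.0 notes, a multi-objective study requires an explicit `target` when plotting.

```python
import optuna


def objective(trial):
    # Toy two-objective problem, invented purely for illustration.
    x = trial.suggest_float("x", -10, 10)
    return x ** 2, (x - 2) ** 2


study = optuna.create_study(directions=["minimize", "minimize"])
study.optimize(objective, n_trials=50)

# For multi-objective studies, v2.4.0 raises a ValueError if `target` is None,
# so select which value to plot; here, the first objective.
fig = optuna.visualization.plot_optimization_history(
    study, target=lambda t: t.values[0], target_name="first objective"
)
fig.show()
```

The same `target`/`target_name` pattern applies to the other plotting functions listed in the 2.4.0 notes, such as `plot_contour`, `plot_slice`, and `plot_edf`.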
Links

- PyPI: https://pypi.org/project/optuna
- Changelog: https://pyup.io/changelogs/optuna/
- Homepage: https://optuna.org/