Closed fkiraly closed 2 months ago
Attention: Patch coverage is 91.66667% with 1 line in your changes missing coverage. Please review.

Project coverage is 90.19%. Comparing base (0a1e784) to head (5f50b20).
| Files with missing lines | Patch % | Lines |
|---|---|---|
| ...sting/models/temporal_fusion_transformer/tuning.py | 91.66% | 1 Missing :warning: |
If you do not mind, I would request you to reconsider these changes in this PR. I have two points for you to consider.

1. In `sktime`, we quite often face issues because a lot of dependencies are part of the public `all_extras` set, and their interdependencies and conflicts create weird situations in testing and debugging. I think it may be better to make a plan with everyone who is planning to help with maintenance here, e.g. Benedikt, Xinyu, Felix, etc.
2. `pytorch-forecasting` is quite popular along with its tuning interface. If it suddenly loses those as part of the main installation, I fear that may appear as a significant breaking change for a considerable part of its user base.
Ok, point taken.
Some comments and options for discussion.
- `pytorch-forecasting` had this policy before, with `optuna`. Namely, the additional soft dep `optuna-integration` was suddenly required as `optuna` reached 3.3.0. Users who went through this already have an explicit install of `optuna-integration`, which requires `optuna`, so their code will not break.
- That is, affected users already have `optuna-integration` installed, which installs `optuna`, which in turn prevents this code from breaking. Is this logically correct?
- Another option is the `sktime` deprecation strategy, where we announce, say, 3 months in advance that certain packages will move to be soft deps. As said, `pytorch-forecasting` users will be familiar with the situation (and likely watch for such messages) since `optuna` did a similar thing recently, see above.

Pinging everyone @yarnabrina mentioned: @benHeid, @XinyuWuu, @fnhirwa
On a more architectural level, and in terms of defining scope, the tuner seems more like something that should live in `optuna-integration` rather than in `pytorch-forecasting` proper. This is already apparent from the coupling, which is very localized, and from the general scope of content in `pytorch-forecasting`. This is perhaps not a change we want to make; it is just an architectural statement from the a-priori perspective of "all other things being equal" (e.g., not having to worry about breaking code, separate package management processes, testing, etc., which we obviously do have to).

The closest thing to actually making it external is making it a soft dependency, which mitigates concerns like integration, release, and testing.
Side note: my assessment is different for the `sktime` `optuna` tuner. That one is `sktime` API compliant, and `sktime` has its own dependency management layer; these two arguments make it less clear where it should live, although it could also live in `optuna-integration` and be indexed from `sktime`.
On a more personal usage level, i.e., considering myself as a user and my opinions in that respect:

- I do not expect a hard dependency on `optuna` (unless of course it's a package primarily about tuning), since there are many similar packages and they all tend to have heavy-ish dependencies.
- Similarly for `matplotlib`, since it's unrelated in scope and it's front-end, so I cannot run the business logic without front-end dependencies. It's also a relatively heavy import.

From discussion on Discord: agreed on a compromise solution where the tuning soft deps go into a `tuning` soft dependency set.
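The compromise could look roughly like the following `pyproject.toml` fragment. This is a hedged sketch only; the actual group names and version pins in the merged PR may differ:

```toml
# Hypothetical sketch of the agreed extras layout, not the merged configuration.
[project.optional-dependencies]
tuning = [
    "optuna",
    "optuna-integration",
    "statsmodels",
]
all_extras = [
    # self-referential extra collecting the tuning set, plus any other sets
    "pytorch-forecasting[tuning]",
]
```

With such a layout, `pip install pytorch-forecasting[tuning]` restores the previous tuning behaviour for affected users.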
Isolates `optuna`, `optuna-integration`, and `statsmodels` as soft dependencies, in a new soft dep set `all_extras` to collect all soft dependencies, and in another soft dep set `tuning`.

Towards https://github.com/jdb78/pytorch-forecasting/issues/1616.

This was quite simple, since the imports happen in only one very narrow location, the `optimize_hyperparameters` utility, which is also not core architecturally.