aeon-toolkit / aeon

A toolkit for machine learning from time series
https://aeon-toolkit.org/
BSD 3-Clause "New" or "Revised" License

[ENH] Move performance metrics to benchmarking #2319

Closed · TonyBagnall closed this 2 weeks ago

TonyBagnall commented 2 weeks ago

I don't feel strongly about this, and it's fine for them to remain where they are if anyone objects. On balance, though, I favour keeping the root as slim as possible and encapsulating where we can; performance_metrics seems better placed in benchmarking to me.

Update: PyCharm refactoring is so bad!
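
To make the move concrete, here is a minimal sketch of how downstream imports could bridge the old and new locations. The `aeon.benchmarking.metrics` path and the `anomaly_detection` submodule name are assumptions based on the PR title, not read from the diff; check the merged changes for the actual layout.

```python
# A minimal compatibility sketch for downstream imports, assuming only the
# parent package changes (aeon.performance_metrics -> aeon.benchmarking.metrics)
# and the submodule names stay the same. The anomaly_detection submodule is
# used purely for illustration.
try:
    # location proposed by this PR (assumed path)
    from aeon.benchmarking.metrics import anomaly_detection as ad_metrics
except ImportError:
    # pre-PR location at the package root
    from aeon.performance_metrics import anomaly_detection as ad_metrics

# ad_metrics resolves to the same module on either side of the move, so call
# sites that use its functions need no further changes.
```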

aeon-actions-bot[bot] commented 2 weeks ago

Thank you for contributing to aeon

I have added the following labels to this PR based on the title: [ enhancement ]. I would have added the following labels to this PR based on the changes made: [ benchmarking, visualisation ], however some package labels are already present.

The Checks tab will show the status of our automated tests. You can click on individual test runs in the tab or "Details" in the panel below to see more information if there is a failure.

If our pre-commit code quality check fails, any trivial fixes will automatically be pushed to your PR unless it is a draft.

If you have any questions, don't hesitate to ask on the aeon Slack channel.

PR CI actions

These checkboxes will add labels to enable/disable CI functionality for this PR. This may not take effect immediately, and a new commit may be required to run the new configuration.

TonyBagnall commented 2 weeks ago

refactoring too tedious