Skore lets you "Own Your Data Science." It provides a user-friendly interface to track and visualize your modeling results, and to evaluate your machine learning models with scikit-learn.
Is your feature request related to a problem? Please describe.
As a data scientist, I want to be able to compare the several cross_validation runs I have done.
Describe the solution you'd like
In a first iteration, to simplify things, the user will have to state which runs are comparable. By default, every run lands in the same plot. As a user, I must be able to delete some runs (in case I ran too many of them).
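To make the intent concrete, here is a minimal sketch of the workflow this request has in mind, using plain scikit-learn and matplotlib rather than any actual skore API; the `runs` dictionary and the plotting layout are assumptions for illustration only.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Two cross-validation runs the user may want to compare.
runs = {
    "logistic_regression": cross_validate(LogisticRegression(max_iter=1000), X, y, cv=5),
    "decision_tree": cross_validate(DecisionTreeClassifier(random_state=0), X, y, cv=5),
}

# By default, everything lands in the same plot; "deleting a run" amounts to
# dropping its entry from `runs` before re-plotting.
for name, result in runs.items():
    plt.plot(result["test_score"], marker="o", label=name)
plt.xlabel("CV fold")
plt.ylabel("test score")
plt.legend()
plt.show()
```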
The "information" panel must display:
the training dataframe name + its uuid
when relevant, the target name and its uuid
the estimator name & parameters
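As a rough illustration of the panel contents, here is a minimal sketch of the metadata that could be recorded per run. The field names (`dataframe_name`, `dataframe_uuid`, etc.) are hypothetical placeholders, not an agreed schema; only `estimator.get_params()` is actual scikit-learn.

```python
import uuid

from sklearn.linear_model import LogisticRegression

estimator = LogisticRegression(C=0.1, max_iter=1000)

# One entry of the "information" panel; keys are placeholders, not a real schema.
run_info = {
    "dataframe_name": "train_df",               # the training dataframe name
    "dataframe_uuid": str(uuid.uuid4()),         # + its uuid
    "target_name": "churned",                    # when relevant, the target name
    "target_uuid": str(uuid.uuid4()),            # and its uuid
    "estimator_name": type(estimator).__name__,  # the estimator name
    "estimator_params": estimator.get_params(),  # & parameters
}
print(run_info)
```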
The dataframe & the target are not saved because it would be too heavy. However, to recognize them, we can save:
their hash
the following set {nb_col, nb_rows}. This is suboptimal, but good for an MVP. In an iteration, we can think about storing a skrub table report.
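A minimal sketch of such a lightweight fingerprint, assuming pandas dataframes and a hypothetical `fingerprint` helper: the hash comes from `pandas.util.hash_pandas_object` and the set {nb_col, nb_rows} from `df.shape`, so the data itself never needs to be stored.

```python
import hashlib

import pandas as pd


def fingerprint(df: pd.DataFrame) -> dict:
    """Return a hash and the shape of `df` instead of storing the data itself."""
    row_hashes = pd.util.hash_pandas_object(df, index=True).values
    digest = hashlib.sha256(row_hashes.tobytes()).hexdigest()
    return {"hash": digest, "nb_rows": df.shape[0], "nb_col": df.shape[1]}


df = pd.DataFrame({"age": [25, 32, 47], "income": [40_000, 52_000, 61_000]})
print(fingerprint(df))
```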
Describe alternatives you've considered, if relevant
No response
Additional context
No response