squaredev-io / whitebox

[Not Actively Maintained] Whitebox is an open source E2E ML monitoring platform with edge capabilities that plays nicely with kubernetes
https://squaredev.io/whitebox/
MIT License

Hyperparameter tuning for model #48

Open momegas opened 1 year ago

NickNtamp commented 1 year ago

Hyper-parameter tuning is pretty easy to perform using grid search (a sketch follows at the end of this comment).

There are some questions though:

cc: @momegas, @gcharis, @stavrostheocharis
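A minimal sketch of what grid-search tuning could look like, assuming a scikit-learn classifier; the estimator, parameter grid, and synthetic data below are illustrative placeholders, not Whitebox code.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Illustrative data; in practice this would be the user's training set.
X, y = make_classification(n_samples=500, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Small, cheap grid so tuning stays fast.
param_grid = {
    "n_estimators": [50, 100],
    "max_depth": [None, 5, 10],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    scoring="accuracy",
    cv=3,
)
search.fit(X_train, y_train)

print("Best params:", search.best_params_)
print("Test accuracy:", search.best_estimator_.score(X_test, y_test))
```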

stavrostheocharis commented 1 year ago

Do you think there should be a time threshold (e.g. tuning should not take more than 20 seconds)?

Do you think we should set an evaluation metric threshold (e.g. if a model achieves 90% accuracy, pick that model)?

Model training will be performed once per training set, meaning we will retrain the model only if the training set changes. Do we need to keep track of models (e.g. by using MLflow)?

Regardless of whether we use MLflow, do we need to save the hyper-parameters of the optimal model somewhere? (A sketch of both options follows this comment.)
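A minimal sketch of the two tracking options raised above, assuming a fitted scikit-learn `GridSearchCV` like the one sketched earlier; the file path, function name, and MLflow usage are illustrative, not an agreed design.

```python
import json


def save_best_params(search, path="best_params.json", use_mlflow=False):
    """Persist the winning hyper-parameters and CV score of a fitted search.

    `search` is assumed to expose best_params_ and best_score_ (as
    GridSearchCV does); everything else here is a placeholder.
    """
    record = {"params": search.best_params_, "cv_score": search.best_score_}

    # Option 1: a plain JSON file stored next to the model artifacts.
    with open(path, "w") as f:
        json.dump(record, f, indent=2)

    # Option 2: log the same information to an MLflow run (only if adopted).
    if use_mlflow:
        import mlflow

        with mlflow.start_run():
            mlflow.log_params(search.best_params_)
            mlflow.log_metric("cv_score", search.best_score_)

    return record
```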

momegas commented 1 year ago

I think it's important to keep the target of Whitebox in mind. The target is monitoring, not creating models (at least not for now). With this in mind, I think we should either do a quick round of tuning or none at all. The way I understood this issue, it would just be some adjustments to the training, not a whole separate feature.

Think about this, and if we can fit it into the timebox we have, good. Otherwise, I would look at something else.

NickNtamp commented 1 year ago

After some discussions with @stavrostheocharis, we concluded that the requirements of this task are still pretty blurry. I will try to simplify them with a few questions below, so please @momegas, let us know when you have the time.

  1. Do we want to leave room for a better model, i.e. one that predicts more accurately? This would also mean more accurate results from the explainability feature.
  2. If not, we can close the ticket. If yes, how much time are we willing to spend tuning in search of the best model? A metric threshold could also help here: for instance, if we let the search iterate through 20 different hyper-parameter combinations but acceptable performance is reached even on the 1st iteration, we stop there and keep that as the best model (see the sketch after this list).
  3. Do we want to keep track of the best hyper-parameters in some way?
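A minimal sketch of the stopping rule described in point 2, assuming scikit-learn; the search space, accuracy threshold, combination budget, and time budget are illustrative values, not agreed requirements.

```python
import time
from itertools import islice

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import ParameterGrid, cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=42)

# Illustrative search space and budgets.
param_space = {"n_estimators": [50, 100, 200], "max_depth": [None, 5, 10, 20]}
max_combinations = 20      # iterate through at most 20 combinations
accuracy_threshold = 0.90  # stop early once this is reached
time_budget_s = 20         # rough wall-clock cap, per the 20-second example

best_params, best_score = None, 0.0
start = time.time()
for params in islice(ParameterGrid(param_space), max_combinations):
    model = RandomForestClassifier(**params, random_state=42)
    score = cross_val_score(model, X, y, cv=3).mean()
    if score > best_score:
        best_params, best_score = params, score
    # Stop as soon as the metric threshold or the time budget is hit.
    if best_score >= accuracy_threshold or time.time() - start > time_budget_s:
        break

print(best_params, round(best_score, 3))
```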

momegas commented 1 year ago

I think we should not spend more time on this, as a better model will not add much value to WB at the moment since we are missing more core features. Feel free to close this if needed, @NickNtamp.

NickNtamp commented 1 year ago

Sure, I can close the ticket @momegas. Before I do, I want to remind both you and @stavrostheocharis that by not exploring hyper-parameter combinations to increase the chance of building a better model on an unknown dataset, we accept a high risk of explaining a trash model. Just imagine that we build a model with 20% accuracy and then use it for our explainability feature. (A sketch of a simple quality gate follows this comment.)
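One way to reduce that risk without full tuning could be a minimal quality gate before explainability runs; the function name and 60% threshold below are hypothetical, not part of Whitebox.

```python
def should_explain(model, X_test, y_test, min_accuracy=0.6):
    """Hypothetical guard: only run the explainability feature if the
    trained model clears a minimum accuracy on held-out data."""
    accuracy = model.score(X_test, y_test)
    if accuracy < min_accuracy:
        # e.g. surface a warning in the monitoring UI instead of explanations
        print(f"Skipping explainability: accuracy {accuracy:.0%} < {min_accuracy:.0%}")
        return False
    return True
```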

stavrostheocharis commented 1 year ago

I would keep this as an issue in the backlog so we can investigate it further and implement an enhancement in the future.

momegas commented 1 year ago

It was actually requested! You are right. I will re-open this.

NickNtamp commented 1 year ago

We should explore alternatives like https://optuna.org/ here.
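A minimal sketch of what an Optuna-based search could look like, combining the iteration budget and time threshold discussed above; the objective, search space, data, and budgets are illustrative only.

```python
import optuna
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=42)


def objective(trial):
    # Illustrative search space; a real integration would define its own.
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 50, 300),
        "max_depth": trial.suggest_int("max_depth", 2, 20),
    }
    model = RandomForestClassifier(**params, random_state=42)
    return cross_val_score(model, X, y, cv=3).mean()


study = optuna.create_study(direction="maximize")
# n_trials mirrors the "20 combinations" idea, timeout the 20-second budget.
study.optimize(objective, n_trials=20, timeout=20)

print(study.best_params, round(study.best_value, 3))
```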