pollackscience closed this issue 2 years ago
Hi, I'm a bot from the Ray team :)
To help human contributors focus on more relevant issues, I will automatically add the stale label to issues that have had no activity for more than 4 months.
If there is no further activity in the next 14 days, the issue will be closed!
You can always ask for help on our discussion forum or Ray's public slack channel.
Hi again! This issue will be closed because there has been no further activity in the 14 days since the last message.
Please feel free to reopen or open a new issue if you'd still like it to be addressed.
Again, you can always ask for help on our discussion forum or Ray's public slack channel.
Thanks again for opening the issue!
Search before asking
Ray Component
Ray Tune
What happened + What you expected to happen
I'm training a binary classifier on a massively imbalanced dataset. While doing hyperparameter search with ray.tune, I've noticed that the checkpointed 'best model' does not reproduce the reported score when run on the identical evaluation set. The difference can be very large.
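Concretely, the check I'm doing looks like the helper below (a sketch; "model.xgb" is the filename my checkpoint callback writes, and the checkpoint directory is whatever tune reports as best):

```python
import os

import xgboost as xgb
from sklearn.metrics import roc_auc_score


def rescore_best_checkpoint(best_checkpoint_dir, test_x, test_y):
    """Reload the booster Tune checkpointed as "best" and re-score the
    identical held-out set that produced the reported eval-auc."""
    best_bst = xgb.Booster()
    # "model.xgb" is the filename the checkpoint callback was given.
    best_bst.load_model(os.path.join(best_checkpoint_dir, "model.xgb"))
    preds = best_bst.predict(xgb.DMatrix(test_x))
    return roc_auc_score(test_y, preds)
```

The AUC this returns is noticeably lower than the eval-auc tune reported for that same checkpoint.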
Versions / Dependencies
Python 3.8.10
Ray 1.8.0
RHEL 8.4
Reproduction script
This sample script roughly mimics my use case. It is mainly based on https://docs.ray.io/en/latest/tune/tutorials/tune-xgboost.html.
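In outline, it looks like the sketch below (the synthetic make_classification data and the tiny search space are illustrative stand-ins for my real data and search):

```python
import os

import sklearn.datasets
import sklearn.metrics
from sklearn.model_selection import train_test_split
import xgboost as xgb

from ray import tune
from ray.tune.integration.xgboost import TuneReportCheckpointCallback


def make_data():
    # ~1% positives, to mimic the massive class imbalance.
    data, labels = sklearn.datasets.make_classification(
        n_samples=100_000, n_features=20, weights=[0.99], random_state=0)
    return train_test_split(data, labels, test_size=0.25, random_state=0)


def train_imbalanced(config):
    train_x, test_x, train_y, test_y = make_data()
    train_set = xgb.DMatrix(train_x, label=train_y)
    test_set = xgb.DMatrix(test_x, label=test_y)
    xgb.train(
        config,
        train_set,
        evals=[(test_set, "eval")],
        verbose_eval=False,
        # Reports eval-* metrics to Tune and checkpoints the booster
        # (by default this callback only checkpoints every 5th iteration,
        # which may be related to what I'm seeing).
        callbacks=[TuneReportCheckpointCallback(filename="model.xgb")])


config = {
    "objective": "binary:logistic",
    "eval_metric": ["logloss", "auc"],
    "max_depth": tune.randint(1, 9),
    "eta": tune.loguniform(1e-4, 1e-1),
}

analysis = tune.run(
    train_imbalanced,
    config=config,
    metric="eval-auc",
    mode="max",
    num_samples=10)

print("best reported eval-auc:", analysis.best_result["eval-auc"])

# Reload the "best" checkpoint and re-score the identical held-out set.
_, test_x, _, test_y = make_data()
best_bst = xgb.Booster()
best_bst.load_model(os.path.join(analysis.best_checkpoint, "model.xgb"))
preds = best_bst.predict(xgb.DMatrix(test_x))
print("reloaded eval-auc:", sklearn.metrics.roc_auc_score(test_y, preds))
```

The final two prints are where the mismatch shows up.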
Anything else
When I run this example, ray.tune reports the best AUC as 0.5266, but when I load the checkpoint and run it on the same test data, I get AUC = 0.5179. When I run my actual data, tune reports an AUC of 0.99+, but loading the "best" checkpoint gives me an AUC of ~0.5. If I then train a new model from scratch with the best hyperparameters, the new model gets an AUC very close to the tune report. I think having a massively imbalanced dataset worsens this issue, but I'm not sure. This could be related to #19173, since it seems that checkpointed models are only saved every 5th iteration.
Are you willing to submit a PR?