Closed CKocher closed 1 year ago
Thanks for your interest in this work, @CKocher.
As you may have noticed here and here, we use exactly the same learning rates and number of epochs across all of our tasks and domains.
We never found the need to perform any serious hyperparameter tuning: for most domains, PLOI was quite stable (and was only ever trained on small problem instances). But it is quite possible that your problem domains are more involved and do need tuning -- in which case, I would recommend beginning with a sanity-check experiment where you overfit to your training dataset (or at least to a few minibatches).
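For reference, here is a minimal sketch of that sanity check (this is not PLOI's actual training code; the tiny MLP and random tensors are stand-ins for your model and a few cached minibatches). If the training loss does not drop substantially when looping over the same few batches, something is likely wrong with the model, the loss, or the data pipeline rather than the hyperparameters.

```python
# Overfitting sanity check: train on a handful of fixed minibatches and
# verify the loss collapses. Stand-in model/data -- replace with your own.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A few cached minibatches of (features, binary labels).
batches = [(torch.randn(8, 16), torch.randint(0, 2, (8, 1)).float())
           for _ in range(3)]

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

first_loss, last_loss = None, None
for epoch in range(300):
    for x, y in batches:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    if first_loss is None:
        first_loss = loss.item()
    last_loss = loss.item()

print(f"loss after first epoch: {first_loss:.4f}, final loss: {last_loss:.4f}")
```

With a small fixed dataset, the final loss should be far below the initial one; if it isn't, fix that before touching the learning rate or epoch count.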
(Closing, because the issue may not be directly relevant to the code. Happy to keep the dialogue going though)
Hello everyone,
thank you very much for sharing this code! I'm currently trying to use PLOI for my thesis, but I'm struggling a lot with hyperparameter tuning for the PLOI approach on my custom data. For now, I want to optimize the number of epochs and the learning rate. Could you point me to an approach, or maybe even some code? I tried the Optuna library, but I didn't manage to get it to work.
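(In case it helps anyone reading later: if Optuna proves hard to wire up, a plain random search over the learning rate and epoch count makes a reasonable baseline. Below is a minimal sketch; `evaluate` is a hypothetical stand-in that would, in practice, train PLOI with the sampled hyperparameters and return a validation loss.)

```python
# Random-search sketch over learning rate (log-uniform) and epoch count.
# `evaluate` is a dummy objective -- replace it with a function that trains
# your model with the given hyperparameters and returns validation loss.
import math
import random

random.seed(0)

def evaluate(lr, epochs):
    # Dummy stand-in with a minimum near lr=1e-3, epochs=400.
    return (math.log10(lr) + 3) ** 2 + ((epochs - 400) / 400) ** 2

best = None
for trial in range(50):
    lr = 10 ** random.uniform(-5, -1)   # sample lr log-uniformly in [1e-5, 1e-1]
    epochs = random.randint(50, 1000)
    score = evaluate(lr, epochs)
    if best is None or score < best[0]:
        best = (score, lr, epochs)

print(f"best loss {best[0]:.4f} at lr={best[1]:.2e}, epochs={best[2]}")
```

Sampling the learning rate on a log scale matters: learning rates that work tend to be spread over orders of magnitude, so uniform sampling in linear space wastes most trials at the top of the range.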
I'm fairly new to the field so any help would be super highly appreciated!
Greetings from Germany :)