Hi OGB team,
I'm developing new submissions for the OGBL datasets. Many existing submissions evaluate the model on the full validation and test sets each epoch and keep the model parameters corresponding to the best validation metric. Since the model I'm working on is very costly to evaluate, can I instead select the best model parameters by evaluating on a subset of the validation edges, using a different (smaller) K for Hits@K? I would still report the required metrics on the full validation and test sets with the selected model.
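For concreteness, here is a sketch of the cheaper per-epoch selection I have in mind (NumPy only, with synthetic scores; `hits_at_k` is my own helper mirroring the OGB Hits@K definition, not the official `Evaluator`, and all sizes and K values below are placeholders):

```python
import numpy as np

def hits_at_k(pos_scores, neg_scores, k):
    # Hits@K as used for OGB link prediction: the fraction of positive
    # edges scored strictly higher than the K-th highest negative score.
    kth_neg = np.sort(neg_scores)[-k]
    return float(np.mean(pos_scores > kth_neg))

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, size=100_000)  # hypothetical full validation positives
neg = rng.normal(0.0, 1.0, size=100_000)  # hypothetical validation negatives

# Cheap per-epoch proxy: score only a random subset of validation
# positives and use a smaller K for model selection.
sub = rng.choice(pos, size=5_000, replace=False)
proxy_metric = hits_at_k(sub, neg, k=20)

# Final reporting (once, for the selected checkpoint): full validation
# set with the K required by the leaderboard.
full_metric = hits_at_k(pos, neg, k=50)
```

The question is whether selecting checkpoints on `proxy_metric` while reporting `full_metric` is acceptable under the submission rules.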
Thanks, Yakun