swamidass closed this issue 11 months ago
I did find this blog post:
https://www.comet.com/site/blog/building-reliable-machine-learning-models-with-cross-validation/
Is the approach described there "best practice"? Notably, it trains all folds in the same process, so it does not allow for distributed training, where each fold is trained in a separate process and the results are aggregated at the end.
Thanks for the update, but this is outside the scope of the Comet issue tracker. Good luck with your ML training!
Before Asking:
What is your question?
What are the best practices for tracking cross-validation experiments? Should a hyperparameter be added for each fold? Should steps be reset to zero for each fold, or not? Should each fold be a separate experiment? Does Comet have any built-in logic/intelligence for managing k-fold cross-validation?
What have you tried?
There are several ways to store training information from each fold's run, but it is not clear which of them enables easy grouping and plotting of metrics across all folds together or for each fold individually.
What are the best-practice ways of tracking multiple folds in a cross-validation experiment using Comet ML?
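One pattern I have tried (not an official Comet recommendation, just a sketch of the bookkeeping) is: one experiment per fold, a shared tag so the folds can be grouped in the UI, the fold index logged as a parameter, and steps restarting at zero within each fold. The stand-in `Experiment` class below is hypothetical and stdlib-only so the sketch runs anywhere; in real code each `Experiment(...)` would be a `comet_ml.Experiment` and the `log_parameter`/`log_metric` calls would go to Comet.

```python
import statistics

class Experiment:
    """Hypothetical stand-in for comet_ml.Experiment; records what would be logged."""
    def __init__(self, tags=None):
        self.tags = list(tags or [])
        self.params = {}
        self.metrics = []  # (name, value, step) tuples

    def log_parameter(self, name, value):
        self.params[name] = value

    def log_metric(self, name, value, step):
        self.metrics.append((name, value, step))

def k_fold_indices(n, k):
    """Yield (train, val) index lists for k folds of n samples."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

def run_cv(data, k, group_tag):
    """One experiment per fold, all tagged with group_tag for grouping."""
    experiments = []
    for fold, (train, val) in enumerate(k_fold_indices(len(data), k)):
        exp = Experiment(tags=[group_tag])   # separate experiment per fold
        exp.log_parameter("fold", fold)      # fold index as a parameter
        for step in range(3):                # steps reset to 0 for each fold
            # placeholder "loss" standing in for a real validation metric
            exp.log_metric("val_loss", 1.0 / (step + 1 + fold), step=step)
        experiments.append(exp)
    return experiments

exps = run_cv(list(range(10)), k=5, group_tag="cv-run-001")
# Aggregate across folds at the end, e.g. mean of the final val_loss per fold.
final_losses = [e.metrics[-1][1] for e in exps]
print(len(exps), round(statistics.mean(final_losses), 3))
```

With this layout, filtering the Comet UI on the shared tag shows all folds of one cross-validation run side by side, while the `fold` parameter distinguishes the individual curves.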