Closed alphinside closed 4 years ago
Is there any other way so that we could validate only once, rather than running the model twice, once for the metrics and once for the loss? Probably get the output of the model from the predictor using a forward hook?
Currently I haven't found any good design/implementation for that; maybe it can be improved in a future issue, I think.
So the current problem with not using the original BaseValidator class is the use of the predictor, right? Why not use a forward hook?
I haven't heard of this method before, so currently I have no idea how to implement it or what its impact on our predictor design would be.
You can learn about it in this post: https://blog.paperspace.com/pytorch-hooks-gradient-clipping-debugging
A forward hook is basically a function that is executed after the forward pass; you save the output value inside the hook function to be used later on.
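To make this concrete, here is a minimal sketch of that pattern (not from this PR; the `nn.Linear` model and the `captured` dict are just placeholders for illustration). The hook captures the module's output during a single forward pass, so both the loss and the metrics could be computed from the same cached tensor:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real model under validation.
model = nn.Linear(4, 2)

captured = {}

def save_output_hook(module, inputs, output):
    # Runs automatically right after module.forward(); stash the
    # output so loss and metrics can reuse it without a second pass.
    captured["output"] = output.detach()

handle = model.register_forward_hook(save_output_hook)

x = torch.randn(8, 4)
y = model(x)  # one forward pass; the hook fires here

# The cached tensor is exactly the forward output.
assert torch.equal(captured["output"], y.detach())

handle.remove()  # detach the hook when it is no longer needed
```

One design caveat: the hook holds a reference to the output tensor, so in a real validation loop you would overwrite or clear `captured` each batch to avoid keeping old activations alive.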
## Type of changes
Please check the type of change your PR introduces:

- [ ] Bugfix
- [x] Feature
- [ ] Code style update (formatting, renaming)
- [ ] Refactoring (no functional changes, no api changes)
- [ ] Build related changes
- [ ] Documentation content changes
- [ ] Other (please describe):

## What is the current behavior?

No mechanism to calculate loss on validation data

- close #60

## What is the new behavior?

- Add validation loss to experiment logger

## Checklist

## Changelog

[Unreleased]