nodefluxio / vortex

A Deep Learning Model Development Framework for Computer Vision

Validation loss #68

Closed alphinside closed 4 years ago

alphinside commented 4 years ago

Type of changes

Please check the type of change your PR introduces:

- [ ] Bugfix
- [x] Feature
- [ ] Code style update (formatting, renaming)
- [ ] Refactoring (no functional changes, no api changes)
- [ ] Build related changes
- [ ] Documentation content changes
- [ ] Other (please describe):

## What is the current behavior?

No mechanism to calculate loss on validation data

- close #60

## What is the new behavior?

- Add validation loss to experiment logger

## Checklist
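As a rough illustration of the feature described above, a validation-loss pass boils down to running the criterion over the validation loader in eval mode. This is a minimal sketch with toy stand-ins (`model`, `criterion`, `val_loader` are illustrative names, not the vortex API):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy model and validation data; stand-ins for the real experiment objects.
model = nn.Linear(4, 2)
criterion = nn.CrossEntropyLoss()
val_loader = DataLoader(
    TensorDataset(torch.randn(16, 4), torch.randint(0, 2, (16,))),
    batch_size=8,
)

model.eval()
total_loss, n_batches = 0.0, 0
with torch.no_grad():  # no gradients needed during validation
    for inputs, targets in val_loader:
        total_loss += criterion(model(inputs), targets).item()
        n_batches += 1

# Averaged scalar that would be reported to the experiment logger.
val_loss = total_loss / n_batches
```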
alphinside commented 4 years ago

Is there any other way we could validate only once, rather than twice (once for metrics and once for loss)? Perhaps by getting the model's output from the predictor using a forward hook?

Currently I haven't found a good design for that; it can probably be improved in a future issue, I think.

triwahyuu commented 4 years ago

So, the current problem with not using the original basevalidator class is the use of the predictor, right? Why not use a forward hook?

alphinside commented 4 years ago

> so, the current problem to not use the original basevalidator class is because the use of predictor right? why not use a forward hook?

I haven't heard of this method before, so I have no idea yet how to implement it or what its impact would be on our predictor design.

triwahyuu commented 4 years ago

You can learn about it in this post: https://blog.paperspace.com/pytorch-hooks-gradient-clipping-debugging

triwahyuu commented 4 years ago

A forward hook is basically a function that is executed after the forward pass, so you can save the output value inside the hook function to be used later on.