Yeah I think a boolean on the Trainer and Evaluator as you've suggested offline makes sense to me. Will work on that after #83
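Roughly what I have in mind, as a sketch only -- `log_predictions` is a placeholder name, not a settled flambé signature:

```python
# Sketch of the proposed toggle -- `log_predictions` is a made-up name,
# not flambé's actual Trainer/Evaluator signature.
class Evaluator:
    def __init__(self, model, data, log_predictions=False):
        self.model, self.data = model, data
        self.log_predictions = log_predictions

    def run(self):
        rows = [(x, self.model(x), y) for x, y in self.data]
        if self.log_predictions:
            for x, pred, target in rows:  # in practice this would go to disk
                print(f"input={x!r} pred={pred!r} target={target!r}")
        return rows

# Toy usage with a stand-in "model":
Evaluator(model=str.upper, data=[("a", "A"), ("b", "B")], log_predictions=True).run()
```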
Don't mean to harp on it but... I see this being significantly less useful without ids 😬 Because then you're forced into `shuffle=False`, and can't use `drop_last=True`, in `BaseSampler`, right?
Actually, meh, nvm.
Let's say that we get 2/3 of the usefulness from just writing to disk, and the remaining 1/3 with ids.
With just the former, we can still do all the global aggregations.
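To spell out the ids point: without them, the join back to examples is purely positional, which is exactly what forces those sampler settings; with them, it isn't. A toy sketch (plain Python, everything here invented for illustration):

```python
# Without ids, alignment is purely positional: iteration must be
# deterministic (shuffle=False) and nothing dropped (drop_last=False),
# or this zip silently pairs the wrong rows.
examples = ["ex0", "ex1", "ex2"]
predictions = [1, 0, 2]
by_position = list(zip(examples, predictions))

# With ids, the join survives shuffling and dropped batches.
predictions_with_ids = [{"id": 2, "pred": 2}, {"id": 0, "pred": 1}]
by_id = {p["id"]: p["pred"] for p in predictions_with_ids}

print(by_position)
print(by_id)
```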
That's a great idea! However, while we're at it: would it be possible to make this more general, and more user-controllable? I.e., could we maybe have some general "results-writer" that can be triggered for some things automatically, but that could also be included in arbitrary models?
E.g., something like an automatic boolean called `log` that wraps each model such that the input-output pairs of the `nn.Module` get stored? But additionally having something like a `results_writer` object?
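Something along these lines, maybe? This sketch uses PyTorch's real `register_forward_hook` API, but the `ResultsWriter` class and all the names are made up:

```python
import torch
from torch import nn

class ResultsWriter:
    """Made-up results-writer: stores (input, output) pairs from any
    nn.Module via a forward hook."""

    def __init__(self):
        self.records = []

    def attach(self, module: nn.Module):
        # register_forward_hook is real PyTorch API: the hook fires after
        # every forward pass with the module's inputs and output.
        module.register_forward_hook(
            lambda mod, inputs, output: self.records.append(
                (tuple(i.detach() for i in inputs), output.detach())
            )
        )

writer = ResultsWriter()
model = nn.Linear(4, 2)
writer.attach(model)
model(torch.randn(3, 4))
print(len(writer.records))  # one stored (input, output) pair
```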
@cle-ros That's a sweet idea. I'll think about it after the next release!
**Is your feature request related to a problem? Please describe.**
Use-case: I use flambé both to debug models and then to grid-search over the stuff I'm happy with.
To debug, I often need to see the predictions the model is making. This includes (in a classification problem) the predicted index and a map from the index to its label.
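Concretely, this is the kind of lookup I mean (toy numbers, and the label map is made up):

```python
import torch

logits = torch.tensor([0.1, 2.3, -0.7])           # one example's scores
pred_idx = int(logits.argmax())                   # predicted index -> 1
idx_to_label = {0: "neg", 1: "pos", 2: "neutral"} # made-up label map
print(pred_idx, idx_to_label[pred_idx])
```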
**Describe the solution you'd like**
Some option in `Trainer` (re: predicting on the val set) and `Evaluator` (re: predicting on the test set) that logs predictions for me in a thorough manner -- all the things I'd want to inspect offline, in other words. This would include: the inputs, the full predicted output, and the target. Thereafter, I would load this data into (say) a notebook and start to inspect what's going on.
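If the dump were, say, JSONL, the notebook side could be as simple as this sketch (file name and fields are illustrative, not an existing format):

```python
import json
import pandas as pd

# Made-up dump format: one JSON record per example.
with open("val_predictions.jsonl", "w") as f:
    f.write(json.dumps({"input": "great movie", "pred": 1, "target": 1}) + "\n")
    f.write(json.dumps({"input": "meh", "pred": 1, "target": 0}) + "\n")

# Notebook side: load everything and slice straight into the mistakes.
df = pd.read_json("val_predictions.jsonl", lines=True)
errors = df[df["pred"] != df["target"]]
print(errors)
```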