PR to take advantage of the `exp-metrics` changes in the `rf-loss` branch.
Original PR comments from @jxmorris12:
- Compute metrics in the experiments class
- Add PyTorch functions for metrics in `metrics.py`, so metrics can run on GPU rather than CPU
- Add metrics for Finetuning (TODO: add metrics for Retrofitting)
- Remove metric-computation logic from `train.py`
- Support logging metrics multiple times per epoch via the `--logs_per_epoch` arg
- Add loggers everywhere (replacing simple print statements); log to both file and stdout
- Add end-to-end tests for training
- Add a `TensorRunningAverages` object that averages metrics logged multiple times, so the average can be logged to W&B instead of logging each value individually (also adds the associated `utils.py`)
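To illustrate the GPU-friendly metric idea: a metric written as a pure PyTorch function stays on whatever device its input tensors live on, so no CPU transfer is needed. This is a hypothetical sketch (the function name `accuracy` and its signature are assumptions, not necessarily what `metrics.py` contains):

```python
import torch

def accuracy(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Fraction of correct predictions. Because this uses only tensor ops,
    it runs on the GPU when `logits` and `labels` are CUDA tensors."""
    preds = logits.argmax(dim=-1)
    return (preds == labels).float().mean()
```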
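Logging to both file and stdout can be done with two handlers on one logger, roughly like the sketch below (the helper name `get_logger` is an assumption; the actual setup in this PR may differ):

```python
import logging
import sys

def get_logger(log_path: str) -> logging.Logger:
    """Return a logger that writes every record to both stdout and a file.
    Sketch only: repeated calls would attach duplicate handlers."""
    logger = logging.getLogger("train")
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.StreamHandler(sys.stdout))
    logger.addHandler(logging.FileHandler(log_path))
    return logger
```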
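A minimal sketch of what a `TensorRunningAverages`-style object might look like (the method names `update` and `get_and_clear` are assumptions for illustration, not the actual interface in `utils.py`): it accumulates each named metric across log steps and hands back one mean per logging window, so only the average is sent to W&B.

```python
from collections import defaultdict

import torch

class TensorRunningAverages:
    """Accumulates named scalar tensors and reports their running means."""

    def __init__(self) -> None:
        self._sums = defaultdict(float)
        self._counts = defaultdict(int)

    def update(self, name: str, value: torch.Tensor) -> None:
        # detach() so accumulating a metric never holds onto the autograd graph
        self._sums[name] += value.detach().item()
        self._counts[name] += 1

    def get_and_clear(self, name: str) -> float:
        """Return the mean of all values logged under `name`, then reset it."""
        avg = self._sums[name] / max(self._counts[name], 1)
        self._sums[name] = 0.0
        self._counts[name] = 0
        return avg
```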