adamfarquhar opened 4 years ago
Hey Adam, this issue makes a very good point about why we should not use dicts as metrics for fastai; I'm on board with that.
The current implementation (using dicts) was always a quick fix in my mind; the better solution would be to convert each of the items in that dict into a separate fastai metric. Help from fastai experts would be appreciated here =)
What we need is an adapter that handles all of this. For reference, this is what the current fastai adapter looks like:
```python
class FastaiMetricAdapter(fastai.Metric):
    def __init__(self, metric: Metric):
        self.metric = metric

    def reset(self):
        pass

    def accumulate(self, learn: fastai.Learner):
        self.metric.accumulate(records=learn.records, preds=learn.converted_preds)

    @property
    def value(self) -> Dict[str, float]:
        return self.metric.finalize()

    @property
    def name(self) -> str:
        return self.metric.name
```
Note that this subclasses `fastai.Metric`. The point at issue is in `value`: fastai expects a float to be returned, but we're returning a dict.
To be clear, we should not change the return value of `metric.finalize()` (this is the icevision implementation); instead we need to figure out how to write an adapter that handles this difference (converting a dict of values into multiple `fastai.Metric` instances).
A hotfix was implemented in #459; it only works when the metric returns a single value.
```python
@property
def value(self) -> Dict[str, float]:
    logs = self.metric.finalize()
    return next(iter(logs.values()))
```
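For a metric that returns several values (as COCOMetric does), `next(iter(logs.values()))` keeps only the first entry and silently drops the rest (the dict contents below are illustrative):

```python
# Illustrative dict of the shape metric.finalize() might return
logs = {"mAP": 0.42, "mAP_small": 0.11, "mAP_large": 0.55}

first = next(iter(logs.values()))
print(first)  # 0.42 -- mAP_small and mAP_large never reach the recorder
```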
Hey guys, can you please assign this one to me?
As discussed on Discord, I think this would be a better match for me right now! Just for the sake of clarity, I will unassign myself.
@potipot do you think this is still relevant?
The current adapter for fastai still returns a scalar. What's worse, in the case of COCOMetric it only keeps track of the first value:
```python
class FastaiMetricAdapter(fastai.Metric):
    def __init__(self, metric: Metric):
        self.metric = metric

    def reset(self):
        pass

    def accumulate(self, learn: fastai.Learner):
        self.metric.accumulate(preds=learn.converted_preds)

    @property
    def value(self) -> Dict[str, float]:
        # return self.metric.finalize()
        # HACK: Return single item from dict
        logs = self.metric.finalize()
        return next(iter(logs.values()))  # <------------------- HERE

    @property
    def name(self) -> str:
        return self.metric.name
```
I guess this is an issue that could be resolved in a bigger Metric refactor. As Lucas suggests, splitting a multi-value metric into individual metrics for fastai compliance might be the right way to go here.
Ok, will keep this open then.
In fastai, various callbacks can access and use recorded metrics. For example, SaveModelCallback can save the model when it is better than the previous best. It looks at the values in the learner's recorder for this information (learn.recorder.values). Normally it expects to see a scalar value, but icevision appears to stash a dict there. For example, when using COCOMetric, learn.recorder.values[0][2] is a dict with a single key: {'mAPI': 0.1234}. If the metric values were stashed directly, the existing machinery would work. This would require a little additional adaptation to provide the metric name as well.
This callback can be used as-is, but only with valid_loss or training_loss rather than the metric (which is better aligned with the model's task performance).
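The failure mode is easy to see in isolation. fastai's tracker callbacks compare the monitored value against the best seen so far; the sketch below is a simplified stand-in for that comparison (not the real callback code) and shows why a stashed dict breaks where a scalar works:

```python
import operator

def is_new_best(current, best, comp=operator.lt):
    """Simplified stand-in for the comparison SaveModelCallback performs."""
    return best is None or comp(current, best)

print(is_new_best(0.30, 0.45))  # True: a scalar metric compares cleanly

try:
    is_new_best({"mAP": 0.1234}, 0.45)  # a stashed dict cannot be compared
except TypeError as e:
    print("TypeError:", e)
```

With per-metric scalars in the recorder, SaveModelCallback's `monitor` argument could point at the metric directly instead of falling back to a loss.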