As per this comment on the commit, computing the metrics (precision, recall, accuracy) during training freezes the setup.
The call to the update function at each step is costly (it slows training down by about 30%, and I doubt that's just because it's unoptimized), but it is non-blocking. In contrast, the call to the compute function at the end of the epoch is blocking and never returns.
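In case it helps, here is a minimal, self-contained sketch of the pattern described above. It assumes a torchmetrics-style stateful metric API (update per step, compute per epoch); the library choice, tensor shapes, and loop bounds are placeholders, not the actual training code:

```python
import torch
from torchmetrics import Accuracy, Precision, Recall

# Placeholder dimensions; the real model and dataloader are not shown here.
num_classes = 10
batch_size = 32

metrics = {
    "accuracy": Accuracy(task="multiclass", num_classes=num_classes),
    "precision": Precision(task="multiclass", num_classes=num_classes),
    "recall": Recall(task="multiclass", num_classes=num_classes),
}

for epoch in range(2):
    for step in range(100):  # one epoch of training steps
        # Stand-ins for model output (logits) and labels.
        preds = torch.randn(batch_size, num_classes)
        target = torch.randint(0, num_classes, (batch_size,))
        for metric in metrics.values():
            # Per step: costly (~30% slowdown reported) but non-blocking.
            metric.update(preds, target)
    for name, metric in metrics.items():
        # Per epoch: blocking; this is the call that never returns in the
        # reported setup.
        value = metric.compute()
        metric.reset()
        print(f"epoch {epoch} {name}: {value.item():.4f}")
```

In this isolated form, compute() returns without issue; the hang only shows up in the full training setup.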
Help wanted!