Hi, what we mean is to compute the sum of the mean-squared loss of each vertex, and then normalize it by the diameter of the box.
```python
dist = self._mean_squared_error_per_landmark(gt, pred)
return dist / (self._length_per_instance(gt) + keras_backend.epsilon())
```
Here `_length_per_instance` is the distance between two particular pre-selected vertices of the box (the box's diagonal).
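For reference, here is a minimal NumPy sketch of that computation as I understand it from the description above. The function name, the choice of diagonal vertex indices, and the epsilon value are illustrative assumptions, not the actual Objectron implementation.

```python
import numpy as np

def per_vertex_mse_normalized(gt, pred, diag_idx=(0, 7), eps=1e-7):
    """Sketch: per-vertex MSE summed over vertices, normalized by the
    ground-truth box diagonal.

    gt, pred: (num_vertices, 3) arrays of 3D box vertices.
    diag_idx: indices of the two pre-selected vertices whose distance is
              taken as the box diagonal (an assumed choice here).
    """
    # Mean squared error per vertex over the coordinate axes, then summed
    # over all vertices.
    per_vertex_mse = np.mean((gt - pred) ** 2, axis=-1)  # shape: (num_vertices,)
    dist = np.sum(per_vertex_mse)
    # Diagonal length of the ground-truth box, used as the normalizer.
    diag = np.linalg.norm(gt[diag_idx[0]] - gt[diag_idx[1]])
    # Normalize, guarding against a zero-length diagonal with eps.
    return dist / (diag + eps)
```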
Hi, thanks for the info!
According to Table 3 in the Objectron paper, the loss function used in the two-stage pipeline is "Per vertex MSE normalized on diagonal edge length". I am trying to understand the different parts of this sentence. Could you share the equation or pseudo-code corresponding to this? It would help make the computation explicit. Thanks a lot!