I got some questions about the code:
You compute the mean after the loss has already been computed, but the loss functions you implemented always return a scalar, so what is the purpose of calling `.mean()` here?: https://github.com/xingyizhou/CenterNet/blob/1085662179604dd4c2667e3159db5445a5f4ac76/src/lib/trains/base_trainer.py#L70
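My guess is that this is for `nn.DataParallel`: each GPU replica returns its own scalar loss, and PyTorch gathers them into a 1-D tensor of length `num_gpus`, so `.mean()` reduces that back to a scalar (and is a no-op in the single-GPU case). Here is a minimal sketch of that behavior with a toy module (not the repo's actual classes):

```python
import torch
import torch.nn as nn

class ToyModelWithLoss(nn.Module):
    """Toy module that returns a 0-dim (scalar) loss, like the repo's
    loss functions do. Purely illustrative."""
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(8, 1)

    def forward(self, x):
        out = self.linear(x)
        return ((out - 1.0) ** 2).mean()  # scalar loss

model = ToyModelWithLoss()
x = torch.randn(4, 8)

loss = model(x)
print(loss.shape)  # torch.Size([]) -- already a scalar, .mean() is a no-op

if torch.cuda.device_count() > 1:
    # Under DataParallel, each replica returns its own scalar loss and
    # the gather step stacks them into a 1-D tensor of length num_gpus,
    # so .mean() is needed to reduce it back to a scalar.
    parallel_model = nn.DataParallel(model.cuda())
    loss = parallel_model(x.cuda())
    print(loss.shape)  # torch.Size([num_gpus])

loss = loss.mean()  # safe in both the single- and multi-GPU case
```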
In your code you never convert the numpy arrays created in the datasets into PyTorch tensors. Does the `DataLoader` (or something else) do an automatic conversion?
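For what it's worth, I tried to check this with a toy dataset: the `DataLoader`'s default collate function does convert numpy arrays (even inside dicts) into batched tensors. A minimal sketch, with illustrative keys and shapes rather than the repo's actual ones:

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class NumpyDataset(Dataset):
    """Toy dataset that returns plain numpy arrays, as the repo's datasets do."""
    def __len__(self):
        return 4

    def __getitem__(self, idx):
        return {'hm': np.zeros((80, 128, 128), dtype=np.float32),
                'reg': np.zeros((128, 2), dtype=np.float32)}

loader = DataLoader(NumpyDataset(), batch_size=2)
batch = next(iter(loader))
print(type(batch['hm']))   # <class 'torch.Tensor'> -- default_collate converted it
print(batch['hm'].shape)   # torch.Size([2, 80, 128, 128])
```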
I think that `max_objs` used in the datasets should be equal to `K`, the number of peaks extracted from the heatmaps at inference. For example, in the `coco` dataset `max_objs` is set to 128 (meaning that if an image has more than 128 annotations, the extra ones are discarded), while `K` defaults to 100. Therefore, if I have an image with more than 100 annotations, some of them will be discarded at decoding time no matter what. Is my example correct?
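To make the mismatch concrete, here is a toy sketch of the two caps (names and shapes are illustrative, not copied from the repo):

```python
import torch

# Training side: target arrays are pre-allocated with max_objs slots
# per image, so any annotation past that cap simply has no slot
# (shapes loosely follow the coco dataset defaults).
max_objs = 128
num_anns = 150                           # image with 150 annotations
wh = torch.zeros(max_objs, 2)            # width/height regression targets
reg_mask = torch.zeros(max_objs)         # marks which slots hold a real object
for k in range(min(num_anns, max_objs)):
    wh[k] = torch.rand(2)
    reg_mask[k] = 1
print(int(reg_mask.sum()))               # 128 -- annotations 129..150 are dropped

# Inference side: decoding keeps only the K highest-scoring heatmap
# peaks, so at most K objects can ever be returned per image.
K = 100
heat = torch.rand(1, 80, 128, 128)       # (batch, classes, H, W) heatmap
scores, inds = torch.topk(heat.view(1, -1), K)
print(scores.shape)                      # torch.Size([1, 100])
```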