Hi @SkalskiP! Can I work on this issue? I think I can implement this in the way you want without modifying `update_with_tensors`, and in a pretty simple way.

Hi @rolson24 👋🏻! Feel free to submit a PR. You will make me very happy.

The solution for this issue was implemented in https://github.com/roboflow/supervision/pull/1035. I am closing the issue. Thanks, @rolson24! 🔥
Description
- `update_with_detections` returns the predicted position of boxes, not their actual coordinates received from the detector. Many users have complained about the deterioration of box quality when using ByteTrack. (#743)
- ByteTrack does not work with segmentation models because masks are not transferred to the `update_with_detections` output.
- The `Detections.data` field is lost after passing through `update_with_detections`.

All these issues can be resolved by changing the logic in `update_with_detections`. Instead of mapping values obtained from `update_with_tensors` to new `Detections` objects, we should use IoU to map the results of `update_with_tensors` to the input `Detections` objects. This way, the input `xyxy` coordinates and the input state of the `mask` and `data` fields will be preserved.

For this purpose, we can utilize the already existing `box_iou_batch` function. The matching procedure has been demonstrated in one of our videos on YouTube; a rough sketch is included below.

Additional
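A minimal sketch of the IoU-based matching described above, assuming the output of `update_with_tensors` has already been unpacked into an `(N, 4)` array of tracked boxes in `xyxy` format and an `(N,)` array of track ids. `match_tracks_to_detections`, `tracked_xyxy`, and `tracked_ids` are hypothetical names used only for illustration, not part of the library, and the actual PR may structure this differently.

```python
import numpy as np
import supervision as sv
from supervision.detection.utils import box_iou_batch


def match_tracks_to_detections(
    detections: sv.Detections,
    tracked_xyxy: np.ndarray,      # (N, 4) boxes reported by the tracker, xyxy format
    tracked_ids: np.ndarray,       # (N,) tracker ids aligned with tracked_xyxy
    iou_threshold: float = 0.5,
) -> sv.Detections:
    """Map tracker output back onto the original detections via IoU (sketch)."""
    # No detections or no tracks -> return an empty slice of the original
    # Detections so the field layout (mask, data, ...) is kept.
    if len(detections) == 0 or len(tracked_xyxy) == 0:
        return detections[np.array([], dtype=int)]

    # Pairwise IoU: rows correspond to tracked boxes, columns to input detections.
    iou = box_iou_batch(tracked_xyxy, detections.xyxy)

    best_det = iou.argmax(axis=1)                            # best input detection per track
    best_iou = iou[np.arange(len(tracked_xyxy)), best_det]
    keep = best_iou > iou_threshold                          # drop weak / spurious matches

    # Index the ORIGINAL detections, so xyxy, mask and data are preserved as-is;
    # only tracker_id is taken from the tracker output.
    matched = detections[best_det[keep]]
    matched.tracker_id = tracked_ids[keep]
    return matched
```

Note that a complete implementation would also need to enforce one-to-one assignment (for example, greedily by descending IoU), so that two tracks cannot claim the same input detection.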