Hi,
first of all thanks for the great tutorial!
I think in video.py, line 134 and detect.py, line 164:
scaling_factor = torch.min(416/im_dim,1)[0].view(-1,1)
a hard-coded value is used for the image resolution (see the standard value of the resolution parameter). When I used this implementation with my own YOLOv3 net, the bounding boxes were not drawn in the proper locations, because I had set the resolution parameter to 960.
Changing the lines to:
scaling_factor = torch.min(int(args.reso)/im_dim,1)[0].view(-1,1)
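For context, here is a minimal sketch (plain Python, with hypothetical names, not the repo's actual code) of why the scale must match the network's real input size. The detector letterboxes the image into an `inp_dim` x `inp_dim` canvas, so mapping boxes back to the original image has to undo that same scale and padding; using 416 when the network actually ran at 960 undoes the wrong transform:

```python
def rescale_box(box, im_w, im_h, inp_dim):
    """Map a box from letterboxed network coordinates back to image coordinates.

    box: (x1, y1, x2, y2) in the inp_dim x inp_dim letterboxed frame.
    """
    # Scale that was applied when the image was resized into the canvas.
    scale = min(inp_dim / im_w, inp_dim / im_h)
    # Padding added on each side to center the resized image.
    pad_x = (inp_dim - scale * im_w) / 2
    pad_y = (inp_dim - scale * im_h) / 2
    x1, y1, x2, y2 = box
    return ((x1 - pad_x) / scale, (y1 - pad_y) / scale,
            (x2 - pad_x) / scale, (y2 - pad_y) / scale)

# 1920x1080 image, network actually run at inp_dim=960:
ok = rescale_box((480, 270, 720, 540), 1920, 1080, 960)    # inside the image
# Same detection, but rescaled with the hard-coded 416:
off = rescale_box((480, 270, 720, 540), 1920, 1080, 416)   # x1 lands past 1920
```

With the mismatched 416, the recovered x-coordinates fall outside the 1920-pixel-wide image, which is exactly the misplaced-boxes symptom described above.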
solved the problem for me.

Best Regards,
Oliver