Open tesorrells opened 2 years ago
You could try adding this:

```python
if isinstance(o, torch.Tensor):
    o = o.cpu().numpy()
```

into `yolor/utils/plots.py` at `output_to_target`. The final function will look like this:
```python
def output_to_target(output, width, height):
    # Convert model output to target format [batch_id, class_id, x, y, w, h, conf]
    if isinstance(output, torch.Tensor):
        output = output.cpu().numpy()
    targets = []
    for i, o in enumerate(output):
        if o is not None:
            if isinstance(o, torch.Tensor):
                o = o.cpu().numpy()
            for pred in o:
                box = pred[:4]
                w = (box[2] - box[0]) / width
                h = (box[3] - box[1]) / height
                x = box[0] / width + w / 2
                y = box[1] / height + h / 2
                conf = pred[4]
                cls = int(pred[5])
                targets.append([i, cls, x, y, w, h, conf])
    return np.array(targets)
```
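As a quick sanity check of the two ideas in the fix (the tensor-to-numpy guard and the xyxy-to-normalized-xywh conversion), here is a minimal, self-contained sketch. The helper name `to_numpy` and the sample prediction values are hypothetical, not part of the yolor codebase; the import of `torch` is made optional only so the snippet runs anywhere:

```python
import numpy as np

try:
    import torch
except ImportError:  # allow the sketch to run without torch installed
    torch = None

def to_numpy(o):
    # Mirrors the guard in the fix: move GPU tensors to CPU memory
    # before doing numpy arithmetic on them
    if torch is not None and isinstance(o, torch.Tensor):
        o = o.cpu().numpy()
    return np.asarray(o)

# Hypothetical single prediction in [x1, y1, x2, y2, conf, cls] format
pred = np.array([[32.0, 64.0, 96.0, 128.0, 0.9, 3.0]])
width, height = 640, 480

box = to_numpy(pred)[0, :4]
w = (box[2] - box[0]) / width    # box width, normalized: 64/640 = 0.1
h = (box[3] - box[1]) / height   # box height, normalized: 64/480 ≈ 0.1333
x = box[0] / width + w / 2       # box centre x, normalized: 0.1
y = box[1] / height + h / 2      # box centre y, normalized: 0.2
print(x, y, w, h)
```

Without the guard, calling `.numpy()` indirectly on a CUDA tensor raises `TypeError: can't convert cuda:0 device type tensor to numpy`, which is why both conversion sites in `output_to_target` matter.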
I found that this code has already been corrected in the source, but mine still reports this error.
Look closely: there are 2 places that require the change, and only 1 of them has been corrected. It is working for me on the paper branch.
@aliencaocao which other place requires this change
I found it via the stack trace; I can't remember exactly where now, but it should be obvious from the stack trace.
Not sure if anyone else has run into this, but when using the distributed GPU option and a batch size over 30, I get this error.