When I try to convert a PyTorch .pth model to ONNX, I get the warnings below. The conversion succeeds, but the inference results are wrong. How can I modify the code? Thank you so much.
TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator copy_ (possibly due to an assignment). This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
pred_boxes[..., 0] = pred_boxes[..., 0] + x.data + self.gridx
TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator copy_ (possibly due to an assignment). This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
pred_boxes[..., 2] = torch.exp(w.data) * self.anchor_w
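I suspect the fix is to avoid the in-place slice assignments (and the .data reads, which can bake stale values into the trace) and instead build the box tensor out-of-place. Here is a minimal sketch of what I mean, assuming the usual YOLO decode layout; the argument names, grid_x/anchor_w shapes, and the dummy sizes below are my assumptions, since I have not shown the full module:

import torch

def decode_boxes(x, y, w, h, grid_x, grid_y, anchor_w, anchor_h):
    # Trace-friendly YOLO-style decode: pure out-of-place ops only,
    # no slice assignment and no .data reads, so the tracer records
    # every value it actually uses.
    bx = x + grid_x                   # predicted x offset + cell index
    by = y + grid_y                   # predicted y offset + cell index
    bw = torch.exp(w) * anchor_w      # anchor prior scaled by predicted w
    bh = torch.exp(h) * anchor_h      # anchor prior scaled by predicted h
    # Stack along a new last axis instead of writing into pred_boxes[..., i].
    return torch.stack((bx, by, bw, bh), dim=-1)

# Smoke test with assumed shapes: batch=1, 3 anchors, 13x13 grid.
t = lambda: torch.rand(1, 3, 13, 13)
print(decode_boxes(t(), t(), t(), t(), t(), t(), t(), t()).shape)
# torch.Size([1, 3, 13, 13, 4])

Would replacing the two assignments above with something like this make the exported ONNX graph match the PyTorch results?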