Closed: blueskywwc closed this issue 3 years ago
Hi,
ultralytics's implementation of Letterbox uses an OpenCV function which cannot be traced or scripted, so I'm curious how you implemented LetterboxImage?
Made some changes based on your code:
std::vector<float> LetterboxImage(const cv::Mat& src, cv::Mat& dst, const cv::Size& input_size) {
    float src_h = src.rows;  // restored: the snippet used src_h/src_w without defining them
    float src_w = src.cols;
    float in_h = input_size.height;
    float in_w = input_size.width;
    float scale = std::min(in_w / src_w, in_h / src_h);
    std::cout << "scale:" << scale << std::endl;

    int mid_h = static_cast<int>(std::round(src_h * scale));
    int mid_w = static_cast<int>(std::round(src_w * scale));
    int dw = in_w - mid_w;
    int dh = in_h - mid_h;
    // pad only up to the next multiple of 32, not to the full input size
    int p_w = dw % 32 / 2;
    int p_h = dh % 32 / 2;

    cv::resize(src, dst, cv::Size(mid_w, mid_h));

    int top = static_cast<int>(std::round(p_h - 0.1));
    int bottom = static_cast<int>(std::round(p_h + 0.1));
    int left = static_cast<int>(std::round(p_w - 0.1));
    int right = static_cast<int>(std::round(p_w + 0.1));
    std::cout << "tblr:" << top << " " << bottom << " " << left << " " << right << std::endl;

    cv::copyMakeBorder(dst, dst, top, bottom, left, right, cv::BORDER_CONSTANT, cv::Scalar(114, 114, 114));

    std::vector<float> pad_info{static_cast<float>(left), static_cast<float>(top), scale};
    return pad_info;
}
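For reference, the padding arithmetic above can be checked without OpenCV. Below is my own minimal Python sketch that mirrors the C++ arithmetic (the name letterbox_params is hypothetical, not from the repo), showing what the modified function produces for a 1280x720 source aimed at a 640x640 input:

```python
def letterbox_params(src_w, src_h, in_w, in_h):
    """Mirror of the C++ arithmetic above: scale to fit, then pad
    only up to the next multiple of 32 (dw % 32), not to in_w/in_h."""
    scale = min(in_w / src_w, in_h / src_h)
    mid_w = round(src_w * scale)           # resized width
    mid_h = round(src_h * scale)           # resized height
    p_w = (in_w - mid_w) % 32 // 2         # half padding per side
    p_h = (in_h - mid_h) % 32 // 2
    out_w = mid_w + round(p_w - 0.1) + round(p_w + 0.1)
    out_h = mid_h + round(p_h - 0.1) + round(p_h + 0.1)
    return scale, out_w, out_h

# A 1280x720 source aimed at 640x640: scale 0.5, resized to 640x360,
# then padded only to 640x384 (the next multiple of 32), not 640x640.
print(letterbox_params(1280, 720, 640, 640))  # (0.5, 640, 384)
```

This is exactly why the output size varies with the source aspect ratio, which matters for the traced-model discussion below.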
After modification: img_input.size: 640 x 480
Hi @blueskywwc, oops, I'm not the author of this awesome repo :)
I guess the problem is caused by the variable input image size.
EDIT: The output image size of ultralytics's Letterbox is variable, but a TorchScript model exported by torch.jit.trace only recognizes a fixed input size. Replacing ultralytics's torch.jit.trace in export.py with torch.jit.script might be one approach, I guess.
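To make the trace-vs-script difference concrete, here is my own generic illustration (not the YOLOv5 model): torch.jit.trace records only the operations executed for the example input, so any data- or shape-dependent branch is frozen, while torch.jit.script compiles the Python source and keeps the branch dynamic:

```python
import torch

def f(x):
    # data-dependent branch: trace freezes whichever path the example takes
    if x.sum() > 0:
        return x + 1
    return x - 1

traced = torch.jit.trace(f, torch.ones(3))  # example input takes the "+1" path
scripted = torch.jit.script(f)              # compiles both branches

neg = -torch.ones(3)
print(traced(neg)[0].item())    # 0.0: the "+1" branch was baked in at trace time
print(scripted(neg)[0].item())  # -2.0: the branch is re-evaluated
```

The same freezing effect is why a traced export behaves as if it only accepts the image size seen during tracing.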
Yes, I just saw this post: https://github.com/ultralytics/yolov5/issues/1406 I'm preparing to try it.
Yep, two modifications may be sufficient: set model.model[-1].export = False, and change torch.jit.trace to torch.jit.script.
EDIT: there may be an error in their implementation of torch.cat.
model.model[-1].export = False  # keep the Detect() layer in eager mode (export=False)
y = model(img)  # dry run

# TorchScript export
try:
    print('\nStarting TorchScript export with torch %s...' % torch.__version__)
    f = opt.weights.replace('.pt', '.torchscript.pt')  # filename
    # ts = torch.jit.trace(model, img)
    ts = torch.jit.script(model, img)  # note: torch.jit.script does not use example inputs
    ts.save(f)
    print('TorchScript export success, saved as %s' % f)
except Exception as e:
    print('TorchScript export failure: %s' % e)
After modification:

TorchScript export failure: Tried to access nonexistent attribute or method 'add' of type 'torch.utils.activations.Hardswish'. Did you forget to initialize an attribute in __init__()?:
File "/home/wangwc/PycharmProjects/paper_yolov5/utils/activations.py", line 19
    def forward(x):
        return x * F.hardtanh(x + 3, 0., 6.) / 6.  # for torchscript, CoreML and ONNX
        ~~~~~ <--- HERE
I remember this error. torch.jit.trace can export successfully, but it doesn't support a variable input image size? (please point it out if I am wrong here)
ultralytics's original implementation cannot be exported with torch.jit.script, and for now they don't support this method; there is a third-party implementation in my own repo. I'm not sure it is the "right" implementation.
OK, I'll keep trying and update later
And let's hear more thoughts from @yasenh.
Thanks @zhiqwang @blueskywwc. Like zhiqwang mentioned above, a TorchScript model exported by torch.jit.trace only recognizes a fixed size, so a workaround could be:
python models/export.py --weights yolov5s.pt --img 640 480 --batch 1
Yes, you are right, the problem has been solved. Thanks @yasenh @zhiqwang. Have you found that the C++ inference time is longer than Python's? What could be the reason?
One more question here: the output image size of LetterBox is variable, how do you handle this problem?
All my input images have a fixed size, so the output image size of LetterBox is fixed. You can adjust LetterBox (Python) and LetterBox (C++) to be consistent to ensure the same prediction results; I don't know of any better solution.
All my input images have a fixed size, so the output image size of LetterBox is fixed.
Got it, thanks. I confused it with the autoshape function in ultralytics.
@blueskywwc I noticed the same issue, but haven't found the root cause yet. Even the inference step alone (without pre/post-processing) is slower. Let me know if you figure out anything interesting.
@yasenh If tensor_img = tensor_img.permute({0, 3, 1, 2}).contiguous(); is changed to tensor_img = tensor_img.permute({0, 3, 1, 2});, preprocessing time is reduced by almost half (just my test, please check whether it is correct). No useful findings on the inference step yet; I'm still trying.
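For context on why dropping .contiguous() speeds up preprocessing, here is my own Python illustration (the libtorch C++ tensor API behaves the same way): permute only creates a view with swapped strides, while contiguous() allocates a new buffer and copies the whole HWC image into CHW order:

```python
import torch

img = torch.zeros(1, 480, 640, 3)         # NHWC, as read from OpenCV
chw = img.permute(0, 3, 1, 2)             # a view: strides change, no data copied
copy = chw.contiguous()                   # allocates and reorders the full buffer

print(chw.data_ptr() == img.data_ptr())   # True: same underlying storage
print(copy.data_ptr() == img.data_ptr())  # False: a full copy was made
print(chw.is_contiguous())                # False: strides are permuted
```

Whether skipping the copy keeps results correct depends on whether every downstream op accepts non-contiguous input, which would explain why it needs verification.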
@blueskywwc I didn't notice an obvious time reduction. Did you test in sync mode? E.g.:
CUDA_LAUNCH_BLOCKING=1 ./libtorch-yolov5 --source ../images/bus.jpg --weights ../weights/yolov5s.torchscript.pt --gpu
The time on the GPU is similar, but the time on the CPU is significantly reduced; you can try it.
Hello, thank you very much for your open source project, it helped me a lot. I have a question: when the model input image size is 640x640, the accuracy of the prediction results changes and the inference time becomes longer; so I modified LetterboxImage (referring to the Python version) to get a model input image size of 640x480, but the following error is reported:
terminate called after throwing an instance of 'std::runtime_error'
what(): The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
File "code/torch/models/yolo.py", line 45, in forward
    _35 = (_4).forward(_34, )
    _36 = (_2).forward((_3).forward(_35, ), _29, )
    _37 = (_0).forward(_33, _35, (_1).forward(_36, ), )