Looking forward to seeing you support LoFTR soon
@xmba15 Me too. I also think LoFTR gives better results, and I strongly encourage you to integrate it.
@xmba15 Me too. I also find that LoFTR and LISRD are more stable and give better results!
Please stay tuned. WIP.
First attempt to convert kornia LoFTR's torch weights to onnx did not succeed: the line
feat_f0_unfold = F.unfold(feat_f0, kernel_size=(W, W), stride=stride, padding=W // 2)
raises TypeError("iteration over a 0-d tensor").
This probably relates to the discussion here: https://github.com/kornia/kornia/issues/1504
and this PR: https://github.com/kornia/kornia/pull/1758
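For reference, a minimal sketch of the kind of export attempt that hits this error; the pretrained-weight choice, input shapes, output names, and opset below are assumptions, and the actual conversion lives in scripts/loftr/convert_to_onnx.py:

import torch
from kornia.feature import LoFTR

# Hypothetical, simplified export attempt; not the repo's actual script.
model = LoFTR(pretrained="indoor").eval()
dummy = {"image0": torch.randn(1, 1, 480, 640),
         "image1": torch.randn(1, 1, 480, 640)}
torch.onnx.export(
    model,
    (dummy, {}),                 # trailing empty dict so `dummy` is passed positionally, not as kwargs
    "loftr.onnx",
    input_names=["image0", "image1"],
    output_names=["keypoints0", "keypoints1", "confidence"],  # assumed names
    opset_version=16,            # opset is a guess
)
# During tracing, the F.unfold call quoted above is where the
# TypeError("iteration over a 0-d tensor") is raised.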
That sounds exciting! Hope you can make it.
@xmba15 Have you solved the problem? I hit the same error and cannot get past it.
@xmba15 did you solve it?
Hello, I found that LoFTR is not as effective as LISRD when matching a large image against a small one, because LoFTR expects both inputs to have the same size, batched as e.g. [2, 1, 480, 640]. Can you support LISRD? Also, could you set up an Alipay payment option?
This problem occurred when converting to the .onnx file:
onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from /data01/pot/onnx_runtime_cpp/loftr.onnx failed:This is an invalid model. Type Error: Type 'tensor(bool)' of input parameter (onnx::Mul_2684) of operator (Mul) in node (Mul_2221) is invalid.
File "scripts/loftr/convert_to_onnx.py", line 54, in main sess = onnxruntime.InferenceSession("/data01/pot/onnx_runtime_cpp/loftr.onnx", providers=['CPUExecutionProvider']) File "/data01/software/anaconda3/envs/pytorch1.12/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 347, in __init__ self._create_inference_session(providers, provider_options, disabled_optimizers) File "/data01/software/anaconda3/envs/pytorch1.12/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 384, in _create_inference_session sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
[2, 1, 480, 640]
LoFTR can accept inputs of any shape, as long as the dimensions are divisible by 8.
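For example, a minimal sketch (using OpenCV in Python; the helper name is mine, and the C++ example in this repo may preprocess differently) of padding a grayscale image so both dimensions become divisible by 8 before feeding it to LoFTR:

import cv2
import numpy as np

def pad_to_multiple_of_8(gray: np.ndarray) -> np.ndarray:
    # Zero-pad on the bottom/right so height and width become multiples of 8.
    h, w = gray.shape[:2]
    new_h, new_w = (h + 7) // 8 * 8, (w + 7) // 8 * 8
    return cv2.copyMakeBorder(gray, 0, new_h - h, 0, new_w - w,
                              cv2.BORDER_CONSTANT, value=0)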
Type 'tensor(bool)' of input parameter (onnx::Mul_2684) of operator (Mul) in node (Mul_2221) is invalid.
You need to use the LoFTR repository that I provide as a submodule here:
[submodule "scripts/loftr/LoFTR"]
path = scripts/loftr/LoFTR
url = https://github.com/xmba15/LoFTR
There are quite a few tricks involved with LoFTR, so you basically need to follow all the instructions I have added in the README.
Since more people may still struggle, I will post another easy way to convert the LoFTR weights using docker:
cd onnx_runtime_cpp
git submodule update --init --recursive
docker pull xmba15/onnx_runtime_cpp:v1.10.0-ubuntu20.04
docker run --rm -it -v `pwd`:/workspace xmba15/onnx_runtime_cpp:v1.10.0-ubuntu20.04
python3 -m pip install -r scripts/loftr/requirements.txt
python3 scripts/loftr/convert_to_onnx.py --model_path /path/to/indoor_ds_new.ckpt
Running without docker is fine, but if you have trouble setting up your own environment, running the above commands inside docker may save you some time.
Or download the ONNX weights from here: https://github.com/xmba15/onnx_runtime_cpp/releases/tag/0.0.3
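Either way, a quick sanity check that the exported model loads and runs with onnxruntime can look like the sketch below; the input shape is just an example, and you should inspect sess.get_inputs()/get_outputs() for the real names and shapes:

import numpy as np
import onnxruntime

sess = onnxruntime.InferenceSession("loftr.onnx", providers=["CPUExecutionProvider"])
print([i.name for i in sess.get_inputs()], [o.name for o in sess.get_outputs()])
# Feed random grayscale images whose dimensions are divisible by 8.
feeds = {i.name: np.random.rand(1, 1, 480, 640).astype(np.float32) for i in sess.get_inputs()}
outputs = sess.run(None, feeds)
print([o.shape for o in outputs])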
Thanks, I'll try it.
Here we go. #39
Wow, that's great! Can you provide an Alipay payment code?
More about the performance of LoFTR's indoor-trained weights: https://github.com/xmba15/onnx_runtime_cpp/issues/40
LoFTR can be better than (SuperPoint + SuperGlue), but I don't think the same is true for LISRD versus (SuperPoint + SuperGlue). I had also intended to support LoFTR at some point.