fabio-sim / LightGlue-ONNX
ONNX-compatible LightGlue: Local Feature Matching at Light Speed. Supports TensorRT, OpenVINO
Apache License 2.0 · 375 stars · 34 forks
Issues
#98 Conversion of LightGlue's model to TensorRT format with trtexec failed (opened by emmmmm196 1 week ago · 0 comments)
#97 Long inference time when using TensorrtExecutionProvider (opened by noahzn 1 week ago · 0 comments)
#96 Could you tell me your versions of TensorRT and cuDNN? I encountered compatibility issues (closed · XL634663985 · 3 weeks ago · 2 comments)
#95 TracerWarning: torch.tensor results are registered as constants in the trace (opened by zhj296409022 1 month ago · 1 comment)
#94 feat: Add OpenVINO (closed · fabio-sim · 1 month ago · 0 comments)
#93 OpenVINO (closed · pinnintipraneethkumar · 1 month ago · 2 comments)
#92 match extractor_type (closed · Fantesim · 1 month ago · 1 comment)
#91 feat: pure TensorRT example (closed · fabio-sim · 1 month ago · 0 comments)
#90 LightGlue produces wrong matches after TensorRT optimization (closed · PeppeFacoltativo · 1 month ago · 2 comments)
#89 QKV error when keypoint number is small (closed · biggiantpigeon · 1 month ago · 1 comment)
#88 ONNX vs .pth performance (closed · JaouadROS · 2 months ago · 3 comments)
#87 feat: DISK end-to-end batch pipeline (closed · fabio-sim · 4 months ago · 0 comments)
#86 Match indices out of range (closed · llschloesser · 3 months ago · 2 comments)
#85 Convert 2.0 end-to-end parallel ONNX file to TensorRT engine (closed · EvW1998 · 1 month ago · 4 comments)
#84 fix: BC torch version < 2.4 (closed · fabio-sim · 4 months ago · 0 comments)
#83 Support for different input sizes (closed · GreatZeZuo · 1 month ago · 3 comments)
#82 feat: Dynamic batch (closed · fabio-sim · 4 months ago · 0 comments)
#81 Keypoints from SuperPoint (closed · l333wang · 1 month ago · 3 comments)
#80 The possibility of supporting batch input (closed · noahzn · 4 months ago · 12 comments)
#79 ONNX support for LightGlue-SIFT (opened by lvfengkun 5 months ago · 2 comments)
#78 Error when using TensorrtExecutionProvider for ONNX models (closed · noahzn · 5 months ago · 18 comments)
#77 Convert SuperPoint from ONNX to engine (closed · midskymid · 4 months ago · 1 comment)
#76 False positive keypoints on uniform images (closed · DavideCatto · 5 months ago · 3 comments)
#75 UnsupportedOperatorError: Exporting the operator 'aten::scaled_dot_product_attention' to ONNX opset version 17 (closed · BayRanger · 6 months ago · 3 comments)
#74 Use providers argument for extractor (closed · davidtvs · 7 months ago · 1 comment)
#73 LightGlue (closed · Ddd195 · 4 months ago · 0 comments)
#72 How to convert ONNX into an RKNN model? (closed · ouxiand · 6 months ago · 8 comments)
#71 Adding support for SIFT (opened by demplo 7 months ago · 3 comments)
#70 ".trt.onnx" export example (closed · WYKXD · 5 months ago · 1 comment)
#69 ALIKED support (opened by mug1wara26 8 months ago · 2 comments)
#68 Problems deploying on device, such as Snapdragon SNPE (closed · guker · 5 months ago · 1 comment)
#67 Error when building the end2end ONNX file (closed · najingligong1111 · 4 months ago · 1 comment)
#66 Internal Error (/lightglue/ArgMax) (closed · chenscottus · 7 months ago · 4 comments)
#65 Error when converting TensorRT engine model (closed · long-senpai · 4 months ago · 4 comments)
#63 Converting a trained model to ONNX (closed · ikaftan · 7 months ago · 4 comments)
#62 Result different from the original repository? (closed · 1191658517 · 9 months ago · 3 comments)
#61 Running inference throws a CUDA exception (closed · laxnpander · 4 months ago · 1 comment)
#60 Running inference using exported models in C++ is very unstable/non-deterministic (closed · will-kudan · 7 months ago · 2 comments)
#59 The output shape of LightGlue's ONNX model is dynamic. Does TensorRT support dynamic output? (closed · weihaoysgs · 11 months ago · 4 comments)
#58 Building the SuperPoint ONNX file to a TensorRT engine failed: Error (Could not find any implementation for node {ForeignNode[/Flatten.../Transpose_3]}) (closed · weihaoysgs · 11 months ago · 5 comments)
#57 feat: LightGlue TensorRT engine inference (closed · fabio-sim · 11 months ago · 0 comments)
#56 Does LightGlue run in TensorRT mode, and how to build the engine using the C++ interface? (closed · weihaoysgs · 11 months ago · 6 comments)
#55 ALIKED (closed · sushi31415926 · 11 months ago · 2 comments)
#54 Support dynamic batch size (closed · WalkerWen · 11 months ago · 1 comment)
#53 Jetson (closed · sushi31415926 · 7 months ago · 1 comment)
#52 [softMaxV2Runner.cpp::execute::226] Error Code 1: Cask (shader run failed) (closed · fighterzzzh · 7 months ago · 2 comments)
#51 NOT_IMPLEMENTED: Non-zero status code returned while running MultiHeadAttention node (closed · dmoti · 1 year ago · 3 comments)
#50 Running infer.py on a Jetson AGX takes 19 s to complete; is something wrong with my environment? Does TensorRT have to be version 8.6? (closed · Ddd195 · 7 months ago · 1 comment)
#49 How do I export without specifying an image size (closed · SpenceraM · 1 year ago · 1 comment)
#47 optim: topk trick (closed · fabio-sim · 1 year ago · 0 comments)