GreatZeZuo closed this issue 2 months ago.
Hi @GreatZeZuo, thank you for your interest in LightGlue-ONNX.
Dynamic shapes are supported. The default options in `python dynamo.py export` already assume dynamic image sizes.
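For reference, here's a generic sketch (not the repo's `dynamo.py`) of how dynamic H/W axes are typically declared when exporting a PyTorch model to ONNX; the stand-in `Conv2d` model, file name, and tensor names are placeholders for demonstration:

```python
# Generic illustration only (not the repo's dynamo.py): marking H and W as dynamic
# axes when exporting a PyTorch model to ONNX.
import torch

model = torch.nn.Conv2d(1, 8, kernel_size=3, padding=1)  # placeholder for the real pipeline
dummy = torch.randn(1, 1, 512, 512)                       # dummy grayscale input

torch.onnx.export(
    model,
    (dummy,),
    "example.onnx",                                        # placeholder output path
    input_names=["image"],
    output_names=["features"],
    dynamic_axes={"image": {2: "height", 3: "width"}},     # H and W stay free at runtime
)
```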
If you're exporting with a static input shape, the only restriction imposed is that both H and W be integer multiples of 8. Technically this isn't a required condition; it's just something I noticed (SuperPoint actually truncates to the nearest smaller multiple of 8 internally, so I figured it'd be better to expose this restriction to the user). Plus, certain operations like conv generally perform better when the input dimensions are clean multiples of 8.
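If you need to prepare inputs for a static-shape export, a minimal helper along these lines (my own sketch, not part of the repo) would trim H and W down to multiples of 8, mirroring SuperPoint's internal truncation:

```python
# Assumed helper (not from the repo): trim an image so H and W become integer
# multiples of 8, matching the truncation to the nearest smaller multiple.
import numpy as np

def trim_to_multiple_of_8(image: np.ndarray) -> np.ndarray:
    h, w = image.shape[:2]
    return image[: (h // 8) * 8, : (w // 8) * 8]

print(trim_to_multiple_of_8(np.zeros((483, 641))).shape)  # (480, 640)
```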
Appreciate your quick reply. BTW, I'm trying the C++ compilation right now and I'm running into a problem: `ONNXRuntime environment created failed : Could not find an implementation for MultiHeadAttention(1) node with name '/matcher/transformers.0/self_attn/MultiHeadAttention'`. Do you know how to solve it? Thanks again.
MultiHeadAttention is a contrib op available for the ORT CPU & CUDA execution providers. I think it's an issue with your ORT version/platform.
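If it helps, a quick sanity check on the Python side (sketch only; the model path is an assumption) can confirm that the exported model loads and the contrib op resolves, which helps separate a model problem from a C++ ORT build/version problem:

```python
# Hedged sanity check (not from the repo): load the exported model with the Python
# onnxruntime package before debugging the C++ build.
import onnxruntime as ort

print(ort.__version__)                 # contrib ops require a reasonably recent release
print(ort.get_available_providers())   # e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider']

session = ort.InferenceSession(
    "superpoint_lightglue.onnx",       # placeholder path to the exported model
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print([inp.name for inp in session.get_inputs()])
```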
Hi, I'm a newbie and have this question: LightGlue's Python model can handle image inputs of different sizes, so why does the converted ONNX model only support inputs like 512 or 1024? What should I do if I want to support different input sizes? I would appreciate a quick reply!