Closed: henryruhs closed this issue 1 month ago
What exactly are you trying to do at a high level?
The overridable initializers are initializers that can be overridden at runtime, which requires the initializer to also be a graph input. Any initializer that doesn't have a matching graph input is constant.
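That rule can be sketched in plain Python (the function and names here are illustrative, not onnxruntime's implementation): an initializer is overridable only when a graph input with the same name exists.

```python
# Illustrative sketch: an initializer is overridable at runtime only if
# it is also declared as a graph input; otherwise it is treated as constant.
def split_initializers(initializer_names, graph_input_names):
    inputs = set(graph_input_names)
    overridable = [n for n in initializer_names if n in inputs]
    constant = [n for n in initializer_names if n not in inputs]
    return overridable, constant

# "state" is both an initializer and a graph input -> overridable;
# "weight" has no matching graph input -> constant.
overridable, constant = split_initializers(
    ["weight", "state"], ["image", "state"]
)
print(overridable)  # ['state']
print(constant)     # ['weight']
```

This also explains why get_overridable_initializers() often returns an empty list: most models never declare their initializers as graph inputs.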
Excuse me? Did you not read the issue description and use case?
It's not clear from the description what you're trying to do with the initializer given those are typically used inside the model in a specific node.
There are use cases where they may be overridden at runtime (e.g. provide state from a previous run) which is what the overridable initializers are for.
> It's not clear from the description what you're trying to do with the initializer given those are typically used inside the model in a specific node.
Not sure what I'm missing to make it clearer. Examples and the use case (InsightFace) are given.
> There are use cases where they may be overridden at runtime (e.g. provide state from a previous run) which is what the overridable initializers are for.
We do not want to load the model twice, as this causes a huge performance impact if not cached: once via onnxruntime and again via onnx.load() just to get the initializer node. Rather than only get_overridable_initializers(), please expose all model.graph.initializer entries via the onnxruntime API.
There are no guarantees that initializers will remain how/where you want them. They may get packed into a different layout that is more efficient for execution during session initialization or copied to a different device. Given that, an API to return the initializer data would be problematic.
Note also that the existing get_overridable_initializers API returns the names and data types, not the initializer data.
Two options:
1) Update the model so you don't need to use the embedding outside of it, as that usage seems to happen immediately before calling the model: https://github.com/deepinsight/insightface/blob/036b551071b2acff737715d8c193d43e8d5807be/python-package/insightface/model_zoo/inswapper.py#L51-L52
2) Store the embedding using the external data format so that it is in a separate file, minimizing the cost of needing two copies of the data: https://onnx.ai/onnx/api/external_data_helper.html
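The idea behind option 2 (keep the large embedding tensor in its own binary file so it can be read directly, instead of parsing a whole model to reach one initializer) can be sketched with the stdlib only; for real ONNX models this is what onnx.external_data_helper handles:

```python
import os
import struct
import tempfile

# Illustrative sketch of the external-data idea: persist a float32 tensor
# to a separate file once, then load just that file whenever it is needed.
def save_embedding(path, values):
    with open(path, "wb") as f:
        f.write(struct.pack(f"<{len(values)}f", *values))

def load_embedding(path):
    with open(path, "rb") as f:
        data = f.read()
    return list(struct.unpack(f"<{len(data) // 4}f", data))

tmp = os.path.join(tempfile.mkdtemp(), "emb.bin")
save_embedding(tmp, [0.5, -1.0, 2.0])
embedding = load_embedding(tmp)
print(embedding)  # [0.5, -1.0, 2.0]
```

The file path, helper names, and the round-tripped values are made up for the sketch; the point is that a standalone tensor file avoids a second full model load.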
Thanks for the clarification and your patience to explain the underlying architecture.
I'm gonna close the issue for now, as there seems to be no room to open the API for such usage, or rather for direct model.graph access.
Describe the feature request
There is get_overridable_initializers() in the InferenceSession, but it mostly returns an empty list. In order to get the actual initializer I have to use onnx.load() and load the model twice. On top of that, the result needs to be cached, as the performance is bad when constantly loading a model and extracting an initializer.

Describe scenario use case
I'm using inswapper.onnx, which needs an initializer to create the source face embedding for every frame. Our current solution is a cached method to handle it. I'd like to get rid of that and just be able to access this initializer via a get_available_initializers() method which simply exposes model.graph.initializer.

Original InsightFace code: https://github.com/deepinsight/insightface/blob/036b551071b2acff737715d8c193d43e8d5807be/python-package/insightface/model_zoo/inswapper.py#L16-L18
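The cached workaround described above can be sketched with functools.lru_cache; load_initializer here is a hypothetical stand-in for the expensive onnx.load() call, with a counter showing the model is only parsed once across repeated frames:

```python
from functools import lru_cache

load_count = 0

@lru_cache(maxsize=None)
def load_initializer(model_path):
    # Stand-in for: onnx.load(model_path).graph.initializer
    # The real call parses the entire model, so caching matters per frame.
    global load_count
    load_count += 1
    return f"initializer-from-{model_path}"

# Repeated frames reuse the cached result; the expensive load runs once.
for _ in range(3):
    init = load_initializer("inswapper.onnx")

print(init)        # initializer-from-inswapper.onnx
print(load_count)  # 1
```

This is the pattern the issue wants to avoid; an onnxruntime API exposing the initializer directly would make the cache unnecessary.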