gaocegege opened this issue 4 years ago (Open)
/cc @simon-cj
Emmm, except for TensorRT and PMML, the others are verified; their signatures can be extracted directly. PMML should be fine in theory. For TensorRT, I need to analyze further.
OK, we can sync the progress here.
Do we need to get the signature for TRT plan? I think it is only used for UI. If we cannot do it without running the model server, can we claim that we do not support TRT plan signature extraction?
Same question for PMML.
PMML needs signature extraction to get the params, e.g. model inputs and outputs. For TRT it is not clear yet; it needs further discussion and is planned for klever 1.7.0.
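For reference, a rough sketch of what PMML extraction could look like (not the actual klever implementation; the pmml_signature helper and the filtering logic are placeholders): since PMML is plain XML, the field names can be read straight from the document without starting a scoring engine.

```python
import xml.etree.ElementTree as ET

def pmml_signature(path):
    """Rough sketch: list field names from a PMML file without a scoring engine."""
    root = ET.parse(path).getroot()
    # PMML documents are namespaced per spec version, e.g. http://www.dmg.org/PMML-4_3
    ns = root.tag[1:root.tag.index("}")] if root.tag.startswith("{") else ""
    def q(tag):
        return "{%s}%s" % (ns, tag) if ns else tag
    # DataDictionary lists every field; MiningSchema / Output could be used
    # to separate active inputs from targets more precisely.
    inputs = [f.get("name") for f in root.iter(q("DataField"))]
    outputs = [f.get("name") for f in root.iter(q("OutputField"))]
    return inputs, outputs
```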
SGTM
@simon-cj Is there any progress? I did not see the logic for extracting signatures from a TRT plan. Then can we claim that we do not need to run a model server to extract signatures?
Does "model inference server" mean TRTIS (Triton)? The TRT plan has some constraints:
Note: The generated plan files are not portable across platforms or TensorRT versions. Plans are specific to the exact GPU model they were built on (in addition to the platforms and the TensorRT version) and must be re-targeted to the specific GPU in case you want to run them on a different GPU. https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html
So if we want to extract its signatures, we need that specific environment (matching GPU, platform, and TensorRT version).
```python
import tensorrt as trt

# `engine` is a deserialized trt.ICudaEngine loaded from the plan file.
for binding in engine:
    print('Is INPUT:', engine.binding_is_input(binding),
          'DIMS:', binding, engine.get_binding_shape(binding),
          'DTYPE:', trt.nptype(engine.get_binding_dtype(binding)))
```
OUTPUT:

```
Is INPUT: True DIMS: data (3, 224, 224) DTYPE: <class 'numpy.float32'>
Is INPUT: False DIMS: mobilenetv20_output_flatten0_reshape0 (1000,) DTYPE: <class 'numpy.float32'>
```
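For completeness, a minimal sketch of how the `engine` above would be obtained (assuming the TensorRT Python bindings and a placeholder `model.plan` path); deserializing the plan is exactly the step that requires a compatible GPU, platform, and TensorRT version:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserializing a plan only works in an environment compatible with the
# one that built it, so this cannot run inside a generic extraction job.
with open("model.plan", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
```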
/assign @simon-cj
Is there any update?
/assign @judgeeeeee Please implement the extraction scripts; after that, I will integrate them with klever-model-registry.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
When we run the model conversion jobs, we currently have to set up a real model inference server first, which may not be necessary. We should investigate whether we can get the signatures directly, similar to saved_model_cli or other such tools.
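For example (a sketch only, with a hypothetical model directory; saved_model_utils is a TensorFlow-internal module used by saved_model_cli itself), SavedModel signatures can be read from the serialized protobuf without starting any server:

```python
from tensorflow.python.tools import saved_model_utils

# Read signature defs straight from the SavedModel protobuf (no serving
# process). The directory and tag set here are placeholders.
meta_graph = saved_model_utils.get_meta_graph_def("/models/resnet/1", "serve")
for name, sig in meta_graph.signature_def.items():
    print(name, list(sig.inputs.keys()), list(sig.outputs.keys()))
```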
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?: