kmn1024 opened this issue 2 months ago
@kmn1024 Hi, thank you for the issue. Could you provide your custom PyTorch model, or the model in OpenVINO IR after conversion (the .xml and .bin files)? Update: if possible, please provide a link to storage hosting the model.
Thanks for looking, @allnes!
The .bin file: https://mega.nz/file/FalykSAS#IgHmpV_LGO56U1Cdeh2ko9Ggkj7hp9uiw9oyQI9ZAtM
The .xml is attached in the original post (as decoder2-openvino-xml.txt)
Thanks for the model. I will get back to you when I have some results.
@kmn1024 Hi! Could you provide your model conversion script? Your model (the .xml and .bin) appears to have an internal defect.
# Prepare pytorch model
...
decoder_model.eval()

import openvino as ov
ov_model = ov.convert_model(
    decoder_model,
    input={
        'd': ov.PartialShape([1, -1, 640]),
        't_en': ov.PartialShape([1, 512, -1]),
        'pred_aln_trg': ov.PartialShape([-1, -1]),
        's': ov.Shape([1, 128]),
        'ref': ov.Shape([1, 128]),
    },
    example_input=(d, t_en, pred_aln_trg, s, ref))
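For completeness, a minimal sketch of how the converted model can be saved to IR and sanity-checked on the desktop CPU. The file name decoder2.xml is an assumption, and d, t_en, pred_aln_trg, s, ref are the same example tensors passed to convert_model above (assumed to be torch tensors):

# Save the IR (.xml + .bin) and run a quick smoke test on the desktop CPU.
ov.save_model(ov_model, "decoder2.xml")   # also writes decoder2.bin next to it

core = ov.Core()
compiled = core.compile_model("decoder2.xml", "CPU")
outputs = compiled([x.detach().cpu().numpy() for x in (d, t_en, pred_aln_trg, s, ref)])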
@allnes Can you please let me know if the above is what you need to help you debug, or do you need something else?
I can also do a bit of debugging on my end, if you can guide me on where to look.
@kmn1024 Alas, I could not reproduce this case, so I would like to ask you to build the OpenVINO library with the DEBUG level enabled and send us a stack trace of the network inference crash.
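As a lighter-weight first step (not a substitute for the debug-build native stack trace requested above), Python's built-in faulthandler can at least show the Python-level location of the crash, since it dumps a traceback when the process receives a fatal signal such as SIGSEGV. A sketch, with decoder2.xml as an assumed IR file name:

import faulthandler
faulthandler.enable()   # dump a traceback to stderr on SIGSEGV/SIGABRT/SIGBUS/SIGFPE

import openvino as ov
print(ov.get_version())  # confirm which aarch64 build is installed

core = ov.Core()
compiled = core.compile_model("decoder2.xml", "CPU")
# ... run the inference call that crashes; the faulthandler output shows where it died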
OpenVINO Version
2024.1.0
Operating System
Other (Please specify in description)
Device used for inference
CPU
Framework
None
Model used
Custom (a version of Hifi-GAN)
Issue description
This model is a version of Hifi-GAN with some customizations. I converted it from PyTorch on my desktop (an Intel CPU running Ubuntu 22.04) with openvino-2024.1.0-15008-cp310-cp310-manylinux2014_x86_64.whl, following these instructions: https://github.com/openvinotoolkit/openvino/blob/74829b1ad22fdc5cd915bd0ec1bba5a4c20cfe08/docs/articles_en/openvino-workflow/model-preparation.rst#convert-a-model-with-python-convert_model
On the desktop, the model loads (ov.compile_model) and infers perfectly fine. However, if I move the model to an ARM-based edge computer (an Orange Pi 5, which has an A76+A55 CPU) with openvino-2024.1.0-15008-cp312-cp312-manylinux_2_31_aarch64.whl.metadata installed, the model loads but inference crashes. Stack trace of the crash:
I have attached the xml portion of the saved model: decoder2-openvino-xml.txt
Step-by-step reproduction
Difficult. The .bin portion of the saved model is about 90MB, so I cannot upload it. Please let me know if this is absolutely required.
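For reference, a minimal sketch of the call that crashes on the ARM device. The file name, the float32 dtype, and the concrete sizes substituted for the dynamic dimensions are illustrative assumptions; the real inputs come from the rest of the application:

import numpy as np
import openvino as ov

core = ov.Core()
compiled = core.compile_model("decoder2.xml", "CPU")   # decoder2.bin alongside

# Dummy inputs matching the shapes declared at conversion time; T stands in
# for the dynamic dimensions and is an arbitrary illustrative value.
T = 100
inputs = {
    "d": np.random.randn(1, T, 640).astype(np.float32),
    "t_en": np.random.randn(1, 512, T).astype(np.float32),
    "pred_aln_trg": np.random.randn(T, T).astype(np.float32),
    "s": np.random.randn(1, 128).astype(np.float32),
    "ref": np.random.randn(1, 128).astype(np.float32),
}
outputs = compiled(inputs)   # this is the inference call that crashes on the Orange Pi 5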
Relevant log output
No response
Issue submission checklist