Closed · 2050airobert closed this issue 2 years ago
@2050airobert ,
You can convert the model to tflite format and then run inference with our tflite-vx-delegate. For the 3399pro, we support uint8_asymm quantized models natively, so you don't have to convert it to the Rockchip format; tflite format is fine.
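As a side note on "uint8_asymm": this refers to asymmetric (affine) quantization, where a float value x maps to q = clamp(round(x / scale) + zero_point, 0, 255). A minimal illustrative sketch (not from this thread; the scale/zero-point values are made-up examples):

```python
# Asymmetric (affine) uint8 quantization, as in "uint8_asymm":
#   q = clamp(round(x / scale) + zero_point, 0, 255)

def quantize_uint8_asymm(x, scale, zero_point):
    """Quantize one float value to uint8 with an affine mapping."""
    q = round(x / scale) + zero_point
    return max(0, min(255, q))  # clamp into the uint8 range

def dequantize_uint8_asymm(q, scale, zero_point):
    """Recover an approximate float value from its uint8 code."""
    return (q - zero_point) * scale

# Example: represent floats in roughly [-1.0, 1.0] with scale=2/255, zp=128
scale, zp = 2.0 / 255.0, 128
q = quantize_uint8_asymm(0.5, scale, zp)      # 192
x = dequantize_uint8_asymm(q, scale, zp)      # ~0.502
```

The zero_point is what makes the scheme asymmetric: it lets the uint8 range cover float intervals that are not centered on zero.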
Thanks
@sunshinemyson The RK3399Pro supports uint8 quantized models produced by PyTorch QAT (quantization-aware training), right? If I have a uint8 model quantized with the PyTorch tooling, how can I run it directly on the 3399Pro NPU (a VeriSilicon NPU, as in the A311D)?
Can anyone help? Thanks.
@2050airobert @sunshinemyson Does the RK3399Pro's NPU use a VeriSilicon IP core? Is there a full list of chips that use VeriSilicon's IP cores?
@2050airobert ,
If you have a uint8 model from PyTorch, you can convert it to tflite with the AcuityLite tool, then run it with vx-delegate + tim-vx.
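Once you have the converted .tflite file, running it through the delegate might look like the sketch below. This is an assumption about a typical deployment, not code from this thread: the library name `libvx_delegate.so` and the use of `tflite_runtime` are placeholders you would adjust for your board, and the fallback logic is factored out so it can be exercised without the NPU.

```python
# Hedged sketch: run a converted .tflite model through the VX delegate,
# falling back to plain CPU inference if the delegate library is missing.
# "libvx_delegate.so" is an assumed path; adjust it for your board.

def pick_delegates(load_delegate, delegate_path="libvx_delegate.so"):
    """Try to load the VX delegate; return [] (CPU fallback) if the
    shared library is missing or fails to load."""
    try:
        return [load_delegate(delegate_path)]
    except Exception:
        return []

def make_interpreter(model_path, delegate_path="libvx_delegate.so"):
    """Build a tflite Interpreter, offloading ops to the VX delegate
    when available. Requires tflite_runtime on the target board."""
    from tflite_runtime.interpreter import Interpreter, load_delegate
    return Interpreter(
        model_path=model_path,
        experimental_delegates=pick_delegates(load_delegate, delegate_path),
    )
```

Injecting `load_delegate` into `pick_delegates` keeps the fallback decision testable on a host machine that has neither the delegate library nor an NPU.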
Most VSI customers don't publicize their chips to third-party developers. Besides RK and AMLogic (VIM3), an NXP 8MP dev kit is available at https://detail.tmall.com/item.htm?spm=a230r.1.14.3.412e177d828y5B&id=653946586608&ns=1&abbucket=19.
1. You mean that not only can PyTorch .pth/.pt models be converted to TF models, but a PyTorch uint8 model can even be converted to tflite?
2. Are you sure about that?
3. Are there any further problems in the conversion process when converting a PyTorch uint8 model to a tflite model? Could you share more successful cases or internal test cases?
@2050airobert ,
PyTorch model -> ONNX model -> tflite model is the only path by which we can support PyTorch models with tim-vx today. I cannot guarantee that every model converts successfully, but we can fix failures if you share the failing case.
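The path above might be scripted roughly as follows. This is a hedged sketch of one common toolchain, not the thread author's exact procedure: the ONNX -> TF -> tflite legs here use `onnx-tf` and `tflite_convert` as assumed stand-ins (AcuityLite, mentioned earlier, is another option), and command construction is kept separate from execution so the steps can be inspected without the tools installed.

```python
# Hedged sketch of the PyTorch -> ONNX -> TFLite conversion path.
# The first leg is done in Python with torch.onnx.export(model,
# dummy_input, "model.onnx"); the remaining legs are CLI calls below.
import shutil
import subprocess

def conversion_commands(onnx_path="model.onnx",
                        saved_model_dir="model_tf",
                        tflite_path="model.tflite"):
    """CLI steps for ONNX -> TF SavedModel -> TFLite (paths are
    hypothetical placeholders)."""
    return [
        ["onnx-tf", "convert", "-i", onnx_path, "-o", saved_model_dir],
        ["tflite_convert",
         "--saved_model_dir=" + saved_model_dir,
         "--output_file=" + tflite_path],
    ]

def run_conversion(**kwargs):
    """Execute each conversion step, failing early if a tool is absent."""
    for cmd in conversion_commands(**kwargs):
        if shutil.which(cmd[0]) is None:
            raise RuntimeError(cmd[0] + " is not installed")
        subprocess.run(cmd, check=True)
```

Splitting command construction from execution also makes it easy to swap in a different converter for one leg (for example AcuityLite) without touching the rest of the pipeline.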
Thanks.