alibaba / MNN

MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba
http://www.mnn.zone/

Facing issue with version incompatibility between the protobuf library and the MNN library while performing mnnquant quantization using the MNN Python API #2894

Open Nithinholalkere opened 1 month ago

Nithinholalkere commented 1 month ago

python3 mnnquant.py /home/kpit/Incabin_sensing_fy_24_25/Nithin/qnx/src/inferenceAppNew/inceptionV3_fp_32.mnn /home/kpit/Incabin_sensing_fy_24_25/Nithin/qnx/src/inferenceAppOll/test.mnn /home/kpit/Incabin_sensing_fy_24_25/Nithin/qnx/src/imageInputConfig.json
Traceback (most recent call last):
  File "/home/kpit/Incabin_sensing_fy_24_25/Nithin/Alibaba_MNN/MNN/pymnn/pip_package/MNN/tools/mnnquant.py", line 8, in <module>
    import _tools as Tools
ImportError: /home/kpit/anaconda3/lib/python3.11/site-packages/MNN-2.8.3-py3.11-linux-x86_64.egg/_tools.cpython-311-x86_64-linux-gnu.so: undefined symbol: _ZN6google8protobuf8internal14ArenaStringPtr3SetENS2_15NonEmptyDefaultERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPNS0_5ArenaE
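For context, demangling that symbol (a quick check, assuming binutils' c++filt is available) shows it is a protobuf ArenaStringPtr::Set overload taking a std::__cxx11::basic_string, i.e. the extension was built against a protobuf/libstdc++ combination using the C++11 string ABI that the libraries loaded at runtime do not export:

# Demangle the missing symbol from the ImportError (needs binutils)
echo '_ZN6google8protobuf8internal14ArenaStringPtr3SetENS2_15NonEmptyDefaultERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPNS0_5ArenaE' | c++filt

# List the protobuf symbols the MNN _tools extension expects its protobuf library to provide
nm -D --undefined-only /home/kpit/anaconda3/lib/python3.11/site-packages/MNN-2.8.3-py3.11-linux-x86_64.egg/_tools.cpython-311-x86_64-linux-gnu.so | grep protobuf | c++filt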

Nithinholalkere commented 1 month ago

There is no problem with the protobuf version, since the installed version is libprotoc 3.20.3.
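Worth noting: the protoc compiler on PATH and the protobuf runtime that Python actually loads can differ, especially inside an Anaconda environment. A minimal check, assuming both are installed:

protoc --version                                 # compiler on PATH, e.g. "libprotoc 3.20.3"
python3 -c "import google.protobuf as pb; print(pb.__version__, pb.__file__)"   # runtime Python imports
ldconfig -p | grep libprotobuf                   # shared libprotobuf builds visible to the loader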

jxt1234 commented 1 month ago

It seems the C++ standard library is not the same between the builds.
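A sketch for checking that hypothesis with standard tools (the .so path is the one from the traceback above): ldd shows which C++ runtime and protobuf the dynamic loader actually picks, and the GLIBCXX version strings hint at the toolchain the extension was built with.

SO=/home/kpit/anaconda3/lib/python3.11/site-packages/MNN-2.8.3-py3.11-linux-x86_64.egg/_tools.cpython-311-x86_64-linux-gnu.so
ldd "$SO" | grep -E 'stdc\+\+|protobuf'                      # which libstdc++/libprotobuf get loaded at runtime
strings "$SO" | grep -o 'GLIBCXX_[0-9.]*' | sort -u | tail   # libstdc++ versions the build requires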

Nithinholalkere commented 1 month ago

The above issue is resolved. I tried using the pip version of MNN and running mnnquant.py, and it no longer throws the import error. However, it now gives an Aborted (core dumped) error in the middle of quantization.
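For anyone hitting the same ImportError, the switch is roughly this (a sketch; the wheel version is whatever PyPI currently serves):

pip uninstall -y MNN   # remove the locally built egg so it cannot shadow the wheel
pip install MNN        # prebuilt wheel from PyPI, ships a matching _tools extension
mnnquant test.mnn testquant.mnn imageInputConfig.json   # console script installed by the wheel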

mnnquant test.mnn testquant.mnn /home/kpit/Incabin_sensing_fy_24_25/Nithin/qnx/src/imageInputConfig.json
The device support i8sdot:0, support fp16:0, support i8mm: 0
The device support i8sdot:0, support fp16:0, support i8mm: 0
Aborted (core dumped)

jxt1234 commented 1 month ago

You can try using gdb on mnnquant to debug the crash stack. It may be caused by an invalid picture in the image path. At the same time, you can use mnnconvert with --weightQuantBits=8 to quantize only the weights, and then use MNN_LOW_MEMORY to enable dynamic quantization.
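Putting those suggestions together as a sketch (the model and image paths are placeholders, the mnnconvert input-format flags follow MNNConvert's documented usage, the image check assumes ImageMagick's identify is installed, and MNN_LOW_MEMORY is assumed here to be the CMake option of that name):

# 1. Reproduce the crash under gdb and capture the backtrace
gdb --args mnnquant test.mnn testquant.mnn /home/kpit/Incabin_sensing_fy_24_25/Nithin/qnx/src/imageInputConfig.json
#    at the (gdb) prompt: run, then bt after the abort

# 2. Scan the calibration image directory for files that fail to decode
for f in /path/to/calibration/images/*; do
  identify "$f" >/dev/null 2>&1 || echo "bad image: $f"
done

# 3. Weight-only quantization, which needs no calibration images at all
mnnconvert -f ONNX --modelFile model.onnx --MNNModel model_wq8.mnn --weightQuantBits=8

# 4. Build MNN with low-memory mode so the weight-quantized model runs with dynamic quant
cmake .. -DMNN_LOW_MEMORY=ON && make -j$(nproc)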