Open SyGoing opened 1 year ago
Hello, thank you for your wonderful work. Was the RKNN model you converted int8 or fp16? Since the model consists of a backbone and a neck_head, is it possible to convert the neck_head to int8?

Thank you for your attention, and sorry I didn't see your issue earlier. I used the default hybrid quantization (int8 and float32) on the RV1109, because the RV1109 doesn't support fp16. If the neck_head were converted to int8, I think precision would be lost.
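To illustrate why keeping the neck_head in float32 can matter, here is a minimal NumPy sketch of symmetric per-tensor int8 quantization (a toy model, not the actual RKNN quantizer): head outputs often mix large and tiny values, and a single scale derived from the largest value collapses the small ones to zero.

```python
import numpy as np

def quantize_dequantize_int8(x):
    # Symmetric per-tensor quantization: scale chosen so max|x| maps to 127.
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    # Dequantize back to float to see what survives the round-trip.
    return q.astype(np.float32) * scale

# Head-like tensor: one large logit plus small regression offsets.
x = np.array([10.0, 0.01, 0.003, -0.002], dtype=np.float32)
x_hat = quantize_dequantize_int8(x)
err = np.abs(x - x_hat)

# The large value is preserved, but the small values fall below one
# quantization step (~10/127 ≈ 0.079) and are rounded to exactly 0.
print(x_hat)
print(err)
```

Running this shows the small entries come back as 0.0 (100% relative error), which is the kind of loss hybrid quantization avoids by leaving sensitive layers in float32.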