keremnymn opened this issue 3 years ago
@keremnymn the default quantization backend doesn't support ConvTranspose; please set QUANTIZATION.BACKEND to "qnnpack".
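For reference, the equivalent override in a d2go yaml config might look like the following (a sketch; only the key path QUANTIZATION.BACKEND is taken from the message above):

```yaml
# Hedged sketch: switch the quantization backend away from the default,
# which lacks ConvTranspose support.
QUANTIZATION:
  BACKEND: "qnnpack"
```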
@wat3rBro after changing that parameter, running the same code as above gives another error:

File "/home/pim/anaconda3/envs/d2/lib/python3.8/site-packages/d2go/export/api.py", line 100, in convert_and_export_predictor
    assert not fuse_utils.check_bn_exist(pytorch_model)
AssertionError
Is there some bn setting that needs to be different for export as well?
edit:
Changing the FBNET_V2 norm to "bn" and the others to "BN" (from https://github.com/facebookresearch/d2go/issues/39) does not help either.
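For context, here is a hedged sketch of what a BN-existence check like `fuse_utils.check_bn_exist` presumably does before export: walk the module tree and report any unfused BatchNorm layers. The tiny stand-in classes below replace the real `torch.nn` modules; only the function name comes from the traceback, the implementation is an assumption.

```python
# Illustrative only: minimal stand-ins for torch.nn modules.
class Module:
    def __init__(self, *children):
        self.children = list(children)

    def modules(self):
        # Depth-first walk over the module tree, like nn.Module.modules().
        yield self
        for child in self.children:
            yield from child.modules()

class Conv(Module): pass
class BatchNorm(Module): pass

def check_bn_exist(model):
    # Hypothetical reconstruction: True if any BatchNorm is still present,
    # i.e. BN was not fused into the preceding conv before export.
    return any(isinstance(m, BatchNorm) for m in model.modules())

model = Module(Conv(), BatchNorm())  # BN not yet fused
print(check_bn_exist(model))         # True -> the export assertion would fire
```

If this is roughly what the real check does, the AssertionError means some BatchNorm layers in the model were not fused, which matches the head not being built with the FBNet builder as noted below.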
@SuijkerbuijkP I see, I think it's because the roi mask head is not built with FBNet builder and currently quantization is incompatible with that head, we're working on a fix.
I can reproduce @SuijkerbuijkP's findings.
After changing cfg.MODEL.MASK_ON = False so that the roi mask head is not used, the script seems to hang indefinitely, both with and without cfg.QUANTIZATION.BACKEND = "qnnpack".
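The two overrides tried above can be sketched as follows. This is a minimal stand-in, not the real API: `SimpleNamespace` replaces the actual d2go CfgNode, and only the attribute paths come from the thread.

```python
from types import SimpleNamespace

# Stand-in for the d2go CfgNode with the thread's two relevant keys.
cfg = SimpleNamespace(
    MODEL=SimpleNamespace(MASK_ON=True),
    QUANTIZATION=SimpleNamespace(BACKEND="fbgemm"),
)

cfg.MODEL.MASK_ON = False             # skip the roi mask head entirely
cfg.QUANTIZATION.BACKEND = "qnnpack"  # backend change suggested earlier
```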
Hi @wat3rBro,
Just writing here as I'm stuck with the same "Per channel weight observer is not supported yet for ConvTranspose{n}d" error too.
I fine-tuned a mask_rcnn_fbnetv3a_C4.yaml model on my custom COCO-style segmentation dataset, and I hit the error while trying to export it as an Int8 model.
@smahesh2694 Hi, we've updated mask_rcnn_fbnetv3a_C4.yaml (https://github.com/facebookresearch/d2go/commit/477ab964e2165cb586b5c00425f6e463d7edeadd) and it should now work with quantization using qnnpack. There's also a test for it: https://github.com/facebookresearch/d2go/blob/2366ab940d6d87cc2b03f8a6c97d5fc9aed56c62/tests/modeling/test_meta_arch_rcnn.py#L39-L58
Hi @wat3rBro, it is hard to test, since patch_d2_meta_arch can't be imported on the current HEAD of master.
I have the same issue when exporting with mask_rcnn_fbnetv3g_fpn.yaml. How can I solve this?
I've trained a custom Mask R-CNN model and I'm trying to export it to TorchScript. I have a 'model_final.pth' file. This is the code I'm trying (I don't even know if this is correct for custom training):

The error I'm getting: