Open zetyquickly opened 4 years ago
Quantization is not currently supported. We only have internal support, (a very small) part of it open sourced.
Thank you @ppwwyyxx
Does this include PyTorch/Caffe2 modifications, or is it written on top of existing commits? Since it will be open sourced in the future, is there a way to avoid doing duplicate work now? I mean adding such support ourselves.
It does not include pytorch/caffe2 modifications. We would like to open source them eventually, but we likely won't have time to work on this in the near future (the next ~3 months).
btw, https://github.com/sstsai-adl/workshops/tree/master/LPCV_2020/uav_video_challenge contains a quantized CPU model for text detection. It may give an idea of what the model looks like in the end. They were released by a different team for an inference demo only.
Thanks for your concern, I'll add something on that.
For anyone who wants to export a quantized detectron2 model in TorchScript: I would recommend monkey patching the `forward`, `__init__`, and `fuse` functions of the layers used in your meta architecture, e.g. `GeneralizedRCNN`. Then do quantization preparation and fusion, and run QAT or static quantization (see the official docs). After that you can trace the model manually, or use the experimental TorchScript tracing functionality of `detectron2/export`. This should do the trick if you want to deploy on a PC CPU.
But if you are interested in mobile deployment of a model, for now TorchScript is not a working approach.
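The prepare/fuse/convert/trace workflow described above can be sketched on a toy module, assuming PyTorch's eager-mode static quantization API. `TinyBackbone` is a hypothetical stand-in; in practice you would patch the real detectron2 layers (e.g. in `GeneralizedRCNN`) the same way:

```python
import torch
import torch.nn as nn

# Toy stand-in for a detectron2 layer. In the real setup you would monkey-patch
# __init__/forward/fuse on the meta architecture to insert the stubs.
class TinyBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # fp32 -> int8 boundary
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()  # int8 -> fp32 boundary

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = TinyBackbone().eval()
# 1. Fuse conv+relu into a single module (must be done before prepare).
torch.quantization.fuse_modules(model, [["conv", "relu"]], inplace=True)
# 2. Pick a qconfig and insert observers.
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)
# 3. Calibrate with representative data (one random batch here, for brevity).
model(torch.randn(1, 3, 32, 32))
# 4. Convert to a quantized model, then trace it to TorchScript for CPU deploy.
torch.quantization.convert(model, inplace=True)
scripted = torch.jit.trace(model, torch.randn(1, 3, 32, 32))
```

This is only the generic PyTorch side; the detectron2-specific part is making the patched layers traceable in the first place.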
I want to deploy a quantized model on a PC. Have you solved this?
Hi, @zhoujinhai
If you are OK with a TorchScript version of your model, please refer to the reply above; the example of monkey patching I mentioned is available in this repo.
If you are interested in a Caffe2 version of your quantized model, then I'm afraid the fact that this issue is still open means such functionality hasn't been added yet.
Thanks, I will try it.
Have you tried it with a Mask R-CNN model?
Hello, @zhoujinhai
At the link I attached you can see the quantization patching of the `GeneralizedRCNN` class, which is a generalization of the Mask R-CNN architecture.
If you have further questions, please leave this issue's thread and ask them in the mentioned repo; I will try to answer them there.
Hello, have you tried testing a trained model (.pb file) on Linux or Windows 10? I have tested on these platforms and found that the GPU model cannot be run successfully with Caffe2 (C++) on Windows 10.
It doesn't support Windows.
That doesn't seem right. In my tests Windows is supported: the CPU model runs fine, but the GPU model fails on Windows, with an error at `workspace.runnetonce(init_model.pb)`.
I ran it on Ubuntu. As for Windows, I believe the official deployment docs say it is not supported.
I tried on Windows: the CPU model runs, but the GPU model does not. I don't know whether it's a problem with my Caffe2 configuration in Visual Studio, a problem with the model, or whether Windows itself doesn't support running the GPU model.
For GPU you probably need to configure the CUDA dynamic libraries, protobuf, and so on. I haven't done this on Windows specifically.
OK, thanks. For now I don't know how to solve this problem.
You can use PyTorch to check whether the GPU is usable on Windows, and troubleshoot step by step.
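A minimal version of that check, assuming only that PyTorch is installed (it verifies the driver/CUDA setup before blaming the exported model):

```python
import torch

# Step 1: does PyTorch see a CUDA device at all?
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    # Step 2: identify the device and do a tiny round-trip computation on it,
    # which exercises the driver and the CUDA runtime end to end.
    print("Device:", torch.cuda.get_device_name(0))
    x = torch.ones(2, 2, device="cuda")
    print("GPU round-trip sum:", (x + x).sum().item())  # 8.0 if the GPU works
```

If this fails, the problem is the local CUDA setup, not the Caffe2 export.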
I checked: under Python, both PyTorch and Caffe2 can use the GPU. The trouble is that there is no C++ GPU example to try.
When you export the Caffe2 model, there is a parameter that must be set to cuda; the default is cpu.
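The device setting referred to above is presumably `cfg.MODEL.DEVICE`; a hedged sketch of setting it before export, assuming detectron2's `Caffe2Tracer` path (the config path, weights loading, and `inputs` batch are placeholders you must supply):

```python
from detectron2.config import get_cfg
from detectron2.export import Caffe2Tracer
from detectron2.modeling import build_model

cfg = get_cfg()
cfg.merge_from_file("config.yaml")   # hypothetical path to your training config
cfg.MODEL.DEVICE = "cuda"            # export on GPU instead of CPU

model = build_model(cfg)
# ...load trained weights here (e.g. with DetectionCheckpointer), then:
tracer = Caffe2Tracer(cfg, model, inputs)   # inputs: a sample batch in detectron2 format
caffe2_model = tracer.export_caffe2()
caffe2_model.save_protobuf("./output")      # writes model.pb / model_init.pb
```

The exported protobufs then target the device that was active during tracing.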
发自我的iPhone
------------------ Original ------------------ From: yang jinyi <notifications@github.com> Date: Thu,Sep 24,2020 9:39 AM To: facebookresearch/detectron2 <detectron2@noreply.github.com> Cc: zhoujinhai <932853432@qq.com>, Mention <mention@noreply.github.com> Subject: Re: [facebookresearch/detectron2] Caffe2 export of quantized model (#1496)
你可以用pytorch检查windows下gpu是否可用 逐步排查下 发自我的iPhone … ------------------ Original ------------------ From: yang jinyi <notifications@github.com> Date: Wed,Sep 23,2020 8:39 AM To: facebookresearch/detectron2 <detectron2@noreply.github.com> Cc: zhoujinhai <932853432@qq.com>, Mention <mention@noreply.github.com> Subject: Re: [facebookresearch/detectron2] Caffe2 export of quantized model (#1496) Gpu应该要配置cuda动态库还有probuf哪些 具体没在windows下弄过 发自我的iPhone … ------------------ Original ------------------ From: yang jinyi <notifications@github.com> Date: Fri,Sep 18,2020 8:49 AM To: facebookresearch/detectron2 <detectron2@noreply.github.com> Cc: zhoujinhai <932853432@qq.com>, Mention <mention@noreply.github.com> Subject: Re: [facebookresearch/detectron2] Caffe2 export of quantized model (#1496) 我是在ubuntu上运行的 windows我看官方部署文档貌似说过不支持 发自我的iPhone … ------------------ Original ------------------ From: yang jinyi <notifications@github.com> Date: Tue,Sep 15,2020 6:12 PM To: facebookresearch/detectron2 <detectron2@noreply.github.com> Cc: zhoujinhai <932853432@qq.com>, Mention <mention@noreply.github.com> Subject: Re: [facebookresearch/detectron2] Caffe2 export of quantized model (#1496) It doesn't support windows 发自我的iPhone … ------------------ Original ------------------ From: yang jinyi <notifications@github.com> Date: Mon,Sep 14,2020 9:37 AM To: facebookresearch/detectron2 <detectron2@noreply.github.com> Cc: zhoujinhai <932853432@qq.com>, Mention <mention@noreply.github.com> Subject: Re: [facebookresearch/detectron2] Caffe2 export of quantized model (#1496) Hi, @zhoujinhai If you are ok with TorchScript version of your model, so please refer to reply above. The example of monkey patching I've mentioned is available in this repo If you are interested in Caffe2 version of your quantized model then I'm afraid that if this issue still persists means that such functionality haven't been added yet. hello,have you tried to test a trained model(.pb file ) on linux or windows10? 
I have tested on these platform, and finding that gpu model can not be ran successfully on caffe2 (c++) on windows10. — You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub, or unsubscribe It doesn't support windows 发自我的iPhone … ------------------ Original ------------------ From: yang jinyi <notifications@github.com> Date: Mon,Sep 14,2020 9:37 AM To: facebookresearch/detectron2 <detectron2@noreply.github.com> Cc: zhoujinhai <932853432@qq.com>, Mention <mention@noreply.github.com> Subject: Re: [facebookresearch/detectron2] Caffe2 export of quantized model (#1496) Hi, @zhoujinhai If you are ok with TorchScript version of your model, so please refer to reply above. The example of monkey patching I've mentioned is available in this repo If you are interested in Caffe2 version of your quantized model then I'm afraid that if this issue still persists means that such functionality haven't been added yet. hello,have you tried to test a trained model(.pb file ) on linux or windows10? I have tested on these platform, and finding that gpu model can not be ran successfully on caffe2 (c++) on windows10. — You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub, or unsubscribe. 感觉不对啊。我测试了支持windows,用cpu模型可以运行,但使用gpu模型在windows中运行失败。在workspace.runnetonce(init_model.pb)处报错 — You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub, or unsubscribe. 我试了windows上cpu模型可以跑,gpu模型不行,不知是不是我caffe2在vs中的配置问题,还是模型问题,或是本身windows不支持gpu模型运行 — You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub, or unsubscribe. 好的,谢谢。暂时我还不知道怎么解决这个问题 — You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub, or unsubscribe.
I checked: under Python, both PyTorch and Caffe2 can use the GPU. The problem is that there is no C++ GPU example to try.
When you convert the model to Caffe2, there is a parameter that has to be set to cuda; the default is cpu.
I think I've set all of that. The GPU model's device_type is 1, which means it is a GPU model.
Please look into this when you have time; it's beyond my ability. After all, deploying Caffe2 C++ on Windows is quite important.
I want to perform static quantization on the ResNet backbone. I added QuantStub & DeQuantStub and fusion in this file, inspired by torchvision's quantized ResNet. I build my model on CPU and quantize it this way:
import torch
from detectron2.modeling import build_model
from detectron2.checkpoint import DetectionCheckpointer

model1 = build_model(cfg)  # cfg configured for ResNet-101
DetectionCheckpointer(model1, save_dir=cfg.OUTPUT_DIR).resume_or_load('X-101.pth', resume=True)
model = model1.backbone
model.eval()
torch.backends.quantized.engine = 'fbgemm'
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
model.fuse_model()
torch.quantization.prepare(model, inplace=True)
sample_input = torch.rand(1, 3, 224, 224, device='cpu')
model(sample_input)  # calibration pass
torch.quantization.convert(model, inplace=True)
model(sample_input)
But I got an error on the very last line
model(sample_input)
File "/home/anaconda3/envs/d2script/lib/python3.8/site-packages/torch/nn/modules/module.py", line 744, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/anaconda3/envs/d2script/lib/python3.8/site-packages/detectron2/modeling/backbone/resnet.py", line 458, in forward
x = self.stem(x)
File "/home/anaconda3/envs/d2script/lib/python3.8/site-packages/torch/nn/modules/module.py", line 744, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/anaconda3/envs/d2script/lib/python3.8/site-packages/detectron2/modeling/backbone/resnet.py", line 378, in forward
x = self.conv1(x)
File "/home/anaconda3/envs/d2script/lib/python3.8/site-packages/torch/nn/modules/module.py", line 744, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/anaconda3/envs/d2script/lib/python3.8/site-packages/detectron2/layers/wrappers.py", line 76, in forward
x = F.conv2d(
RuntimeError: Could not run 'aten::thnn_conv2d_forward' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::thnn_conv2d_forward' is only available for these backends: [CPU, CUDA, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
Is that because of deformable conv? How can I fix this error?
here's my env and system info:
PyTorch version 1.8
CUDA 11
python 3.8
OS: ubuntu
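For what it's worth, deformable conv is probably not required to reproduce this: eager-mode `convert()` only swaps module types it knows (plain `nn.Conv2d`, etc.), so detectron2's custom Conv2d wrapper in `layers/wrappers.py` most likely stays a float conv and receives the quantized tensor produced by QuantStub, which matches the 'QuantizedCPU backend' failure above. For comparison, here is a toy module of my own (not detectron2 code) where the same flow succeeds because only swappable modules are used:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Toy model using only modules eager-mode quantization can swap."""
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)       # float -> quint8
        x = self.relu(self.conv(x))
        return self.dequant(x)  # quint8 -> float

m = TinyNet().eval()
torch.backends.quantized.engine = 'fbgemm'
m.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.fuse_modules(m, [['conv', 'relu']], inplace=True)
torch.quantization.prepare(m, inplace=True)
m(torch.rand(1, 3, 32, 32))        # calibration pass
torch.quantization.convert(m, inplace=True)
out = m(torch.rand(1, 3, 32, 32))  # runs: conv is now a quantized module
print(type(m.conv).__module__)     # module path contains "quantized"
```

If the backbone must otherwise stay intact, a workaround in the spirit of the monkey-patching recipe mentioned earlier in this thread is to dequantize before unsupported layers, or to patch the wrapper's forward so that convert can handle it.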
btw, https://github.com/sstsai-adl/workshops/tree/master/LPCV_2020/uav_video_challenge contains a quantized CPU model for text detection. It may give an idea of what the model looks like in the end. They were released by a different team for an inference demo only.
Thanks for your concern, I'll add something on that.

For anyone who wants to export a quantized detectron2 model to `TorchScript`: I would recommend monkey-patching the `forward`, `__init__` and `fuse` functions of the layers used in your meta architecture, e.g. `GeneralizedRCNN`. Then do quantization preparation and fusion, followed by QAT or static quantization (see the official docs). After that you can trace the model manually or use the experimental `TorchScript` tracing functionality of `detectron2/export`. That should do the trick if you want to deploy on a PC CPU. But if you are interested in mobile deployment of a model, `TorchScript` is not a working approach for now.
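The monkey-patching pattern described above can be sketched with a toy class; all names here are illustrative, not detectron2 internals:

```python
import types

class Block:
    """Stand-in for a detectron2 layer; NOT real detectron2 code."""
    def __init__(self):
        self.scale = 2

    def forward(self, x):
        return x * self.scale

def quantized_forward(self, x):
    # Wrap the original forward between (placeholder) quant/dequant steps,
    # mimicking the insertion of QuantStub/DeQuantStub.
    x = self._quant(x)
    out = self._orig_forward(x)
    return self._dequant(out)

block = Block()
block._orig_forward = block.forward      # keep a handle to the original
block._quant = lambda x: round(x)        # placeholder for QuantStub
block._dequant = lambda x: float(x)      # placeholder for DeQuantStub
block.forward = types.MethodType(quantized_forward, block)

print(block.forward(3.4))  # 3.4 -> round -> 3 -> *2 -> 6.0
```

In detectron2 you would patch the layer classes rather than a single instance, and use real `torch.quantization.QuantStub`/`DeQuantStub` modules so that `prepare`/`convert` can do their work.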
This repo has a TorchScript example, so TorchScript with quantization works; d2go supports quantization in TorchScript mode.
Hello, everyone.

First of all, I've finally obtained results in my fight with ONNX export of a quantized model built on detectron2 and trained using the PyTorch QAT tools. So if anyone is interested in guidance on this, or in a PR adding such functionality to detectron2, please write me back.

I understand that export of a quantized model using the `detectron2/export` tools might not be supported for now. But there are frequent mentions of INT8 in the code, which is why I'll ask the following. While I do the conversion, after I see:

ONNX export Done. Exported predict_net (before optimizations)

the code logs an error:

The model takes floats as input and returns floats as outputs, inputs are implicitly quantized at the beggining of the network.

Is it possible to properly configure the export API to export a quantized model? Is the `device_option` assertion vital for such exporting?