BerkeleyAutomation / gqcnn

Python module for GQ-CNN training and deployment with ROS integration.
https://berkeleyautomation.github.io/gqcnn

Enabling OpenVINO #93

Open sharronliu opened 4 years ago

sharronliu commented 4 years ago

This PR enables OpenVINO™ technology for GQCNN. The original GQCNN models are converted to OpenVINO™ models and deployed across various Intel devices for inference.

The evaluated OpenVINO™ models are GQCNN-4.0-SUCTION, GQCNN-4.0-PJ, GQ-Suction, GQ-Bin-Picking-Eps90, and GQ-Image-Wise. The evaluation is done in the same way as in the replication scripts under scripts/policies/.

I tested with an Intel NUC6i7KYK on CPU and GPU. A list of supported device types is available here.
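For reference, the device plugins exposed by a local OpenVINO installation can be listed with the Inference Engine Python API (a minimal snippet; the exact names returned depend on the hardware attached):

```python
from openvino.inference_engine import IECore

ie = IECore()
# e.g. ['CPU', 'GPU'] on a NUC6i7KYK, plus 'MYRIAD' if a Neural Compute Stick
# is plugged in.
print(ie.available_devices)
```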

To replicate my results, the GQCNN OpenVINO™ models are available here for testing purposes. These models were converted from the original pre-trained GQCNN models. Untar them from the gqcnn root: tar -zxvf gqcnn.models.OpenVINO.20191119.tgz. Alternatively, you can convert the GQCNN models yourself by following the tutorials in this PR.
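For anyone converting the models themselves, a rough sketch of driving the Model Optimizer from Python is shown below; the install path, frozen-graph filename, and output directory are placeholders, and the GQCNN-specific input options are covered by the tutorial in this PR.

```python
import subprocess

# Default Linux install path for OpenVINO 2020.x (adjust for your setup).
MO_TF = "/opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py"
FROZEN_GRAPH = "models/GQCNN-4.0-PJ/frozen_graph.pb"  # hypothetical path

subprocess.run(
    ["python3", MO_TF,
     "--input_model", FROZEN_GRAPH,
     "--data_type", "FP16",      # MYRIAD needs FP16; FP32 is fine for CPU
     "--output_dir", "models/openvino/GQCNN-4.0-PJ"],
    check=True)
```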

sharronliu commented 4 years ago

RandomUniform and Floor ops are shown in the TensorBoard visualization: [screenshot: random_uniform_floor]

visatish commented 4 years ago

@sharronliu Okay, I think I know what's going on, and you're right: the RandomUniform and Floor ops are coming from the dropout layers. Our earlier experimentation with the NCS used the FC-GQ-CNN, in which all of the layers are convolutional (the fully-connected ones are converted at inference time) and have no dropout applied afterwards. We never actually tried to port the original GQ-CNN and thus never ran into this issue.

Like you mentioned, the correct solution seems to be to remove the dropout layers from the inference graph. One option is to never include them in the first place; the other is to post-process the graph definition and bypass those layers. I'm leaning towards the latter; however, I'm a bit busy at the moment and will probably only get to it later this weekend. I'll let you know how it goes!
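A rough sketch of that second option (not the PR's code): post-process the frozen GraphDef and short-circuit each dropout block so RandomUniform and Floor never reach the converter. It assumes the TF 1.x tf.nn.dropout naming scheme seen in the screenshot above, where the block's output node is "<scope>/dropout/mul" and its input is the first input of the sibling "<scope>/dropout/div" node.

```python
import tensorflow as tf


def bypass_dropout(graph_def, keyword="dropout"):
    """Return a copy of `graph_def` with dropout blocks short-circuited."""
    nodes = {n.name: n for n in graph_def.node}

    # Map each dropout output ("<scope>/dropout/mul") to the tensor that
    # originally fed the block (first input of "<scope>/dropout/div").
    remap = {}
    for name in nodes:
        if name.endswith(keyword + "/mul"):
            div = nodes.get(name[: -len("mul")] + "div")
            if div is not None:
                remap[name] = div.input[0]

    stripped = tf.compat.v1.GraphDef()
    for node in graph_def.node:
        # Drop every node that lives inside a dropout name scope.
        if "/" + keyword + "/" in "/" + node.name + "/":
            continue
        new_node = stripped.node.add()
        new_node.CopyFrom(node)
        # Rewire consumers of a dropout output to the block's original input.
        del new_node.input[:]
        new_node.input.extend(remap.get(inp, inp) for inp in node.input)
    return stripped
```

A GraphDef cleaned this way can then be serialized and handed to the Model Optimizer without hitting the RandomUniform/Floor errors.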

Thanks, Vishal

sharronliu commented 4 years ago

Vishal, thanks for your time and quick reply. I just added feedback to your previous comment about having an automated freeze().

Some explanation on the NCSDK, which is now part of OpenVINO: this means installing OpenVINO supports deployment on all of these devices (CPU, GPU, MYRIAD, FPGA). This PR (https://github.com/BerkeleyAutomation/gqcnn/pull/93) was also tested with MYRIAD X (with the batch size forced to 1). By the way, I didn't mention FPGA in the YAML configuration file since I don't have the device to test it. For anyone who has worked with the NCAPI, the OpenVINO Inference Engine API is very easy to pick up, and more Python example code is available here.
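For illustration, loading one of the converted IRs with the Inference Engine Python API looks roughly like this (model paths are placeholders, and this uses the 2020.x API; as noted above, the batch size is forced to 1 on MYRIAD):

```python
import numpy as np
from openvino.inference_engine import IECore

MODEL_XML = "models/openvino/GQCNN-4.0-PJ.xml"  # hypothetical paths
MODEL_BIN = "models/openvino/GQCNN-4.0-PJ.bin"
DEVICE = "MYRIAD"  # or "CPU", "GPU", "HDDL", "FPGA"

ie = IECore()
net = ie.read_network(model=MODEL_XML, weights=MODEL_BIN)

# The MYRIAD X test only worked with batch size 1, so force it there.
if DEVICE == "MYRIAD":
    net.batch_size = 1

exec_net = ie.load_network(network=net, device_name=DEVICE)

# Feed zero tensors of the expected shapes just to confirm the model runs.
dummy = {name: np.zeros(info.input_data.shape, dtype=np.float32)
         for name, info in net.input_info.items()}
result = exec_net.infer(inputs=dummy)
print({name: out.shape for name, out in result.items()})
```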

Regarding your question on FCGQCNN: I originally planned to support FCGQCNN as well, but I ran into some issues when freezing the graph. Until I can produce a frozen FCGQCNN, this PR is only tested with GQCNN so far.

sharronliu commented 4 years ago

@visatish, I'm making some minor changes to this PR based on the latest OpenVINO release, version 2020.2. In this version of OpenVINO the Floor layer is supported, while RandomUniform is not, because that layer is considered something that should not be present in a deployment network. I will try to drop this layer from the frozen network.
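As a quick sanity check for that (the frozen-graph filename below is hypothetical), the frozen GraphDef can be scanned for any remaining RandomUniform nodes before it goes through the 2020.2 Model Optimizer:

```python
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with open("frozen_gqcnn.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

leftover = [n.name for n in graph_def.node if n.op == "RandomUniform"]
print("RandomUniform nodes still in the graph:", leftover or "none")
```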