PINTO0309 / MobileNet-SSD-RealSense

[High Performance / MAX 30 FPS] RaspberryPi3(RaspberryPi/Raspbian Stretch) or Ubuntu + Multi Neural Compute Stick(NCS/NCS2) + RealSense D435(or USB Camera or PiCamera) + MobileNet-SSD(MobileNetSSD) + Background Multi-transparent(Simple multi-class segmentation) + FaceDetection + MultiGraph + MultiProcessing + MultiClustering
https://qiita.com/PINTO
MIT License

Each Stick run different model #30


MaduJoe commented 5 years ago

[Required] Your device (RaspberryPi3, LaptopPC, or other device name):
RaspberryPi3 B+, NCS2 x 4
[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name):
armv7l
[Required] Your OS (Raspbian, Ubuntu1604, or other os name):
Raspbian
[Required] Details of the work you did before the problem occurred:

I have four NCS2 sticks and I'm trying to run a different model on each neural stick independently.

For example, the first stick would run face detection, the next one emotion recognition, and the third one image classification. Is this possible? I saw your MultiModel (FaceDetection, EmotionRecognition), which is a merged model, but what I want to do is what I described above.
I really appreciate your project. Thanks.

DogukanAltay commented 5 years ago

From https://software.intel.com/en-us/articles/transitioning-from-intel-movidius-neural-compute-sdk-to-openvino-toolkit :

Multiple NCS Devices
The NCSDK provided an API to enumerate all NCS devices in the system and let the application programmer run inferences on specific devices. With the OpenVINO™ toolkit Inference Engine API, the library itself distributes inferences to the NCS devices based on device load, so that logic does not need to be included in the application.

The key points when creating an OpenVINO™ toolkit application for multiple devices using the Engine API are:

- The application in general doesn't need to be concerned with specific devices or managing the workloads for those devices.
- The application should create a single PlugIn instance using the device string "MYRIAD". This plugin instance handles all "MYRIAD" devices in the system for the application. The NCS and Intel® NCS 2 are both "MYRIAD" devices as they are both based on versions of the Intel® Movidius™ Myriad™ VPU.
- The application should create an ExecutableNetwork instance for each device in the host system for maximum performance. However, there is nothing in the API that ties an ExecutableNetwork to a particular device.
- Multiple Inference Requests can be created for each ExecutableNetwork. These requests can be processed by the device with a level of parallelization that best works with the target devices. For Intel® NCS 2 devices, four inference requests for each Executable Network are the optimum number to create if your application is sensitive to inference throughput.

@MaduJoe As mentioned above, you don't need to explicitly assign a model to a particular NCS2; the Inference Engine distributes the work across the "MYRIAD" devices for you.
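To illustrate the flow the Intel article describes, here is a conceptual pure-Python mock -- NOT the real `openvino.inference_engine` API. The class and method names (plugin, ExecutableNetwork, infer-request slots) and the model names only mirror the concepts in the bullet points above: one plugin instance owns every "MYRIAD" device, each model gets its own ExecutableNetwork, and the application never picks a specific stick.

```python
# Conceptual mock of the multi-device flow described above.
# Not the real OpenVINO API; names and model files are illustrative only.
from collections import deque
from itertools import cycle


class MyriadPlugin:
    """Single plugin instance that owns every MYRIAD device.

    The application never addresses an individual stick; the plugin
    decides where each inference runs (here: simple round-robin as a
    stand-in for the library's load-based scheduling).
    """

    def __init__(self, num_devices):
        self.devices = cycle(range(num_devices))

    def load_network(self, model_name, num_requests=4):
        # One ExecutableNetwork per model; four requests per network is
        # the throughput-optimal count quoted above for NCS2.
        return ExecutableNetwork(model_name, self, num_requests)


class ExecutableNetwork:
    """One compiled model with a pool of reusable infer-request slots."""

    def __init__(self, model_name, plugin, num_requests):
        self.model_name = model_name
        self.plugin = plugin
        self.requests = deque(range(num_requests))

    def infer(self, frame):
        slot = self.requests[0]
        self.requests.rotate(-1)            # reuse request slots in turn
        device = next(self.plugin.devices)  # the plugin picks the stick
        return (self.model_name, slot, device, frame)


# Four NCS2 sticks, one plugin, three different models -- mirroring the
# questioner's setup (model names are hypothetical).
plugin = MyriadPlugin(num_devices=4)
face = plugin.load_network("face-detection")
emotion = plugin.load_network("emotion-recognition")
classify = plugin.load_network("image-classification")

print(face.infer("frame0"))
print(emotion.infer("frame0"))
print(classify.infer("frame0"))
```

The point of the sketch is the shape of the API, not the scheduling policy: the application only creates networks and submits requests, and device assignment stays inside the plugin, which is exactly why no per-stick model pinning is needed.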