Open seank-com opened 4 years ago
+1
Being able to add the device directly would enable many additional use cases. I would suggest that investigation into this feature cover both UVC-based and non-UVC-based cameras, which would improve our ability to support industrial machine cameras. Example:
```python
from typing import Optional

from vimba import Camera, Vimba, VimbaCameraError

def get_camera(camera_id: Optional[str]) -> Camera:
    with Vimba.get_instance() as vimba:
        if camera_id:
            try:
                return vimba.get_camera_by_id(camera_id)
            except VimbaCameraError:
                raise RuntimeError(f"Failed to access camera '{camera_id}'")
        # No id given: fall back to the first camera Vimba detects
        return vimba.get_all_cameras()[0]
```
This issue is for a: (mark with an `x`)

Minimal steps to reproduce
We are trying to use the Factory-AI-Vision sample; however, RTSP streams have noticeable lag (1-2 seconds in our tests). For our scenario we need as close to real-time as possible. Would it be possible to specify a camera that is directly connected to the host (an NVIDIA Jetson Nano), say with `host:0` or something, instead of an RTSP stream URL when adding a camera?

Expected/desired behavior
I've connected my camera to the Jetson in graphical mode and run the following code, getting the real-time performance we are looking for, so I think this is technologically feasible.
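To make the `host:0` idea above concrete, here is a hypothetical helper that maps such a spec to something `cv2.VideoCapture` accepts: an integer index for a local V4L2 device, or a URL string for a network stream. The `host:` prefix and the function name are made up for illustration, not an existing API:

```python
def parse_camera_source(spec: str):
    """Hypothetical: turn a camera spec into a cv2.VideoCapture source.

    "host:<n>" -> device index n (e.g. /dev/video<n> on Linux);
    anything else -> treated as a stream URL (e.g. rtsp://...).
    """
    prefix = "host:"
    if spec.startswith(prefix):
        return int(spec[len(prefix):])
    return spec

# cv2.VideoCapture accepts both forms:
#   cv2.VideoCapture(parse_camera_source("host:0"))      # local camera
#   cv2.VideoCapture(parse_camera_source("rtsp://..."))  # network stream
```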
OS and Version?
NVIDIA Jetson L4T Linux (JetPack 4.4) running IoT Edge
Mention any other details that might be useful
A hint at the "HostConfig" settings that mimic `docker run --device=/dev/video0` in the deployment manifest, so I can expose the camera to the InferenceModule (I assume it's the InferenceModule) on IoT Edge, would be helpful as well.
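For what it's worth, device pass-through for an IoT Edge module is typically configured via the module's container create options in the deployment manifest, which follow the Docker Engine `HostConfig` schema. A sketch, assuming the module should read from `/dev/video0` (the device path is from this issue; the field names come from the Docker API):

```json
{
  "HostConfig": {
    "Devices": [
      {
        "PathOnHost": "/dev/video0",
        "PathInContainer": "/dev/video0",
        "CgroupPermissions": "rwm"
      }
    ]
  }
}
```

This JSON would go in the `createOptions` of the target module in the deployment manifest; whether the InferenceModule is the right module to receive the device is exactly the open question above.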