Azure-Samples / azure-intelligent-edge-patterns

Samples for Intelligent Edge Patterns

Access Physical cameras as well as RTSP #171

Open · seank-com opened this issue 4 years ago

seank-com commented 4 years ago

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

We are trying to use the Factory-AI-Vision sample; however, the RTSP streams have noticeable lag (1-2 seconds in our tests). For our scenario we need something as close to real-time as possible. Would it be possible to specify a camera that is directly connected to the host (an NVIDIA Jetson Nano), say with host:0 or similar, instead of an RTSP stream URL when adding a camera?

Expected/desired behavior

I've connected my camera to the Jetson in graphical mode and run the following code, which gives the real-time performance we are looking for, so I think the feature is technically feasible.

import cv2

# Open the first locally attached camera (/dev/video0)
cap = cv2.VideoCapture(0)

if not cap.isOpened():
    print("Error opening video stream or file")

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('Frame', frame)
    # Press 'q' to stop the preview
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
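
To illustrate the request, here is a minimal sketch of how a camera setting could accept either a local device index or an RTSP URL from a single string; the open_capture helper and the host:<n> convention are hypothetical, not something the sample supports today.

import cv2

def open_capture(source: str) -> cv2.VideoCapture:
    # Hypothetical helper: treat "host:<n>" (or a bare digit string) as a
    # locally attached camera index; anything else is passed through as a
    # stream URL, e.g. "rtsp://...".
    if source.startswith("host:"):
        return cv2.VideoCapture(int(source.split(":", 1)[1]))
    if source.isdigit():
        return cv2.VideoCapture(int(source))
    return cv2.VideoCapture(source)

cap = open_capture("host:0")  # equivalent to cv2.VideoCapture(0)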

OS and Version?

NVIDIA Jetson L4T Linux (JetPack 4.4) running IoT Edge

Mention any other details that might be useful

A hint at the "HostConfig" settings needed to mimic docker run --device=/dev/video0 in the deployment manifest, so I can expose the camera to the InferenceModule (I assume it's the InferenceModule) on IoT Edge, would be helpful as well.
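
For reference, a minimal sketch of what the module's createOptions could look like to mimic docker run --device=/dev/video0, assuming the target module is the InferenceModule and the camera appears as /dev/video0 on the host. It is shown as a Python dict for clarity; serialize it to a JSON string for the "createOptions" field of the deployment manifest.

import json

# Sketch of createOptions for the module's deployment manifest entry.
# Docker's HostConfig.Devices mapping is what "docker run --device" sets.
create_options = {
    "HostConfig": {
        "Devices": [
            {
                "PathOnHost": "/dev/video0",       # device node on the Jetson host
                "PathInContainer": "/dev/video0",  # path the module will see
                "CgroupPermissions": "rwm"         # read/write/mknod
            }
        ]
    }
}

print(json.dumps(create_options))  # paste the output into "createOptions"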

MSKeith commented 3 years ago

+1

Being able to add the device directly would allow for a lot of additional use cases. I would suggest that investigation into this feature also cover both UVC-based and non-UVC-based cameras, which would improve our ability to support industrial machine vision cameras. Example:

from typing import Optional
from vimba import Vimba, Camera, VimbaCameraError

def get_camera(camera_id: Optional[str]) -> Camera:
    with Vimba.get_instance() as vimba:
        if camera_id:
            try:
                return vimba.get_camera_by_id(camera_id)
            except VimbaCameraError:  # error handling here is illustrative
                raise RuntimeError(f"Failed to access camera '{camera_id}'")
        return vimba.get_all_cameras()[0]  # fall back to the first camera found