Hi @neilyoung Thanks very much for your depth camera questions.
Whilst there were a small number of past reports of issues with multi-camera setups on Jetson boards, there have not been any further reports of that nature for months at the time of writing.
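For reference, the usual pattern for running two cameras from one application is to bind each pipeline to a device serial number. A minimal pyrealsense2 sketch (the stream settings are just example values):

```python
import pyrealsense2 as rs

# Enumerate all connected RealSense devices by serial number.
ctx = rs.context()
serials = [dev.get_info(rs.camera_info.serial_number) for dev in ctx.query_devices()]

# Start one pipeline per camera, each bound to a specific serial.
pipelines = []
for serial in serials:
    cfg = rs.config()
    cfg.enable_device(serial)
    cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)  # example settings
    pipe = rs.pipeline(ctx)
    pipe.start(cfg)
    pipelines.append(pipe)

# ... read frames with wait_for_frames() on each pipeline ...

for pipe in pipelines:
    pipe.stop()
```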
The DeepStream SDK is a subject that does not often come up in RealSense forum discussions, so there is little previous information about its use with RealSense to refer to. There was a recent case involving it on the RealSense ROS GitHub, though.
https://github.com/IntelRealSense/realsense-ros/issues/1620
Retrieving pixel depth information does not require an IMU.
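For illustration, per-pixel depth can be read from the depth stream alone. A minimal pyrealsense2 sketch (the pixel coordinates and stream settings are just example values):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)  # example settings
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Distance in meters at an example pixel (image center).
    print("Distance at (320, 240): %.3f m" % depth.get_distance(320, 240))
finally:
    pipeline.stop()
```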
For the D435 / D435i cameras, the 3 meter point and beyond in their 10 meter depth sensing range is where RMS error (error that increases linearly over distance) starts to become noticeable. The D455 depth camera model has 2x the depth accuracy over distance of the D435 models, though: at 6 meters range the D455 has the same accuracy that the D435 models have at 3 meters. Please note, though, that the D455 has a larger minimum depth-sensing distance of 0.4 meters.
Hi @MartyG-RealSense,
Nice to meet you again :) Still active here... And as usual prompt and informative.
Thanks for the helpful information regarding 1, 3 and 4. Very much appreciated.
Regarding 2: Since I already have a good deal of experience with DeepStream and like it very much (it allows 30 fps inference per camera at 640x480 for three cameras on a Jetson Nano), I know that it is only necessary to have the camera accessible via, e.g., a V4L2-like GStreamer source caps definition. Ideally the depth stream would be provided the same way, simultaneously with the video, maybe on another device. Any known publications here?
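To illustrate the kind of source definition I mean, here is a minimal GStreamer sketch in Python (the device path /dev/video0 is just an assumption; RealSense nodes may enumerate differently):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# v4l2src with explicit caps, the way DeepStream-style pipelines ingest cameras.
pipeline = Gst.parse_launch(
    "v4l2src device=/dev/video0 ! "
    "video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! "
    "videoconvert ! fakesink"
)
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
# Block until an error or end-of-stream message arrives.
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)
```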
You are very welcome @neilyoung
In the referenced RealSense ROS case, the user believed that the problem was being caused by RTABMAP. So if you will be using ROS but your depth-camera AI project does not make use of RTABMAP, then your DeepStream-equipped project may not be affected by the /dev/video problem described there.
If you are making use of V4L2 then building the librealsense SDK with the V4L2 backend, as described in the link below, may make the streams accessible to non-RealSense Linux tools.
https://github.com/IntelRealSense/librealsense/issues/6841#issuecomment-660859774
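Once the streams are exposed as ordinary /dev/video nodes, they should be readable with any V4L2-capable tool. A hedged OpenCV sketch (the node index 2 is an assumption and varies per setup, and the 16-bit depth stream may need raw-format handling rather than the default conversion):

```python
import cv2

# Open the camera node directly through the V4L2 backend.
cap = cv2.VideoCapture(2, cv2.CAP_V4L2)  # /dev/video2 -- index is an assumption
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 30)

ok, frame = cap.read()
if ok:
    print("Frame received:", frame.shape, frame.dtype)
else:
    print("No frame -- check the /dev/video index and stream format")
cap.release()
```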
Thanks, will do. No, I'm not using ROS at all.
Hi @neilyoung Do you require further assistance with this case, please? Thanks!
No thanks. Can be closed.
Hi,
Up to now I just have experience with the T265. For an AI project I would now need one of the Intel depth cams. More specifically, two of them are about to be used on a Jetson Nano. Since the inference is done by the Nano, I just need the distance information for one or more points of a detected object.
I have these questions and kindly ask for suggestions:
1) From earlier postings over the years I have the impression that it can be nightmarish to try to use more than one Intel camera on the same device. Is this still the case?
2) The Nvidia DeepStream SDK depends on GStreamer pipelines, so at least the raw video should be available as a source with GStreamer support. What is the status here? I have read about some third parties which provide GStreamer support. Any experiences?
3) I don't intend to do positioning, navigation or any point-cloud SLAM thing. I just need the depth information per pixel. Do I need IMU support for that? I think not, but surprises are possible.
4) Lastly: what is a reliable depth detection range? I have read that Intel commits to 0.3 to 3 m. Is more available? Is that realistic?
TIA