Closed garceling closed 2 years ago
Hi @garceling The RealSense SDK has a TensorFlow compatibility wrapper for Python and examples for it that cover object detection and querying the XYZ coordinates of detected objects.
https://github.com/IntelRealSense/librealsense/tree/master/wrappers/tensorflow
Hi, I was following the link you sent above. I am a beginner, so I am confused about which example shows the object detection. I need it to be able to detect the object and also label it as well.
Object detection is introduced in Part 1 of the examples.
I do not have information on label creation with the SDK's TensorFlow wrapper though. If labels are a vital requirement, there is alternatively a guide in the link below for using a RealSense D435 camera with TensorFlow to perform custom object detection with a point cloud and then label it.
https://github.com/jediofgever/PointNet_Custom_Object_Detection
The label creation guidance for this tutorial is in the article below:
https://github.com/jediofgever/PointNet_Custom_Object_Detection/blob/master/PREPARE_DATA.md
Hi, I finally got the object detection and distance measurement to work for the real depth camera. However, it can only measure distance accurately and recognize objects up to 30 m. Is this an issue with the program / code I am using, or can the Intel D435 only have a range of about 30 m?
The D435 camera model's official depth sensing maximum range is 10 meters, though accuracy starts to noticeably drift beyond 3 meters from the camera. This is less of a problem when analysis is based on RGB images, which do not have that depth sensing limitation.
The D455 model has twice the accuracy over distance of the D435, and so has the same accuracy at 6 meters from the camera that the D435 / D435i has at 3 meters. The D455 has a minimum depth sensing distance of 0.4 meters though, compared to the 0.1 m of the D435/i, so it is less suited to close-range depth sensing.
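As a rough illustration of why depth accuracy drifts with distance, the error of a stereo depth camera grows approximately with the square of the distance to the object. The sketch below is an illustrative model only; the baseline, focal length and subpixel values are assumed ballpark figures for a D435-class camera, not official Intel specifications:

```python
def depth_rms_error(z_m, baseline_m=0.050, focal_px=640.0, subpixel=0.08):
    """Approximate RMS depth error (meters) at distance z_m for a stereo camera.

    baseline_m, focal_px and subpixel are assumed typical values, not
    official figures - check your own unit's calibration for real numbers.
    Error grows with the square of distance: doubling z quadruples the error.
    """
    return (z_m ** 2) * subpixel / (focal_px * baseline_m)

if __name__ == "__main__":
    for z in (1.0, 3.0, 6.0, 10.0):
        print(f"{z:4.1f} m -> ~{depth_rms_error(z) * 100:.1f} cm RMS error")
```

This quadratic growth is why readings beyond a few meters become progressively less reliable even though the camera still returns values.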
Can PyTorch be used for target detection?
Hello @Larry0607 The tutorial in the link below about target detection with PyTorch and YOLOv3 may be helpful to you. Links to further guides can be found at the bottom of the article.
Hello @Larry0607 Do you require further assistance with this case, please? Thanks!
Sorry, I have a question. I am trying to get the Intel camera to detect an object and then measure the angle of the object from the camera. I have attached a link to the image of what I want to achieve. Do you know if this is possible with the Intel camera?
Hi @garceling The resources in the link below will hopefully be helpful for answering your question.
Hi @garceling Do you require further assistance with this case, please? Thanks!
Hi, I was playing around with the Intel camera, and one thing I noticed was that the camera provides an accurate measurement of the distance of objects located at the center. However, for objects located towards the edge of the camera's field of view, the distance measurement becomes inaccurate. Does this error have to do with my code, or is it related to the Intel camera?
If you are using the get_distance instruction in Python then the effect that you describe is a known phenomenon. More information about this is in the link below.
https://github.com/IntelRealSense/librealsense/issues/9134#issuecomment-852000394
Hi, thank you. I was also wondering if it is possible to implement object tracking on a live video with the Intel RealSense camera.
The article from the official RealSense blog in the link below about different kinds of tracking is a useful introduction to the subject.
https://www.intelrealsense.com/types-of-tracking-overview/
After that article, the one in the link below about object tracking that provides example Python code would be worth looking at.
https://learnopencv.com/object-tracking-using-opencv-cpp-python/
Another approach would be to use a neural network that is trained to identify particular objects on an RGB video image and keep track of them, like the YOLOv4 and TensorFlow example for Python in the link below.
https://github.com/LeonLok/Multi-Camera-Live-Object-Tracking
Once you decide upon a preferred approach to object tracking, I will be happy to assist with further questions about that subject if you have them. Good luck!
Hi @garceling Do you require further assistance with this case, please? Thanks!
Yes, sorry to bother you again, but in response to the inaccuracy of the get_distance function you linked me to this solution: https://github.com/IntelRealSense/librealsense/issues/7395#issuecomment-698016942 So I'm still confused. To fix the inaccuracy, do I just add the rs.align code that they have? Sorry, I am super new at this.
It's no trouble at all. The RealSense user in that link was suggesting to use the script that is beneath the line in that comment that says "Please see below my final stream config code that makes me not receive this error".
If you need an example of Python depth to color alignment code that you can be confident is correct though, you could try the RealSense SDK's own official align-depth2color.py example script.
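To make the alignment step concrete, here is a minimal sketch of how rs.align fits into a capture loop, in the spirit of the SDK's align-depth2color.py example. It assumes pyrealsense2 is installed and a camera (or bag file) is available; the helper and function names are my own, not SDK names:

```python
def center_of_box(x1, y1, x2, y2):
    """Pixel at the middle of a region - the point we sample depth at."""
    return (x1 + x2) // 2, (y1 + y2) // 2

def measure_aligned_distance(config=None):
    """Sketch: align depth to color, then read a distance at a chosen pixel.

    Assumes pyrealsense2 is installed and a live camera or bag-file
    config is available; mirrors the SDK's align-depth2color.py example.
    """
    import pyrealsense2 as rs  # imported here so the helper above works without the SDK
    pipeline = rs.pipeline()
    pipeline.start(config or rs.config())
    align = rs.align(rs.stream.color)  # map depth pixels onto the color image
    try:
        frames = pipeline.wait_for_frames()
        aligned = align.process(frames)
        depth = aligned.get_depth_frame()
        color = aligned.get_color_frame()
        if depth and color:
            # Sample at the image centre; in practice you would use the
            # centre of your detector's bounding box instead.
            u, v = center_of_box(0, 0, depth.get_width(), depth.get_height())
            print("distance at image centre:", depth.get_distance(u, v), "m")
    finally:
        pipeline.stop()
```

After alignment, the depth pixel you sample corresponds to the same scene point as the color pixel, which is what removes the edge-of-frame inaccuracy discussed above.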
Okay, thank you. I was wondering if it is possible to feed a pre-recorded video into the Intel camera. I want to be able to measure the distance of objects not just in real time, but also through a pre-recorded video.
You can record camera data into a bag file, which is like a video recording of camera data, and the RealSense SDK treats it as though it is a live camera feed.
Oh okay. So is there a way to convert an mp4 file into a bag file then?
You could read an mp4 video file into OpenCV, which the RealSense SDK is fully compatible with, and access the frames that way.
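A minimal sketch of that approach, assuming OpenCV (cv2) is installed; the function names here are my own, not SDK or OpenCV names. One caveat worth stating: an ordinary mp4 carries no depth channel, so get_distance-style measurements are only possible with bag files, not mp4 files:

```python
def frame_timestamp(index, fps):
    """Seconds into the video at which frame number `index` occurs."""
    return index / float(fps)

def iterate_mp4(path):
    """Sketch: read an mp4 with OpenCV and yield (timestamp, frame) pairs,
    which you could then feed to your object detector."""
    import cv2  # imported here so the pure helper above works without OpenCV
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata is missing
    index = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:  # end of file or read error
                break
            yield frame_timestamp(index, fps), frame
            index += 1
    finally:
        cap.release()
```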
Hi, I just recorded a bag file. How do I edit my code so that, instead of taking a live stream, it will work with a bag file instead?
Programs that use live streaming (including the SDK sample programs) can be modified to use a bag file as their data source instead of a live camera by adding the cfg.enable_device_from_file instruction to the script. An example of a Python version of this adaptation is below:

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# Read frames from the recorded bag instead of a live camera
rs.config.enable_device_from_file(config, "test.bag")
pipeline.start(config)

You could alternatively store the bag's path in a variable:

filename = "test.bag"
rs.config.enable_device_from_file(config, filename)
Hi @garceling Do you require further assistance with this case, please? Thanks!
Hi, I just have one more question. I am trying to measure the angle of an object from the intel camera. I found this code online:
https://github.com/IntelRealSense/librealsense/issues/5553#issuecomment-569464613
I was just wondering which HFOV and VFOV for the Intel camera I should use. Do I use the depth FOV or the RGB sensor FOV?
The comment makes reference to the RGB sensor. The D435 uses the OmniVision OV2740 sensor for RGB. Its HFOV and VFOV are listed in the specification table of Intel's D435 datasheet.
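As a worked example of how the FOV gets used: treating the camera as a simple pinhole, a pixel's horizontal angle from the optical axis can be estimated from the HFOV and the image width. The numbers below (a 69 degree HFOV at 1280 pixels wide) are illustrative assumptions, not measured values for any particular unit; check your own camera's datasheet or calibration:

```python
import math

def pixel_to_angle_deg(u, image_width, hfov_deg):
    """Approximate horizontal angle (degrees) of pixel column `u` from the
    optical axis, for an ideal pinhole camera with the given HFOV and width."""
    # Focal length in pixels implied by the HFOV
    focal_px = (image_width / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    return math.degrees(math.atan((u - image_width / 2.0) / focal_px))

if __name__ == "__main__":
    # A pixel at the very centre is at 0 degrees; the right-hand edge
    # comes out at half the HFOV.
    print(pixel_to_angle_deg(640, 1280, 69.0))   # centre -> 0.0
    print(pixel_to_angle_deg(1280, 1280, 69.0))  # edge -> ~34.5
```

The atan form (rather than a linear scaling of the HFOV across the image) matters because pixel position maps to angle non-linearly toward the edges of the frame.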
Hi @garceling Do you require further assistance with this case, please? Thanks!
I ran the first program, object detection.py, and got the following error. I checked my TensorFlow for errors. I would like to ask your opinion!
Hi @666tua https://github.com/tensorflow/tensorflow/issues/57375 suggests running the command below before running your code in order to resolve the error in PredictCost().
!apt install --allow-change-held-packages libcudnn8=8.1.0.77-1+cuda11.2
Issue Description
Hi, I have spent weeks trying to do object detection and distance measurement with the Intel camera. Essentially, the camera will detect and label an object and measure its distance from the camera. The program must use TensorFlow Lite for object detection. Has anyone done a similar project or found anything online to help me? I am stuck and desperately need help. I also cannot find any code online for real-time object detection with the Intel camera and TensorFlow. Does anyone have any ideas on how to get that to work? Thank you, and I appreciate any help.