Closed NguyenThaiHoc1 closed 5 years ago
Question 1
You can capture an object as a point cloud. To capture all 360 degrees of the object though, you will have to do one of these techniques:
Use multiple cameras to capture point clouds from different viewpoints, then combine them into a single point cloud.
Move a single camera around the object. Or
Rotate the object through 360 degrees, for example with a turntable or by attaching the object to a rotating robot arm.
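As a rough illustration of the turntable technique, the sketch below (plain Python, not RealSense SDK code; the scan data and function names are made up for this example) rotates each scan back by its known turntable angle so that all scans share one object-centred frame, then concatenates them:

```python
import math

def rotate_y(points, angle_deg):
    """Rotate a list of (x, y, z) points around the vertical (y) axis."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x + s * z, y, -s * x + c * z) for (x, y, z) in points]

def merge_turntable_scans(scans):
    """Each scan is (angle_deg, points). Undo the turntable rotation so
    every scan lands in the same object-centred frame, then concatenate."""
    merged = []
    for angle_deg, points in scans:
        merged.extend(rotate_y(points, -angle_deg))
    return merged

# Two hypothetical scans of the same surface point, taken 90 degrees apart
# on the turntable. After merging, both map to roughly the same 3D location.
scans = [(0,  [(1.0, 0.0, 0.0)]),
         (90, [(0.0, 0.0, -1.0)])]
merged = merge_turntable_scans(scans)
```

In practice the points would come from the camera (for example, one exported .ply per turntable angle) and the angles from the turntable controller. Real scans also need a fine-alignment step such as ICP afterwards, because measured turntable angles are never perfectly accurate.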
Question 2
The 400 Series cameras support post-processing, which applies a range of filters to the depth data, such as edge-preserving smoothing.
A good place to start is Intel's post-processing white paper document.
If it is easier for you, it is possible to use Google Translate to convert PDF documents such as the white paper into your own language.
https://communities.intel.com/message/570204#570204
Intel has also published a Python tutorial for applying post-processing filters.
https://github.com/IntelRealSense/librealsense/blob/jupyter/notebooks/depth_filters.ipynb
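To give a very rough idea of what one of those filters does: the SDK's decimation filter reduces the resolution of the depth image, which also suppresses noise. The snippet below is a simplified pure-Python illustration of that idea (it is not the SDK's implementation): it takes the median of each 2x2 block of a made-up depth grid, skipping zero pixels, which here represent invalid depth.

```python
import statistics

def decimate(depth, factor=2):
    """Downsample a 2-D depth grid by taking the median of each
    factor x factor block, ignoring zeros (invalid depth readings)."""
    rows = len(depth) // factor
    cols = len(depth[0]) // factor
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            block = [depth[r * factor + i][c * factor + j]
                     for i in range(factor) for j in range(factor)]
            valid = [v for v in block if v > 0]
            row.append(statistics.median(valid) if valid else 0)
        out.append(row)
    return out

# A tiny 4x4 depth grid (values in millimetres, 0 = invalid):
depth = [
    [800, 802, 0,   0],
    [801, 803, 0,   0],
    [600, 0,   700, 702],
    [601, 603, 701, 0],
]
small = decimate(depth)  # 2x2 result, one median per block
```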
Question 3
Do you mean that you want to create a solid 3D 'mesh' model of an object? If that is what you mean then yes, you can convert a point cloud into a 3D object. This can be done using 3D model software such as MeshLab or Blender.
There is also a program called LIPScan that works with the 400 Series and can capture a model directly.
Hi @MartyG-RealSense. For question 1, could you give me some example code or software, and explain how to use it?
Question 2: What do you mean by filters (or post-processing filters)? Why do we need them?
Question 3: I tried saving a .ply (point cloud) file from the Intel RealSense Viewer application, but when I load this file in MeshLab it looks very bad and just displays some black areas.
Sorry, my technical knowledge is not good because I'm a newbie to 3D technology. If you have any documents for newcomers learning 3D technology, please send them to me in this issue. Thank you for your support.
This link provides an introduction to stereo vision technology on Intel's RealSense blog, written by RealSense SDK Manager Sergey 'Dorodnic' Dorodnicov.
https://realsense.intel.com/stereo-depth-vision-basics/
Question 1
There is very little information available about using LIPScan with the 400 Series other than the YouTube video, as the LIPScan website mostly focuses on using LIPScan software with LIPScan's own depth camera.
My assumption from the YouTube video is that the LIPScan software may work with the 400 Series depth camera too, as the LIPScan software shown on their website looks the same as the one being used with the D415 in the YouTube video. The software requires Windows 10.
On the page linked below, scroll down to find the software and user manual links.
https://www.lips-hci.com/product?product_id=16
Question 2
Filters are software functions that apply changes to camera data to alter it in some way that improves the results, such as reducing holes in the image.
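For example, hole filling replaces invalid (zero) depth pixels with values borrowed from nearby valid pixels. Here is a much-simplified sketch of a "fill from the left" strategy, in plain Python on a made-up depth grid; it is not the SDK's actual filter code:

```python
def fill_holes(depth):
    """Fill zero (invalid) pixels with the last valid value to their left,
    row by row -- a simplified 'fill from left' strategy."""
    out = []
    for row in depth:
        filled, last = [], 0
        for v in row:
            if v > 0:
                last = v
            filled.append(last if v == 0 else v)
        out.append(filled)
    return out

# Illustrative depth grid in millimetres; 0 marks a hole:
depth = [[500, 0,   0, 510],
         [0,   480, 0, 0]]
filled = fill_holes(depth)
```

Note that a hole at the very start of a row stays unfilled here; the real filters use more sophisticated neighbourhood strategies.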
Question 3
Would it be possible please to post a picture of your MeshLab import results here on this discussion so we can see them?
Hi @MartyG-RealSense, I have some pictures you can see.
The first picture shows the .ply I saved from the Intel RealSense Viewer app.
The second picture shows the .ply file loaded in MeshLab; it looks terrible.
There is a guide to importing a .ply created in the RealSense Viewer into MeshLab. Looking at its images, the .ply does seem to look worse after import than it did originally in the Viewer, but the final 3D model turns out okay once the settings in the guide have been applied.
https://www.andreasjakl.com/capturing-3d-point-cloud-intel-realsense-converting-mesh-meshlab/
Hi @MartyG-RealSense
I really appreciate your help with my project, but I have realised that I need RGB-D scenes.
How can I create RGB-D scenes like the Washington RGB-D Dataset (link below)?
Washington dataset: https://rgbd-dataset.cs.washington.edu/
I'm working on my project, but I have some issues to think about with the camera hardware.
In the network structure I am practising with, I have a problem with the depth dataset for the depth CNN (this is the main reason I bought the Intel D400 series). I hope I can create a lot of datasets to use with this model.
I tried training on depth images only (I will post the depth images I used later). There are 600 images captured from 2 different angles, but the results I got were very bad.
Sorry for the technical questions. I think the network structure is okay, but I am very weak at creating depth datasets, because this is the first time I have prepared a dataset myself.
I also have a question: how is the Intel D400 series different from the Kinect?
Thank you for your support all this time, NguyenThaiHoc1
You are very welcome. I'm glad to be able to be of help.
The 400 Series cameras use Stereoscopic vision, with a left and right imager. Kinect 1 was based on Structured Light, which is similar to the RealSense SR300 camera model. Kinect 2 used a different system called Time of Flight.
Dorodnic, the RealSense SDK Manager, published a blog article explaining how stereo depth vision works.
https://realsense.intel.com/stereo-depth-vision-basics
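The core idea of stereo depth is that an object's horizontal shift (disparity) between the left and right images shrinks with distance, so depth can be computed as focal length x baseline / disparity. A tiny sketch with purely illustrative numbers (these are not real D435 calibration values):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo relation: depth = f * B / d.
    focal_px: focal length in pixels; baseline_m: distance between the
    two imagers in metres; disparity_px: pixel shift between the views."""
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 640 px focal length, 5 cm baseline, 16 px disparity.
depth_m = depth_from_disparity(focal_px=640.0, baseline_m=0.05, disparity_px=16.0)
```

The same relation also explains why small disparities (far objects) are noisier: a one-pixel disparity error changes the computed depth much more at long range than at short range.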
I do not have any knowledge of using DNN though, so hopefully one of the Intel guys on this forum can give you the advice you need about that. Good luck!
[Realsense Customer Engineering Team Comment] Hi NguyenThaiHoc1, If there is no more question about this topic, we'd like to close this ticket. Thanks!
Dear RealSense technical support, I have some questions about developing with the Intel D435 depth sensor.
First question: can I capture the whole shape of an object, and how can I do it (with a point cloud, or something else)?
Second question: how can I display the edges and shape of an object, or the surface of an object?
Final question: can I generate a 3D object from the images I capture?
I'm a Python developer (example code would be even better).
Sorry, my English is not good. Thank you for your support. NguyenThaiHoc1