Ritchizh / RGBD-Integration-2020

Applying Open3D functions to integrate experimentally measured color and depth frames into a 3D object.

Some reconstructed results #5

Open · ethyd4 opened this issue 1 year ago

ethyd4 commented 1 year ago

Hello ma'am, while scanning small objects I am getting some incorrect reconstruction results. Some of the reconstructed results are attached (Screenshot from 2023-08-21 15-42-35, Screenshot from 2023-08-28 12-12-14, Screenshot from 2023-08-28 10-21-51, A_hat1_color_frame0 (2), A_hat1_color_frame0 (1), A_hat1_color_frame0).

ethyd4 commented 1 year ago

If I scan medium-sized objects like a human, a chair, or tins, I get decent results for those kinds of objects. But when I scan small objects, I don't get proper results. Can you give some suggestions to improve the results?

Ritchizh commented 1 year ago

Hi!
1) The RealSense depth camera has 1-6 cm depth resolution (depending on distance), so probably the camera's accuracy is not enough for finer small models?
2) Try filtering out clutter more accurately - remove everything which is not the object from the point cloud.
3) Try a stricter convergence criterion, by reducing the RANSACConvergenceCriteria parameters.
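
As a rough sketch of point 3 (the exact namespace and parameter names depend on your Open3D version; `source_down`, `target_down` and the FPFH features are assumed to come from the usual downsample-and-feature steps of global registration, and the numbers are only illustrative):

```python
import open3d as o3d

# Open3D >= 0.12 namespace; older releases use o3d.registration and a
# (max_iteration, max_validation) signature instead of confidence.
criteria = o3d.pipelines.registration.RANSACConvergenceCriteria(
    max_iteration=1000000,   # iteration budget of the RANSAC search
    confidence=0.9999)       # stop only once this confidence is reached

# source_down/target_down and source_fpfh/target_fpfh are assumed to exist
# from voxel downsampling and FPFH feature computation.
result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    source_down, target_down, source_fpfh, target_fpfh,
    True,                    # mutual_filter
    0.03,                    # max correspondence distance, tune to your voxel size
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
    4,                       # ransac_n
    [],                      # optional correspondence checkers
    criteria)
```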

ethyd4 commented 1 year ago

Hello ma'am, in the point cloud data for each frame we get a certain amount of incorrect data. When I use the statistical and radius outlier removal methods to remove it, I lose the good data as well.

For the Intel RealSense D415 the capture range is 0.3 m to 10 m, and it should capture all the details within that distance, so I don't think there is an issue with the camera.

Can you suggest a better way to remove the incorrect data from the point cloud?
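
For reference, a minimal sketch of how those two filters are typically tuned in Open3D ("frame.ply" is a placeholder file name, and the thresholds are illustrative, not values from this repo):

```python
import open3d as o3d

# "frame.ply" is a placeholder for one captured frame's point cloud.
pcd = o3d.io.read_point_cloud("frame.ply")

# Illustrative values only: a larger std_ratio and a smaller nb_points make the
# filters less aggressive, so fewer good points are thrown away with the noise.
pcd_stat, _ = pcd.remove_statistical_outlier(nb_neighbors=30, std_ratio=2.5)
pcd_clean, _ = pcd_stat.remove_radius_outlier(nb_points=8, radius=0.02)
```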

Ritchizh commented 1 year ago

Statistical and radius outlier removal methods are effective. You can also use a bounding box to cut your object out of the point cloud, as I have shown in the example: bounds = [[-np.inf, np.inf], [-np.inf, 0.15], [0, 0.56]] # set the bounds
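
A minimal sketch of that kind of per-axis crop (the exact helper in this repo may differ; "frame.ply" is a placeholder, and `select_by_index` is the Open3D >= 0.10 name, called `select_down_sample` in older releases):

```python
import numpy as np
import open3d as o3d

# "frame.ply" is a placeholder; bounds are the x/y/z limits from the comment above.
pcd = o3d.io.read_point_cloud("frame.ply")
bounds = [[-np.inf, np.inf], [-np.inf, 0.15], [0, 0.56]]  # set the bounds

# Keep only the points whose coordinates fall inside every per-axis interval.
pts = np.asarray(pcd.points)
mask = np.all([(pts[:, i] >= lo) & (pts[:, i] <= hi)
               for i, (lo, hi) in enumerate(bounds)], axis=0)
pcd_cropped = pcd.select_by_index(np.where(mask)[0])
```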

Your distance range should be OK for the D415; however, what I meant is that the distance error is about +/- 2%, so you won't see all the fine details in the shape.

Unfortunately, I don't know why your reconstructed shapes look so distorted. Are only the final meshes distorted, or the RANSAC-merged point clouds too?

ethyd4 commented 1 year ago

We are getting good results for objects bigger than 1 ft (about 30 cm); this might be because of the sensor resolution. We are thinking of using LiDAR instead of a depth camera and plan to go with the Intel RealSense L515, which may give us better results for small objects. The LiDAR directly gives point cloud data as well as RGB. Will the algorithm you designed work with the Intel RealSense L515 LiDAR camera, and what changes would we need to make for that? A request: could you try scanning small objects, or we can provide a dataset for the same. Please let us know where we are going wrong.