Ritchizh / RGBD-Integration-2020

Applying Open3D functions to integrate experimentally measured color and depth frames into a 3D object.

Data Capturing #4

Closed ethyd03 closed 1 year ago

ethyd03 commented 1 year ago

Hello ma'am, I saw your RGBD integration repository, and the dataset you used works perfectly. I took another dataset and tried to reconstruct it, but it does not give proper output, whereas the online datasets in your repository give proper results. I recorded two rosbag files using ROS, containing both the color and the depth data; then I extracted the RGB and depth data from those files and performed the 3D reconstruction, but it did not give proper results. Can I know why this is happening? Can you provide details on how you captured your data?

Ritchizh commented 1 year ago

Hi! Please take a look at this issue: https://github.com/Ritchizh/RGBD-Integration-2020/issues/2 - maybe some tips from there would help you. Can you show how your depth + color frames look or maybe share the data files?

ethyd03 commented 1 year ago

Yesterday I sent all the details about how I captured the data.

ethyd4 commented 1 year ago

Hello ma'am, I have attached some zip files containing the RGB and depth data. When you go through those files you can see what the data looks like.

I captured the data in two ways: one using ROS, and the other using a Python script based on the logic in the Intel RealSense repository, which captures the RGB and depth images directly from the Intel RealSense D415 camera. With the directly captured data I did not get proper output: in the result I can only see the front side of the object and cannot visualize the whole object. I think this happened because of the background. I attached two zip files containing the RGB and depth data and the output I got. In the bottlei.zip folder the background behind the object was not removed from the depth data, while in the other zip file, bottle removed background.zip, I removed as much of the background from the depth data as I could. I performed the 3D reconstruction on several datasets that I captured myself; you can see the different outputs in those zip files.

Now, coming to the other (ROS) way of capturing data. When capturing with ROS I am able to visualize the data from the bag file using rqt_image_view. I started capturing data with ROS after seeing your script background_subtraction.vs2.py in the RGBD-Integration-2020 repository, because you used rosbag files in that script. So I generated two rosbag files with my Intel RealSense D415: one bag file contains both the color and depth data (the subject bag file), and the other contains only color data. When I use these bag files in your script background_subtraction.vs2.py, I get Fileversion topic errors. I also extracted the RGB and depth data from the rosbag files and performed the 3D reconstruction, but it did not give any output. The zip files below contain the RGB and depth data extracted from the rosbag files. bottlei.zip bottleremoved background.zip

Ritchizh commented 1 year ago

I've got your data. 1) The depth frames are of low resolution; it would be better if you took a larger object. 2) Can you provide the intrinsic parameters for your D415 camera (`stream.get_intrinsics()`)? I used a D435, so they must be different.
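For reference, here is a minimal sketch (assuming pyrealsense2 is installed and the camera is connected) of reading the depth-stream intrinsics; the 640x480 @ 30 fps stream settings are just an example:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
profile = pipeline.start(config)

# Query the factory intrinsics of the depth stream.
depth_profile = profile.get_stream(rs.stream.depth).as_video_stream_profile()
intr = depth_profile.get_intrinsics()
print(intr.fx, intr.fy, intr.ppx, intr.ppy)

pipeline.stop()
```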

ethyd4 commented 1 year ago

The camera intrinsic parameters are: fx = 609.422, fy = 608.482, cx = 320.848, cy = 239.221.

ethyd4 commented 1 year ago

Did you use the background_subtraction.vs2.py script to get better depth data by removing the background?

Ritchizh commented 1 year ago

Yes, I used the script to isolate the object by removing the background. But that may not be necessary: if you can remove the background with distance truncation, that is even better.

Did you use these intrinsics when you tried to run the code? Are your color and depth streams aligned before you save the frames?
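For reference, a minimal sketch of plugging those intrinsics into Open3D (the 640x480 resolution is an assumption, inferred from cx ≈ 320 and cy ≈ 239):

```python
import open3d as o3d

# Build an Open3D camera model from the posted D415 values.
intrinsics = o3d.camera.PinholeCameraIntrinsic(
    width=640, height=480,
    fx=609.422, fy=608.482,
    cx=320.848, cy=239.221)
```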

ethyd4 commented 1 year ago

In the bottle removed background zip file I removed the background from the depth images, but it still did not work properly.

ethyd4 commented 1 year ago

Yes, I used the same intrinsic parameters. I don't think they are aligned. Could you suggest a way to align them?

Ritchizh commented 1 year ago

https://github.com/IntelRealSense/librealsense/wiki/Projection-in-RealSense-SDK-2.0#frame-alignment
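In short, a hedged sketch (assuming pyrealsense2) of aligning depth onto the color viewpoint before saving frames, as described at the link above:

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

# Map every depth frame into the color camera's viewpoint.
align = rs.align(rs.stream.color)

frames = pipeline.wait_for_frames()
aligned = align.process(frames)
depth = np.asanyarray(aligned.get_depth_frame().get_data())
color = np.asanyarray(aligned.get_color_frame().get_data())

pipeline.stop()
```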

Ritchizh commented 1 year ago

Dear ethyd4, please run this code line by line and make sure you understand all concepts: https://github.com/Ritchizh/RGBD-Integration-2020/blob/master/issue_4__Custom_data_ICP.py

Ritchizh commented 1 year ago

Expected result: (image attachment)

ethyd4 commented 1 year ago

Hello ma'am, I used the Python statements you provided in the script to remove the background from the point cloud data. After removing the background I get better ICP registration results between two consecutive point clouds. But when it comes to the 3D reconstruction using TSDF, the output is not good.

(screenshots attached: output, output1, screen1, screen2)

Can you suggest a better way to reconstruct the objects?

ethyd4 commented 1 year ago

https://drive.google.com/file/d/1D1aJmKZurLNAvKPqVmft6MspKUF9SAhK/view?usp=sharing — the Python script I used for the 3D reconstruction is at the link above.

Ritchizh commented 1 year ago

Hi! Your code works perfectly fine for the given data :) What can be done next:

  1. Align the depth and color streams (!). It is obvious from looking at your point clouds that the texture is misplaced.

  2. Better take another object, one that is not symmetrical (!). Right now the reconstruction is based only on color features, because the geometry does not change as the bottle rotates.

  3. Add a path specifying where the trajectory log is saved, to be able to find it easily:

```python
#== Generate .log file from ICP transform: ==
traj_path = path_project + "test_segm.log"
write_trajectory(traj, traj_path)
camera_poses = read_trajectory(traj_path)
```

  4. Add depth truncation before the RGBD images are integrated: the background adds clutter to the final mesh (see the sketch after this list):

```python
#== TSDF volume integration: ==
trunc = 0.56
```
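A hedged sketch of what item 4 could look like in Open3D (names such as `color_files`, `depth_files`, `camera_poses`, and `intrinsics` are assumptions drawn from the surrounding discussion; the `o3d.pipelines.integration` namespace is for recent Open3D versions, older ones use `o3d.integration`):

```python
import numpy as np
import open3d as o3d

trunc = 0.56  # drop depth beyond 0.56 m so the background never enters the volume

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=4.0 / 512.0,
    sdf_trunc=0.04,
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

for i, cam in enumerate(camera_poses):
    color = o3d.io.read_image(color_files[i])
    depth = o3d.io.read_image(depth_files[i])
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_trunc=trunc, convert_rgb_to_intensity=False)
    # Integrate each frame at its ICP-estimated camera pose.
    volume.integrate(rgbd, intrinsics, np.linalg.inv(cam.pose))

mesh = volume.extract_triangle_mesh()
mesh.compute_vertex_normals()
```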

ethyd4 commented 1 year ago

Can I know how you find the bounding values for different objects to remove the background? Like the ones below:

```python
bounds = [[-np.inf, np.inf], [-np.inf, 0.15], [0, 0.56]]  # set the bounds
bounding_box_points = list(itertools.product(*bounds))    # create limit points
bounding_box = o3d.geometry.AxisAlignedBoundingBox.create_from_points(
    o3d.utility.Vector3dVector(bounding_box_points))      # create bounding box object
```

Ritchizh commented 1 year ago

The bounding values are chosen individually in each case. If the intrinsics are correct, these values have a clear physical meaning: points farther than 0.56 meters from the camera are removed. When you set up your experiment you can see how far away the object and the background are, and then you just try different values until the result is satisfactory. Of course, there are methods to automate background removal (like RANSAC plane fitting on the point cloud), but they can be tricky.
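For illustration, a hedged sketch of both approaches in Open3D (`pcd` is assumed to be a loaded point cloud, and the thresholds are just examples):

```python
import open3d as o3d

# Manual approach: crop the cloud with the axis-aligned bounding box
# built from the bounds above.
pcd_object = pcd.crop(bounding_box)

# Automated approach: fit the dominant plane (e.g. the table) with RANSAC
# and keep everything that is not on it.
plane_model, inliers = pcd.segment_plane(distance_threshold=0.01,
                                         ransac_n=3,
                                         num_iterations=1000)
pcd_object = pcd.select_by_index(inliers, invert=True)
```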

What do you plan to do next?

ethyd4 commented 1 year ago

I'm changing the code to capture better RGB and depth images.

Now I'm getting somewhat better results after making changes to the capturing code. Thank you for your incredible support.

Ritchizh commented 1 year ago

I'm glad you improved your results :) Let's close this issue; if you meet a new problem, feel free to ask.