RobotLocomotion / LabelFusion

LabelFusion: A Pipeline for Generating Ground Truth Labels for Real RGBD Data of Cluttered Scenes
http://labelfusion.csail.mit.edu

Transformation from object-mesh .obj to object on _color_labels.png #56

Open mateoKutnjak opened 5 years ago

mateoKutnjak commented 5 years ago

I created results with LabelFusion (rgb, depth, label, color_label, pose.yml), and now I need the transformation from the .obj file of a YCB-like object to the object as it appears in the color_labels.png images in the images folder after running LabelFusion.

Do transforms.yaml and reconstruction_result.yaml have any impact on that?

I tried to use the quaternions from the poses.yaml file and convert them to a DCM (rotation matrix), but the resulting rotation is not correct.

I would be grateful if you could explain how to go from the pose content in pose.yaml to a 3D bounding box that can be plotted on the RGB image.

Thanks
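
For anyone with the same question, here is a minimal sketch of the projection being asked about. It assumes the pose is stored as a [w, x, y, z] quaternion plus a translation in metres in the camera frame; the intrinsics (fx, fy, cx, cy), the box extents and the pose values below are placeholders that would have to be replaced with the ones from your own log:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Placeholder values -- replace with the pose from your poses.yaml
# and the intrinsics of the camera used to record the log.
quat_wxyz = [0.707, 0.0, 0.0, 0.707]      # assumed [w, x, y, z] order
translation = np.array([0.1, 0.0, 0.6])   # object position in camera frame, metres
fx, fy, cx, cy = 540.0, 540.0, 320.0, 240.0  # placeholder pinhole intrinsics

# SciPy expects scalar-last quaternions, so reorder [w, x, y, z] -> [x, y, z, w].
w, x, y, z = quat_wxyz
R = Rotation.from_quat([x, y, z, w]).as_matrix()

# Axis-aligned bounding box of the mesh in the object frame (from the .obj extents).
min_c = np.array([-0.05, -0.05, -0.05])
max_c = np.array([0.05, 0.05, 0.05])
corners = np.array([[xc, yc, zc]
                    for xc in (min_c[0], max_c[0])
                    for yc in (min_c[1], max_c[1])
                    for zc in (min_c[2], max_c[2])])

# Transform the 8 corners into the camera frame and project with a pinhole model.
cam_pts = corners @ R.T + translation
u = fx * cam_pts[:, 0] / cam_pts[:, 2] + cx
v = fy * cam_pts[:, 1] / cam_pts[:, 2] + cy
pixels = np.stack([u, v], axis=1)  # draw these points / box edges on the RGB image
print(pixels)
```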

mateoKutnjak commented 5 years ago

The issue was that LabelFusion represents the rotation in the poses.yaml file relative to a frame that is rotated 90 degrees about the x axis: it reads the .obj file and rotates it by 90 degrees. The difference can be seen by comparing the axes of the .obj object with the axes shown when selecting points in run_alignment_tool.
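
If that is the case, one way to compensate (a sketch only, not verified against LabelFusion's source) is to compose the stored rotation with a fixed 90° rotation about x. Whether it should be +90° or -90°, and whether it is applied in the object frame or the camera frame, is easiest to check against a frame whose pose you already know:

```python
import numpy as np
from scipy.spatial.transform import Rotation

R_pose = Rotation.from_quat([0.0, 0.0, 0.0, 1.0]).as_matrix()   # placeholder pose rotation
R_x90 = Rotation.from_euler('x', 90, degrees=True).as_matrix()  # fixed +90 deg about x

# Candidate corrections -- try both orders/signs against a known frame:
R_candidate_a = R_pose @ R_x90    # offset applied in the object frame
R_candidate_b = R_x90.T @ R_pose  # offset applied in the camera frame (i.e. -90 deg)
```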

TrinhNC commented 4 years ago

I just checked, and for me it is rotated by -90° about the x axis. The quaternion is in the format [w, x, y, z], right?

mateoKutnjak commented 4 years ago

I cannot remember anymore.

Because the results were not satisfactory enough for my project, I switched to NDDS. With LabelFusion I had to record short video chunks: the longer the video got, the more unstable the point cloud became and the object model was not aligned precisely enough; the error was about 0.5 cm. Also, the images were not varied enough, and the network I was using overfitted regularly.

TrinhNC commented 4 years ago

I'm using NDDS too. My task is 6D pose estimation. The problem is that when I train on synthetic data only, my network is not robust enough on real data.

mateoKutnjak commented 4 years ago

I'm using a model made in Blender, and in NDDS I apply some translation and rotation to the object in front of the camera. Besides that, I randomly vary the object's color slightly and some of its material characteristics.

The network I am using is DenseFusion, and I can say it works really well with synthetic data. You should try it.

TrinhNC commented 4 years ago

Oh really, I'm using DenseFusion too. @mateoKutnjak how can I contact you?

mateoKutnjak commented 4 years ago

I have created a dummy repo and sent you an invite.

hpf9017 commented 4 years ago

@TrinhTUHH @mateoKutnjak I also want to use DenseFusion. I have already generated my own training data with LabelFusion and am now trying to train the DenseFusion model. Do you know how to use the LabelFusion data, i.e. the images created by run_create_data -d, in DenseFusion? Thank you very much!

mateoKutnjak commented 4 years ago

LabelFusion should provide you with a depth image, an RGB image and a mask label for every frame of the input data you gave it. You may have to convert the depth to a format suitable for DenseFusion.
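
For the depth part, a minimal sketch assuming the LabelFusion depth PNGs are 16-bit and the DenseFusion LineMOD-style loader expects 16-bit depth in millimetres; the scale factor DEPTH_SCALE below is an assumption you have to verify against your own recordings (it may already be 1.0):

```python
import cv2
import numpy as np

# Assumed scale factor between LabelFusion depth units and millimetres;
# verify against your own data before converting.
DEPTH_SCALE = 0.1

def convert_depth(src_path, dst_path):
    # Read the 16-bit depth PNG unchanged, rescale, and write it back as 16-bit.
    depth = cv2.imread(src_path, cv2.IMREAD_UNCHANGED).astype(np.float32)
    depth_mm = np.round(depth * DEPTH_SCALE).astype(np.uint16)
    cv2.imwrite(dst_path, depth_mm)

convert_depth("0000000001_depth.png", "0000000001_depth_mm.png")
```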

hpf9017 commented 4 years ago

@mateoKutnjak So there are two kinds of datasets in DenseFusion, YCB and LineMOD; do I need to make the LabelFusion data look like one of them?

mateoKutnjak commented 4 years ago

You have to convert the data from the LabelFusion format to the DenseFusion format. I changed mine to look like the LineMOD dataset: that means recreating the directory hierarchy from the DenseFusion repository and converting the depth images to match the LineMOD depth images. Also change your mask label format if needed.

I suggest trying DenseFusion with the LineMOD dataset first. Inspect its data formats and write a Python script to convert from the LabelFusion format to the LineMOD format.
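
A rough sketch of what such a conversion script could look like. The target layout (rgb/, depth/, mask/ with zero-padded sequential filenames) follows the LineMOD-style data used by DenseFusion, but the LabelFusion filename patterns (*_rgb.png, *_depth.png, *_labels.png) and the paths below are assumptions that may differ from your setup:

```python
import glob
import os
import shutil

# Assumed LabelFusion output layout: <scene>/images/<prefix>_rgb.png, _depth.png, _labels.png.
# Assumed target: a LineMOD-style object folder with rgb/, depth/ and mask/ subdirectories.
SRC = "logs/my_scene/images"
DST = "Linemod_preprocessed/data/01"

for sub in ("rgb", "depth", "mask"):
    os.makedirs(os.path.join(DST, sub), exist_ok=True)

rgb_files = sorted(glob.glob(os.path.join(SRC, "*_rgb.png")))
for i, rgb_path in enumerate(rgb_files):
    prefix = rgb_path[:-len("_rgb.png")]
    shutil.copy(rgb_path, os.path.join(DST, "rgb", f"{i:04d}.png"))
    shutil.copy(prefix + "_depth.png", os.path.join(DST, "depth", f"{i:04d}.png"))
    shutil.copy(prefix + "_labels.png", os.path.join(DST, "mask", f"{i:04d}.png"))
```

If the depth units need rescaling, the plain copy of the depth image can be replaced by a conversion like the one sketched a few comments above.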

tiexuedanxin commented 4 years ago

Sorry to bother you. I have made some datasets with the help of LabelFusion, but the datasets I made are inaccurate and I can't find the reason. Could you give me some advice on how to improve the accuracy? Thanks in advance.

KatharinaSchmidt commented 3 years ago

@mateoKutnjak Can you provide the code for training DenseFusion with NDDS? I also generated a synthetic dataset with NDDS and now want to train a pose estimation network with RGB and depth images. It would be great if you could make the code public in an open repository.