xyjbaal / FPCC

MIT License

Collect simulation dataset #2

Closed waiyc closed 2 years ago

waiyc commented 2 years ago

Hi,

In the paper, you mention that the training data for the XA dataset is collected in simulation. Could you provide more information on how you collect the point cloud data with segmented instance indices? (Is the data generation source code available on GitHub?)

Thanks

xyjbaal commented 2 years ago

I generated the simulation scenes with the Bullet physics engine (C++). The source is available at https://github.com/naoya-chiba/visibleBinSceneMaker. C++ is faster, more stable, and more accurate, but Python is easier to work with.

I am considering uploading simulation code (a Python version), but it will take some time. For the paper I used the C++ version from the link above.

After the simulation of each scene finishes, you get a 4x4 pose matrix for each object.

  1. Multiply the model point cloud of the object by the first pose matrix.
  2. Attach instance index 1 to each transformed point.
  3. Multiply the model point cloud by the second pose matrix.
  4. Attach instance index 2 to the new points, and so on.

In this way, a training dataset can be generated. PS: you can also calculate the center score in this process.

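The loop above can be sketched in Python. This is a minimal illustration assuming homogeneous 4x4 poses and an Nx3 model cloud; the function and variable names are mine, not from the released code:

```python
import numpy as np

def make_labeled_scene(model_points, poses):
    """Transform one model point cloud by each object pose and
    attach a per-point instance index (1, 2, ...)."""
    clouds = []
    for instance_id, pose in enumerate(poses, start=1):
        # Homogeneous transform: rotate/translate the Nx3 model cloud.
        homo = np.hstack([model_points, np.ones((len(model_points), 1))])
        transformed = (pose @ homo.T).T[:, :3]
        # Append the instance index as a 4th column on every point.
        labels = np.full((len(transformed), 1), float(instance_id))
        clouds.append(np.hstack([transformed, labels]))
    # Shape: (num_objects * N, 4) -> x, y, z, instance index.
    return np.vstack(clouds)
```

A center score column could be appended in the same loop, since the object's geometric center is known in model coordinates.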
xyjbaal commented 2 years ago

If you like, I can send you an imperfect simulation code (Python version). After you modify it, please upload it to GitHub and send me the link.

waiyc commented 2 years ago

Sure, I can try to improve the Python version of the simulation.

xyjbaal commented 2 years ago

Thank you for your help. I uploaded the code to

https://drive.google.com/file/d/1pEo1ILlPiefHBRHgALo1NMb0updPzjtB/view?usp=sharing


  1. Use creat_scene.py to generate a scene; you get the rotation and translation of each object.
  2. Use creat_matrix_by_RT.py to get the 4x4 matrices.
  3. Use make_scene_by_matrix.py to reconstruct the scene with instance labels and center scores.
  4. Use remove_hiden_point.py to remove the points that the camera can't capture.
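Step 2 amounts to assembling a homogeneous transform from the rotation and translation produced in step 1. A minimal sketch of that operation (my own code, independent of creat_matrix_by_RT.py, which may differ in detail):

```python
import numpy as np

def matrix_from_rt(rotation, translation):
    """Assemble a 4x4 homogeneous pose matrix from a 3x3 rotation
    matrix and a 3-vector translation."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    return pose
```

Applying the resulting matrix to a point in homogeneous coordinates rotates it and then translates it, which is exactly what step 3 needs.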

These files were written for students practicing with the Bullet physics engine, so the code is a little rough. If you could integrate them and rename the functions and parameters to make them easier to read, I would be very grateful.
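The visibility filtering in step 4 could be approximated with a coarse depth buffer: project points onto a pixel grid and keep only the point nearest the camera in each cell. This is purely illustrative; the released remove_hiden_point.py may use a different method:

```python
import numpy as np

def remove_hidden_points(points, grid=64):
    """Keep only the point closest to the camera in each cell of a
    coarse depth buffer. Assumes the camera looks down +z, so a
    smaller z means closer to the camera."""
    xy = points[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    # Map each point's (x, y) into an integer grid cell.
    cells = np.minimum(((xy - lo) / span * grid).astype(int), grid - 1)
    keys = cells[:, 0] * grid + cells[:, 1]
    nearest = {}
    for i, k in enumerate(keys):
        if k not in nearest or points[i, 2] < points[nearest[k], 2]:
            nearest[k] = i
    return points[sorted(nearest.values())]
```

The grid resolution trades accuracy for speed; a real camera model would also account for perspective projection.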

waiyc commented 2 years ago

Thanks. I will give you an update once I complete the development.

waiyc commented 2 years ago

Hi,

For generating the center score, may I know how to define max_r correctly for each item? It is a bit confusing to me, because in the FPCC algorithm max_r is set to 0.08 for the gear and 0.1 for the ring, but in the data generation code the maximum distance is a different value.

Is it based on the model's maximum length? If so, the maximum length of the gear shaft model would be around 0.4 based on the model size?

xyjbaal commented 2 years ago

For calculating the center score, max_r is about half of the maximum length of the object (max_r = the maximum distance from the geometric center to the farthest point of the object).

You are right: the maximum length of the gear shaft model is around 0.4, so max_r should be 0.2. But because the gear shaft is a long, thin strip, I used a relatively small max_r in our code. Setting max_r to 0.08 (for the gear), you can get better results than those reported in the paper.
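The center score described here can be sketched as a distance-to-center score that falls off linearly and is clipped at max_r. This is my reading of the discussion, not necessarily the exact formula used in FPCC:

```python
import numpy as np

def center_scores(points, center, max_r):
    """Score each point by its distance to the object's geometric
    center: 1.0 at the center, falling linearly to 0.0 at max_r
    and clipped to 0.0 beyond it."""
    d = np.linalg.norm(points - center, axis=1)
    return np.clip(1.0 - d / max_r, 0.0, 1.0)
```

With a smaller max_r (e.g. 0.08 for the gear shaft instead of 0.2), the score decays faster, so points far from the center all saturate at 0.0 and the non-zero scores concentrate around the center.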

For better segmentation results, you can adjust max_r according to your parts to make the distribution of the center score more uniform, which makes it easier for the network to learn.

Sorry for confusing you.

waiyc commented 2 years ago

Thank you for the explanation. I will use 0.08 for max_r in the data generation for now.

waiyc commented 2 years ago

Hi @xyjbaal ,

I have uploaded the dataset generation environment to https://github.com/waiyc/Bin-Picking-Dataset-Generation. Please take a look, and thank you for providing your initial code.

xyjbaal commented 2 years ago

Thank you very much. I browsed your page; you are so great. LOL. I added the link you provided on the FPCC project page.

Wish you great success in your studies.

waiyc commented 2 years ago

Thank you. Looking forward to your next paper :) All the best! :+1: