Open TheodoroCardoso opened 1 year ago
1. Can you share a sample point cloud?
2. This implementation of PPF does not support scaled models; it's recommended that the model and the scene have the same size. PPF can tolerate size differences smaller than the sample step.
3. For a "brick"-like object such as a cube, it's better to extract edge points.
4. I have updated my profile; you can find my email there.
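Since the tolerable size difference is bounded by the sample step, it helps to know what the absolute step works out to. A minimal sketch, assuming the step is specified relative to the model's bounding-box diagonal (the helper name and the 4% default are illustrative, not the library's API):

```python
import numpy as np

def sample_step(points, sampling_distance_rel=0.04):
    """Absolute sampling step as a fraction of the bounding-box diagonal.
    Both the helper and `sampling_distance_rel` are hypothetical names."""
    extent = points.max(axis=0) - points.min(axis=0)
    diameter = float(np.linalg.norm(extent))
    return sampling_distance_rel * diameter

# Example: a unit cube sampled at 4% of its diagonal
cube = np.array([[0, 0, 0], [1, 1, 1]], dtype=float)
print(sample_step(cube))  # 0.04 * sqrt(3) ~= 0.0693
```

Under this reading, a 10% size mismatch is robust only if the relative sampling distance is larger than roughly 10% of the diameter, which also coarsens the match.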
Given that the scene contains a single brick and maybe some cropping leftovers of its neighboring bricks, what would you say is the minimum sample step in % of object size for robust results?
Would you mind pointing out where to change the sample step and increase edge points?
Thanks for providing your email, I'll reach out and share samples.
@SurfaceMan Thanks for your excellent work, it helps me a lot. Although it is robust, I got a wrong result that did not contain a single correct pose. Testing with some data, I found that differences such as model size or scene size have a strong influence on the result. The scene has a lot of noise; when I extract a partial region to test, I get a different result. I hope you can provide some suggestions on parameter tuning. I will email my data to you; looking forward to your reply.
1. Keep the model's unique features.
2. Set the scene view point so normals are computed with the right direction.
output message:
```
model remove nan cost(ms): 2
model kdtree cost(ms): 285
model sample1 cost(ms): 10
model sample2 cost(ms): 13
model point size:1727909
model sample step:0.00266758
model sampled point size:1026
model resampled step:0.00106703
model resampled point size:6283
model normalize normal cost(ms): 1
model ppf cost(ms): 622
train model cost(ms): 976
scene remove nan cost(ms): 1
scene box:-0.272075 -0.161827 0.442666<--->0.199008 0.30357 1.57133
scene kdtree cost(ms): 173
scene sample1 cost(ms): 16
scene compute normal cost(ms): 44
scene sample2 cost(ms): 3
scene sample step:0.00266758
scene sampled point size:15239
scene keypoint sample step:0.00843562
scene keypoint point size:1304
scene ppf cost(ms): 371
after cluster has items: 19
icp prepare cost(ms): 58
sparsePoseRefinement score:0.432749
densePoseRefinement score:0.441031
icp cost(ms): 14
after icp has items: 1
match scene cost(ms): 690
0.966385 0.242857 -0.0843864 -0.109638
0.180617 -0.874876 -0.449411 -0.00240532
-0.18297 0.419062 -0.889331 0.502913
0 0 0 1
0.441031
```
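The trailing 17 numbers of the output above look like a row-major 4x4 pose matrix followed by the match score (an assumption, suggested by the last value equaling the densePoseRefinement score; check the library's source to confirm). A minimal sketch to parse them:

```python
import numpy as np

# Assumed layout: 16 row-major matrix entries, then the score.
tail = ("0.966385 0.242857 -0.0843864 -0.109638 "
        "0.180617 -0.874876 -0.449411 -0.00240532 "
        "-0.18297 0.419062 -0.889331 0.502913 "
        "0 0 0 1 0.441031")
values = np.array([float(v) for v in tail.split()])
pose, score = values[:16].reshape(4, 4), values[16]
print(pose[:3, 3])  # translation part: [-0.109638, -0.00240532, 0.502913]
print(score)        # 0.441031, matching the densePoseRefinement score
```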
@SurfaceMan Thanks for your reply. I tried to keep the model unique, but I still get a wrong result. More details have been emailed to you.
@XiaXingLuo
1. model --- means the template (the search target), not a general 3D model. I mean cut the planar part of the model (template).
2. right direction of normal --- means the normals of the model (template) and the scene have the same direction, all pointing outside or all pointing inside.
3. viewpoint --- it is usually the camera pose. It's useful for computing the direction of a point's normal: let A be a point in the scene and O the view point. We can compute the normal N of A from its neighbors (PCA), but which direction is true, N or -N? We decide by letting V be the vector from A to O: if the dot product of V with N is greater than 0 (same direction) we use N, else -N.
4. sampleDistanceRel --- sample step relative to the diameter of the model (template); a smaller value keeps more detail but also costs more time to compute.
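Step 3 above can be sketched in a few lines (a hypothetical helper illustrating the flip test, not the library's code):

```python
import numpy as np

def orient_normal(point, normal, viewpoint):
    """Flip a PCA-estimated normal so it faces the view point O,
    as described in step 3: keep N if (O - A) . N > 0, else use -N."""
    v = viewpoint - point  # vector from the point A toward the view point O
    return normal if np.dot(v, normal) > 0 else -normal

# A surface point at the origin with its camera directly above at z = 1:
n = orient_normal(np.zeros(3), np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]))
print(n)  # flipped to [0, 0, 1], pointing toward the camera
```

Applying the same rule to every point makes all normals consistently face the camera, which is what step 2 asks for.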
Thanks, it seems I should concentrate more on the view point and samplingDistanceRel while keeping the model's unique features. It does work, you are so kind. Have a good day!
Hi @SurfaceMan,
Your work is great. I'm trying to run it on a fairly simple brick-like object.
The scene contains the point cloud of a single real "brick" object. There are multiple scenes with the brick captured from different angles. The real brick can be up to 10% larger than the brick model I'm using for matching.
Unfortunately, I don't get any matches. Would you mind pointing me in the right direction, please? Which parameters should I pay special attention to?
Do you believe that it's failing because the real objects are slightly different in size and shape from the perfect brick model used?
Finally, would you mind sharing your email or emailing me at cardoso.theodoro@gmail.com? We're looking for consultancy in PPF...