Hi @aaguiar96 and @eupedrosa ,
help me out here: exactly which information do we need to have for each pattern?
First question is: we should have the same parameters regardless of whether we are using a chessboard or a charuco, right?
From what I see, we use the following:
Am I missing something?
Given the above, I propose that calibrate generates, for the calibration pattern, a dictionary like this:
{
    # All coordinates are in the pattern's local coordinate system.
    # Since z=0 for all points, this coordinate is omitted.
    # Corners
    'corner': [{'idx': 0, 'x': 3, 'y': 4}, ..., {'idx': ncorners, 'x': 30, 'y': 40}],
    # Physical outer boundaries of the calibration pattern
    'frame': {
        'top': [{'x': 3, 'y': 4}, ..., {'x': 30, 'y': 40}],
        'right': [{'x': 3, 'y': 4}, ..., {'x': 30, 'y': 40}],
        'bottom': [{'x': 3, 'y': 4}, ..., {'x': 30, 'y': 40}],
        'left': [{'x': 3, 'y': 4}, ..., {'x': 30, 'y': 40}]
    },
    # Transitions from black to white squares
    'transition': {
        'vertical': [{'x': 3, 'y': 4}, ..., {'x': 30, 'y': 40}],
        'horizontal': [{'x': 3, 'y': 4}, ..., {'x': 30, 'y': 40}]
    }
    # Should we put here the physical corners?
}
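For illustration, a minimal sketch (a hypothetical helper, not existing code) of how the 'corner' list above could be generated for a chessboard with nx-by-ny inner corners and square size s:

```python
def make_corners(nx, ny, s):
    """Inner corners of an nx-by-ny chessboard, square size s,
    in the pattern's local frame (z = 0, omitted as above)."""
    return [{'idx': j * nx + i, 'x': i * s, 'y': j * s}
            for j in range(ny) for i in range(nx)]
```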
The goal is to standardize what we have (generate) for each calibration pattern. Objective functions would use (some of) this information, and the visualization must be able to plot it.
Any comments? I will try to implement this tonight.
How do we make this robust to partial detections? (@eupedrosa, how is this handled?)
Each corner has an ID, from 0 to N-1, where N is the number of corners. This is the same as an index number of an array. The detector gives us an array of IDs, and that same ID is used as a selector for the grid of corners, for example:
import numpy as np

grid = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7])  # example of a corner grid (1D)
ids = [0, 3, 5]           # IDs returned by the detector
useful_grid = grid[ids]   # select only the detected corners
print(useful_grid)
It will output `[0.  0.3 0.5]`.
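The same selector idea extends to a 2D grid of corner coordinates, which is what makes partial detections harmless: a partial detection just yields fewer IDs, and the indexing keeps rows aligned. A small sketch:

```python
import numpy as np

grid = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0], [0.0, 0.1]])  # (N, 2) corner grid
ids = [0, 3]       # only two corners detected in this collection
print(grid[ids])   # -> [[0.  0. ]
                   #     [0.  0.1]]
```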
> Each corner has an ID, from 0 to N-1, where N is the number of corners. This is the same as an index number of an array. The detector gives us an array of IDs, and that same ID is used as a selector for the grid of corners.
Great! One less problem.
Your proposal for the calibration pattern is very verbose... Why so many "x"s and "y"s? I know that you like dictionaries all the way... but we will need a lot of code to do conversions from this format to something usable by numpy.
What is the transition needed for? The lidars?
Are we still pursuing a discrete definition of the pattern border?
> # Physical outer boundaries of the calibration pattern
> 'frame': {
>     'top': [{'x': 3, 'y': 4}, ..., {'x': 30, 'y': 40}],
>     'right': [{'x': 3, 'y': 4}, ..., {'x': 30, 'y': 40}],
>     'bottom': [{'x': 3, 'y': 4}, ..., {'x': 30, 'y': 40}],
>     'left': [{'x': 3, 'y': 4}, ..., {'x': 30, 'y': 40}]
> }
Again, using dictionaries for defining x's and y's adds extra steps when we need to use this information.
Hi @eupedrosa ,
I think the conversion is simple using list comprehension:
x = [item['x'] for item in d['frame']['top']]
y = [item['y'] for item in d['frame']['top']]
and this philosophy was used elsewhere in the dataset, so it makes sense for consistency.
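If a plain numpy array is handier, the same conversion collapses to one line (a sketch; `d` here is a minimal stand-in for the dataset dictionary proposed above):

```python
import numpy as np

d = {'frame': {'top': [{'x': 3, 'y': 4}, {'x': 30, 'y': 40}]}}  # minimal example
top = np.array([[item['x'], item['y']] for item in d['frame']['top']])  # shape (N, 2)
x, y = top[:, 0], top[:, 1]
```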
Concerning the storage of discrete values, it was just to maintain backward compatibility. The 2D lidar objective function uses these.
We would have to adapt the 2D lidar objective function to use a point-to-line-segment distance, and run all the necessary tests to check that it is working, not just assume it is. Only then could we move on to representing just the two points per line segment.
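For reference, a sketch of the point-to-line-segment distance the adapted objective function would need (a hypothetical helper, not existing code):

```python
import numpy as np

def point_to_segment(p, a, b):
    """Distance from 2D point p to the segment with endpoints a and b."""
    p, a, b = np.asarray(p, float), np.asarray(a, float), np.asarray(b, float)
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)  # clamp projection to the segment
    return np.linalg.norm(p - (a + t * ab))
```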
Another option is to keep the sampled points (this is not so much space), and also keep the two points per line segment. It would be something like this:
# Physical outer boundaries of the calibration pattern
'frame': {
    'top': {
        'pi': {'x': 4, 'y': 7},
        'pf': {'x': 44, 'y': 37},
        'sampled': [{'x': 3, 'y': 4}, ..., {'x': 30, 'y': 40}]
    },
    (...)
}
Actually, this last one is my preferred solution, since we can use whatever approach we want in the objective function, or even a combination of both. What do you think?
For the frame, I think we only need to save 4 points, something like:
'frame': {
    'corner': [ ... ],
    'top': { sampled_here },
    'left': { sampled_here },
}
But there is something I'm not understanding. Is this a data structure to represent a pattern? And will it be used to draw the pattern?
Yes, it is used to represent the pattern.
And it will be used to draw the pattern also.
During optimization, it is important to have visual feedback of what the objective function is doing.
> What is the transition needed for? The lidars?
Yes, we are trying to use more information for the 2D lidars...
Hi @aaguiar96 ,
yesterday night I started but could not finish this. I will continue today and let you know once it's done.
> yesterday night I started but could not finish this. I will continue today and let you know once it's done.
Ok @miguelriemoliveira
I'll close the limit points calculation today. I'll compute the circle radius in a dynamic manner, using the largest distance between the 2D points. :)
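If I read the idea correctly, something along these lines (taking the radius as half the largest pairwise distance is my assumption):

```python
import numpy as np
from scipy.spatial.distance import pdist

def dynamic_radius(points_2d):
    """Half the largest pairwise distance of an (N, 2) cluster of lidar points."""
    return pdist(points_2d).max() / 2.0
```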
OK, so I got the chessboard data creation and visualization up and running ...
I don't draw any mesh because we don't have a mesh file for the chessboard... any volunteers? :-) Someone who is very proficient with Blender?
Now I will expand for the charuco.
@aaguiar96, I will let you know once it's working, but I am not sure I will do it now since I have a meeting at 19h.
> I'll close the limit points calculation today. I'll compute the circle radius in a dynamic manner, using the largest distance between the 2D points. :)
good idea!
For the charucos the reference system is a bit different (from OpenCV):
Hum ... detection from our code does this
@eupedrosa , @aaguiar96 where is the origin of your mesh?
> Hum ... detection from our code does this
That means that the size of the mesh is wrong, right?...
I think so, but before that, we need to agree on an origin for the mesh ...
> For the charucos the reference system is a bit different (from OpenCV):
> Hum ... detection from our code does this
You are looking at two different things. The mesh is correct. Read the manual!
The function drawAxis draws an axis considering the full pattern. However, the detection is only the corners, and that is what we are using as the origin.
> The function drawAxis draws an axis considering the full pattern. However, the detection is only the corners, and that is what we are using as the origin.
Still, the drawn corners do not match with the mesh corners.
> Still, the drawn corners do not match with the mesh corners.
What image are you referring to, @aaguiar96? I see the corners drawn as expected...
Do not forget that RViz centers the cubes in the coordinates that you provide.
> Do not forget that RViz centers the cubes in the coordinates that you provide.
Right, I was missing that! :)
If the objective is to overlap the squares, you have to add half the size of a square to each coordinate.
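In other words (the square size s is an assumed example value):

```python
s = 0.10                        # square side in meters (assumed)
corner_x, corner_y = 0.3, 0.4   # a pattern corner (example values)
# RViz CUBE markers are centered on the given pose, so shift by half a square:
center_x = corner_x + s / 2.0
center_y = corner_y + s / 2.0
```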
Hi again,
sorry, I don't get it. The points are drawn after computing the corner coordinates (I may have done something wrong there, but I don't think so).
> You are looking at two different things. The mesh is correct. Read the manual!
@eupedrosa, which manual do you refer to? Also, the sphere centers and the corners are noticeably not in the same location. In addition, not even the origin is centered on a corner (see where the z axis leaves the board). See the image:
Thus, either:
1) the corner coordinates I compute are wrong, or 2) the mesh is wrong.
How can you be so sure the mesh is correct? I don't get it.
Here's my code to draw the corners:
and the code to visualize:
Can you detect something wrong?
My point is: these points that I draw are computed from the pattern description. I should not be adjusting them to fit the mesh. Either the points are wrongly calculated or the mesh is not ok.
And we have to find which. You agree?
> If the objective is to overlap the squares, you have to add half the size of a square to each coordinate.
Sorry again, I also don't understand this. What do you mean by "overlap the squares"? Center them on the chessboard corners?
Also in MeshLab:
@eupedrosa , do you still say the mesh is correct?
and by the way @eupedrosa , your charuco_5x5.dae has the origin in the proper place
It is a bit late now, so I don't expect to hear from you until morning. This is my opinion so far: 1) right now I have the drawing of the pattern's points for the charuco functional.
Tell me what you think.
Is that the mesh created by @aaguiar96? I was thinking it was the mesh we used for the hand-eye... my fault. If that's the case, then yeah, the texture in the mesh can be wrong, or the origin of the mesh can also be wrong.
Sorry, @aaguiar96 and @miguelriemoliveira.
I was defending the mesh because in the calibrations that we have, in hand-eye, the mesh was correct.
Nonetheless, it looks like you are investing a lot of time in this..
The mesh is great for visual fidelity, but it is a pain in the ass to get it right. And considering that not everybody uses the same calibration pattern, I am afraid we are only adding complexity to our work. Maybe we should just use RViz markers and generalize to everybody.
Hi @eupedrosa ,
Yes, this is @aaguiar96 's mesh.
ok, so you agree it is not correct. Great. @aaguiar96 , if you also agree with the reasoning above, that means you have to correct the mesh. From my side, I see two things wrong:
> Nonetheless, it looks like you are investing a lot of time in this..
Of course; this is a necessary step to get the velodyne calibration up and running.
Hi @aaguiar96 ,
something else: can you verify if the green and red dimensions are equal? Because we assume they are in the code...
Ok @miguelriemoliveira
What a pro in Blender I am, ahah! :-) Tomorrow morning I'll finish this up.
Hi @aaguiar96 ,
I may have solved your problem. In MeshLab I can move the object, and I was able to get this:
This would have been enough if the size of the squares were correct, but it is not.
Notice how the spheres are increasingly further away from the corners as we move away from the origin.
Nice, thanks @miguelriemoliveira
So, I'll check all the sizes and update the mesh. I let you know when I have some news.
Hi @aaguiar96 ,
No problem. Don't forget, since you are building a new mesh, you might as well check this part while you are at it (see the post above):
> something else: can you verify if the green and red dimensions are equal? Because we assume they are in the code...
The dimensions:
The borders have different sizes...
@eupedrosa in your case was the size of the margins equal?
The draw function (link) from cv::charuco only accepts one argument for the margin, which means that it considers both margins to be equal, right?
It actually computes the image size like this:
imageSize.width = squaresX * squareLength + 2 * margins;
imageSize.height = squaresY * squareLength + 2 * margins;
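For reference, a sketch of that call from Python (assuming the pre-4.7 cv2.aruco API; board layout and sizes are example values): the single marginSize is applied to all four borders, so draw() alone cannot produce different physical margins.

```python
import cv2

dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_5X5_100)
board = cv2.aruco.CharucoBoard_create(5, 5, 0.10, 0.06, dictionary)  # sizes in meters
square_px, margin_px = 200, 50
size = (5 * square_px + 2 * margin_px, 5 * square_px + 2 * margin_px)
img = board.draw(size, marginSize=margin_px)  # same margin on every border
cv2.imwrite('charuco.png', img)
```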
Yes, they are equal.
You can always change the source code to your needs.
I'll try to hardcode it then...
@miguelriemoliveira, this is only for visualization purposes, right? If so, can we first try to set both margins to 0.035 m (5 mm of error for each) and see if the error is visible in RViz? Otherwise, I'll have to change the OpenCV source code...
I can also try to process the image before generating it, and try to set the margins to the correct size.
Hi @aaguiar96 ,
It is not for visualization purposes. The margin is used to generate the frame points (the old limit points) which are used by the objective function.
The mesh that is drawn is, as you say, just for visualization. But if we can make it the correct dimensions it would be better.
The charuco detection requires a margin, but I am almost sure that the number you give the function is a "minimum border size". As such, you can just use the minimum of the x and y coordinate.
I would try searching hard for other alternatives before changing the OpenCV code. Also, as said above, in this case I don't think you need it. The detection is working fine, right?
Hum... Ok
> The charuco detection requires a margin, but I am almost sure that the number you give the function is a "minimum border size". As such, you can just use the minimum of the x and y coordinate.
Setting the margin to the minimum size, this is the image I obtain:
The margins are equal...
I am sorry, I did not understand that the function you talked about was to generate an image. You could do some math and add a white band of the correct size on the left and right (or top and bottom, depending on which border is the smallest). Then, you would use this wider image as the texture...
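A sketch of that padding with OpenCV (the pad width is an example value; it would be computed from the real border sizes):

```python
import cv2

img = cv2.imread('charuco.png', cv2.IMREAD_GRAYSCALE)
extra = 30  # white pixels to add on left and right (example value)
img = cv2.copyMakeBorder(img, 0, 0, extra, extra,
                         cv2.BORDER_CONSTANT, value=255)  # pad with white
cv2.imwrite('charuco_padded.png', img)
```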
> I am sorry, I did not understand that the function you talked about was to generate an image. You could do some math and add a white band of the correct size on the left and right (or top and bottom, depending on which border is the smallest). Then, you would use this wider image as the texture...
Yes, I thought of the same solution! I'm on it :)
> The dimensions:
> The borders have different sizes...
Hi @aaguiar96 ,
thanks for this. To solve it I have created #172 and will work on it.
No problem @miguelriemoliveira
Problem solved. I'll remake the mesh now.
I think this is it:
I'll commit the mesh. @miguelriemoliveira can you try it out?
That is not it... The origin should be the inside corner...
Hello, I moved the origin to be in the inside corner, as @eupedrosa said.
It works:
Thanks for the help guys. This issue is done! I will erase all the other meshes from GitHub and keep just this last one with the original name.
Great @miguelriemoliveira
So, now I can finish the cost function, right?
Yes, that's the next step. Do you need to talk about it? You know where to go?
Now I have to create a new set of residuals, computed as the pairwise differences between the closest limit points of the pattern and the cloud, right?
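For concreteness, a sketch of that residual (hypothetical names; nearest neighbours via a k-d tree):

```python
import numpy as np
from scipy.spatial import cKDTree

def frame_residuals(frame_pts, lidar_pts):
    """frame_pts: (N, 2) pattern limit points; lidar_pts: (M, 2) lidar points,
    both expressed in the same coordinate frame."""
    tree = cKDTree(frame_pts)
    dists, _ = tree.query(lidar_pts)  # distance to the closest limit point
    return dists                      # one residual per lidar point
```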
Must find a way to draw both correctly.
Also, line and point markers should be added overlaying the textured mesh, since those are more useful for assessing the objective function.