Archimonde666 opened this issue 2 years ago
ATM all the positioning is done in the 2D image plane, without any 3D positioning in the real world. For instance, the measure m_distance is only the 2D distance in the image between the bottom midpoint of the camera view and the ArUco code; it is expressed in pixels and is not at all an estimate of the real distance.
The "size" of the ARUCO-CODE relative to the target windows is defined in the "offset" parameter and is expressed as fraction of the "seen width" of the ARUCO-CODE in pixels
With the real-world size of the ArUco code measured, and the pixel size of the detected ArUco code reported by the software, we are able to compute: 1) the distance between the drone and the ArUco code; 2) the angles of the path (bearing) between the drone and the ArUco code; 3) the orientation of the ArUco code itself (from the squareness of the pixel group), relative to the drone's orientation.
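All three quantities can be recovered at once with OpenCV's PnP solver, assuming a calibrated camera. A minimal sketch (`MARKER_SIZE_M`, the corner ordering, and `camera_matrix`/`dist_coeffs` are assumptions; the calibration would come from `cv2.calibrateCamera`):

```python
import math
import numpy as np
import cv2

MARKER_SIZE_M = 0.15  # real-world edge length of the ArUco code in meters (assumed)

def estimate_marker_pose(corners, camera_matrix, dist_coeffs):
    """corners: the 4x2 pixel corners of one detected marker.
    Returns 1) distance, 2) path angles, 3) marker orientation."""
    half = MARKER_SIZE_M / 2.0
    # marker corners in the marker's own frame (TL, TR, BR, BL order assumed)
    obj_points = np.array([[-half,  half, 0], [ half,  half, 0],
                           [ half, -half, 0], [-half, -half, 0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_points, corners.astype(np.float32),
                                  camera_matrix, dist_coeffs)
    t = tvec.ravel()
    distance = float(np.linalg.norm(t))                     # 1) real distance in meters
    azimuth = math.atan2(t[0], t[2])                        # 2) path angles in the
    elevation = math.atan2(-t[1], math.hypot(t[0], t[2]))   #    camera frame
    rot_matrix, _ = cv2.Rodrigues(rvec)                     # 3) marker orientation
    return distance, (azimuth, elevation), rot_matrix
```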
With a static drone: we are able to position the ArUco code in 3D space from the intersection of the sphere (given by the distance) and the infinite line given by the angles of the path.
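That intersection reduces to scaling the unit bearing direction by the measured distance. A sketch using the angle conventions of the snippet above (x right, y down, z forward -- assumed):

```python
import math
import numpy as np

def marker_position_camera_frame(distance, azimuth, elevation):
    """Sphere/ray intersection: the marker's 3D position in the camera
    frame is just distance * unit_direction(azimuth, elevation)."""
    direction = np.array([
        math.cos(elevation) * math.sin(azimuth),   # x: right
        -math.sin(elevation),                      # y: down (image convention)
        math.cos(elevation) * math.cos(azimuth),   # z: forward
    ])
    return distance * direction
```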
With the initial position of the drone known (the origin is defined there), and with commands issued in position form [e.g. move(x, y, z)], we are able to compute and store the position of the drone itself along the motion.
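A dead-reckoning sketch of that bookkeeping (purely illustrative; it trusts the commands exactly and ignores drift, and `DronePoseTracker` is a hypothetical name):

```python
import numpy as np

class DronePoseTracker:
    """Accumulates commanded displacements from the known origin."""
    def __init__(self):
        self.position = np.zeros(3)           # origin defined at the start position
        self.history = [self.position.copy()]

    def move(self, dx, dy, dz):
        # record a move(x, y, z) command and update the estimated pose
        self.position = self.position + np.array([dx, dy, dz], dtype=float)
        self.history.append(self.position.copy())
```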
=> at every position of the drone we can compute the relative position of the ArUco code (with its orientation: both the path and the code's own orientation) => we can precisely retrieve the position of the gate => we can compute an optimal track and follow it.
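Chaining the drone's world pose with the camera-frame observation gives the marker in the world frame. A one-line sketch (`drone_rotation` is assumed to be the 3x3 camera-to-world rotation, with any fixed camera mount folded in):

```python
import numpy as np

def marker_world_position(drone_position, drone_rotation, marker_in_camera):
    """world = drone position + R_world_from_camera @ marker_in_camera"""
    return drone_position + drone_rotation @ marker_in_camera
```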
To achieve the best possible precision, these 3 parameters should be stored with a weight each time we detect an ArUco code, the weight representing the confidence we have in that measurement. This weight is determined by evaluating the error of the measure on the fly. => We need to calculate and evaluate a complete 3D grid of the error made for the 3 parameters independently. (A weighting heuristic is sketched after the examples.)

Eg 1: a far ArUco code will have a quite imprecise distance value, since its size is determined with an uncertainty of at least ±1 pixel; it is easy to see how this leads to errors when the code is perceived only a few pixels wide. For the same reason, the orientation of the code itself will be imprecise as well. The measured angle of the path between the drone and the ArUco code, on the other hand, is very good at that point.

Eg 2: a closer ArUco code has the opposite behavior: the path angle will be imprecise (a small error in the drone's position in the virtual space has a large impact on the computed path angle), while the orientation and distance measures will be quite precise.

Eg 3: be careful when the ArUco code is at the very edge of the picture: video captured on drones often has a strong fisheye problem at the edges, resulting in more error in the measured orientation of the code itself.
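A hypothetical weighting heuristic following these three examples (all thresholds are illustrative assumptions, not tuned values):

```python
def measurement_weights(marker_px_width, marker_center_x, frame_width):
    """Confidence weights in [0, 1] for the 3 parameters of one detection."""
    size_ratio = marker_px_width / frame_width
    # Eg 1: the +-1 px size uncertainty dominates when the marker is small,
    # hurting distance and orientation but not the path angle
    distance_w = min(1.0, marker_px_width / 40.0)
    orientation_w = min(1.0, marker_px_width / 60.0)
    # Eg 2: the path angle degrades as the marker gets close (large in frame)
    angle_w = max(0.1, 1.0 - size_ratio)
    # Eg 3: penalize orientation near the borders, where fisheye distortion is worst
    edge = abs(marker_center_x - frame_width / 2.0) / (frame_width / 2.0)
    if edge > 0.8:
        orientation_w *= 0.5
    return {"distance": distance_w, "angle": angle_w, "orientation": orientation_w}
```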
All of these errors (the examples above are not exhaustive) have to be evaluated so that we know how much confidence we have in the measure/calculation we're making at time t.
=> afterwards we can create a super smart function which, with all these data together as input, returns the perceived position of each point of the prescribed path relative to the perceived position of the drone at time t.
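One possible shape for that function's core is a confidence-weighted average of all stored observations: a naive stand-in for a proper filter (e.g. an EKF), with `observations` as a hypothetical list of (world_position, weight) pairs:

```python
import numpy as np

def fuse_marker_position(observations):
    """observations: list of (world_position, weight) pairs accumulated
    along the flight. Returns the fused estimate of the gate position."""
    positions = np.array([p for p, _ in observations])
    weights = np.array([w for _, w in observations])
    return (positions * weights[:, None]).sum(axis=0) / weights.sum()
```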