kripper opened 1 year ago
Vision Landing users/devs:
Please check this: https://discuss.ardupilot.org/t/precision-landing-with-multiple-apriltag/89911
Hi @kripper, thank you.
Before proposing a project merge strategy, I would like to see the code of the "frontend" where you process the marker data sent via IPC from the backend "capture.c".
I did not do much in the frontend; it just packs the data into a MAVLink packet. It is located in https://github.com/chobitsfan/mavlink-udp-proxy/tree/new_main, in the latest commit. I have since moved to https://github.com/chobitsfan/libcamera-apps/tree/pr_apriltag instead, because Raspberry Pi moved from V4L2 to libcamera.
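For reference, the packing boils down to something like this (a minimal sketch using the MAVLink C headers, common dialect; the frame and field choices are illustrative, not the exact mavlink-udp-proxy code):

#include <common/mavlink.h>

// Pack a camera/body-frame landing target position (metres) for the FC.
uint16_t pack_landing_target(uint8_t buf[MAVLINK_MAX_PACKET_LEN],
                             uint64_t time_usec, float x, float y, float z)
{
    mavlink_message_t msg;
    const float q[4] = {1, 0, 0, 0};    // orientation quaternion (unused here)
    mavlink_msg_landing_target_pack(
        1, MAV_COMP_ID_ONBOARD_COMPUTER, &msg,
        time_usec,
        0,                              // target_num
        MAV_FRAME_BODY_FRD,             // frame of x/y/z
        0, 0,                           // angle_x, angle_y
        0,                              // distance
        0, 0,                           // size_x, size_y
        x, y, z, q,
        LANDING_TARGET_TYPE_VISION_FIDUCIAL,
        1);                             // position_valid
    return mavlink_msg_to_send_buffer(buf, &msg); // bytes to write to the UDP socket
}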
Ok, I'll take a look.
I just finished moving your code into track_targets.cpp in order to be able to debug the output with AR drawings. I'm testing with my OpenGL simulator, which generates and sends the video to track_targets.cpp.
Here is a preview of two tagStandard41h12 tags placed in the simulator (I'm testing with low-quality video and it works reliably). Any particular reason to use this tag family?
I'm now struggling to project the (x,y,z) coordinates returned by estimate_tag_pose() on top of the camera image using the existing drawARLandingCube() method from Vision Landing:
...
// Transform the landing-point offset from the tag frame into the camera frame:
// p_cam = pose.R * tgt_offset + pose.t
matd_t* m1 = matd_multiply(pose.R, tgt_offset);
matd_t* m2 = matd_add(m1, pose.t);
x = m2->data[0]; // landing point in camera coordinates (metres)
y = m2->data[1];
z = m2->data[2];
...
drawARLandingCube(img, m, CamParam);
I'm specifically trying to figure out how to use the same aruco::CameraParameters
with your code, which has only this:
apriltag_detection_info_t det_info = {.tagsize = 0.113, .fx = 978.0558315419056, .fy = 980.40099676993566, .cx = 644.32270873931213, .cy = 377.51661754419627};
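Presumably the fx/fy/cx/cy fields are just the entries of the camera matrix. A minimal sketch of the mapping, assuming CamParam is a calibrated aruco::CameraParameters (whose CameraMatrix ArUco stores as a 3x3 CV_32FC1 matrix); make_det_info is a hypothetical helper:

#include <aruco/cameraparameters.h>
#include <apriltag/apriltag_pose.h>

// Fill apriltag_detection_info_t from an ArUco calibration.
apriltag_detection_info_t make_det_info(const aruco::CameraParameters& CamParam,
                                        apriltag_detection_t* det, double tagsize)
{
    const cv::Mat& K = CamParam.CameraMatrix; // 3x3 pinhole camera matrix
    apriltag_detection_info_t info;
    info.det = det;
    info.tagsize = tagsize;          // tag edge length in metres
    info.fx = K.at<float>(0, 0);     // focal length x (pixels)
    info.fy = K.at<float>(1, 1);     // focal length y (pixels)
    info.cx = K.at<float>(0, 2);     // principal point x
    info.cy = K.at<float>(1, 2);     // principal point y
    return info;
}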
it just packs the data into a MAVLink packet.
Ok, that means the flight controller receives multiple landing targets (one for each detected marker) and decides which one to use (or what to do with this redundant information)?
I believe this filtering would be better done before sending the MAVLink messages to the FC, since on our side we have more information about the markers and their confidence levels.
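For example, a simple pre-filter could keep only solid decodes and prefer the detection with the highest decision_margin (hamming and decision_margin are fields of the AprilTag C library's apriltag_detection_t). A sketch, with illustrative thresholds:

#include <apriltag/apriltag.h>

// Pick the most trustworthy detection before sending anything to the FC.
apriltag_detection_t* best_detection(zarray_t* detections)
{
    apriltag_detection_t* best = nullptr;
    for (int i = 0; i < zarray_size(detections); i++) {
        apriltag_detection_t* det;
        zarray_get(detections, i, &det);
        if (det->hamming > 0)             // reject decodes that needed bit corrections
            continue;
        if (det->decision_margin < 20.0f) // reject low-confidence decodes (tunable)
            continue;
        if (!best || det->decision_margin > best->decision_margin)
            best = det;
    }
    return best; // may be nullptr if nothing passed the filters
}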
BTW, while I was coding I managed to wake up @fnoop from his hibernation. I suspect he swapped his drone for a girlfriend... but he will be following our progress. He commented about a latency issue which we will have to address next: https://github.com/goodrobots/vision_landing/issues/123#issuecomment-1472896461
Hi @kripper
Ok, that means the flight controller receives multiple landing targets (one for each detected marker) and decides which one to use (or what to do with this redundant information)?
No. The Raspberry Pi computes the landing point based on whichever marker it detects. There is only one landing point; the Raspberry Pi knows the offset from each marker to the landing point.
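In sketch form, the idea looks like this (the offset values and names here are hypothetical, not the real configuration):

#include <array>
#include <map>
#include <apriltag/apriltag_pose.h>
#include <apriltag/common/matd.h>

// tag id -> position of the landing point in that tag's frame (metres)
std::map<int, std::array<double, 3>> tag_offsets = {
    {0, {0.00, 0.00, 0.0}},  // tag 0 sits exactly on the landing point
    {1, {0.25, 0.00, 0.0}},  // tag 1 is 25 cm to the side of it
};

// Landing point in the camera frame from a single tag's pose:
// p_cam = R * offset + t
void landing_point(const apriltag_pose_t& pose, int tag_id,
                   double& x, double& y, double& z)
{
    const std::array<double, 3>& off = tag_offsets.at(tag_id);
    matd_t* offset = matd_create_data(3, 1, off.data());
    matd_t* m1 = matd_multiply(pose.R, offset); // rotate into the camera frame
    matd_t* m2 = matd_add(m1, pose.t);          // translate by the tag position
    x = MATD_EL(m2, 0, 0);
    y = MATD_EL(m2, 1, 0);
    z = MATD_EL(m2, 2, 0);
    matd_destroy(offset); matd_destroy(m1); matd_destroy(m2);
}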
Here is a preview of two tagStandard41h12 tags placed in the simulator (I'm testing with low-quality video and it works reliably). Any particular reason to use this tag family?
The AprilTag dev team recommends it; see https://github.com/AprilRobotics/apriltag/wiki/AprilTag-User-Guide#choosing-a-tag-family
There is only one landing point
Oh, right. I forgot you were only working with the first detected marker. I will visually check whether the landing point projected from one marker is consistent with the landing points projected from the other markers. If the error is big, I was thinking of using the centroid. If not, using the first one is fine.
Ok, I just found the way to use the aruco::CameraParameters with AprilTag. This is needed to project the exact 3D coordinates of the landing position, which is important for dealing with extrapolation and latency issues.
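The trick is that once the landing point is in the camera frame, cv::projectPoints with the ArUco intrinsics and distortion coefficients gives the pixel to draw at. A minimal sketch (project_landing_point is a hypothetical helper):

#include <vector>
#include <opencv2/calib3d.hpp>
#include <aruco/cameraparameters.h>

// Project a camera-frame 3D point onto the image using the ArUco calibration.
cv::Point2f project_landing_point(const aruco::CameraParameters& CamParam,
                                  double x, double y, double z)
{
    std::vector<cv::Point3f> obj = { cv::Point3f((float)x, (float)y, (float)z) };
    std::vector<cv::Point2f> img;
    // rvec/tvec are zero because the point is already in the camera frame.
    cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64F);
    cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64F);
    cv::projectPoints(obj, rvec, tvec, CamParam.CameraMatrix,
                      CamParam.Distorsion, img);
    return img[0]; // pixel coordinates of the landing point
}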
Preview:
BTW, I believe you are not using camera calibration parameters.
Focal length is used, but lens distortion is not. For the Raspberry Pi camera and my application, it is accurate enough even without lens distortion correction.
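If distortion correction were ever needed, one option would be to undistort the detected corners before pose estimation. A sketch only (note that estimate_tag_pose() also uses the homography det->H, so a complete fix would recompute it too):

#include <vector>
#include <opencv2/calib3d.hpp>
#include <apriltag/apriltag.h>

// Remove lens distortion from the four tag corners, staying in pixel units.
void undistort_corners(apriltag_detection_t* det, const cv::Mat& K, const cv::Mat& D)
{
    std::vector<cv::Point2f> src, dst;
    for (int i = 0; i < 4; i++)
        src.emplace_back((float)det->p[i][0], (float)det->p[i][1]);
    // Passing K as the new projection matrix P keeps pixel coordinates.
    cv::undistortPoints(src, dst, K, D, cv::noArray(), K);
    for (int i = 0; i < 4; i++) {
        det->p[i][0] = dst[i].x;
        det->p[i][1] = dst[i].y;
    }
}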
The merge is ready. I'm doing tests before releasing.
I also started addressing the latency drift problem.
In our case, we will also have to implement the motion control on our own: https://github.com/The1only/rosettadrone/issues/132
What is your experience with latency drift? (The pose estimate is never current, so whatever motion instruction you send will always carry some error.)
Please comment there.
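The kind of compensation I have in mind is a simple constant-velocity extrapolation of the target position by the measured pipeline latency. A sketch (the names and the model are illustrative; nothing here is implemented yet):

// Predict where the target is "now" from the last two (already stale) estimates.
struct TargetSample { double t; double x, y, z; }; // t in seconds

TargetSample extrapolate(const TargetSample& prev, const TargetSample& curr,
                         double latency)
{
    double dt = curr.t - prev.t;
    if (dt <= 0) return curr;            // cannot estimate a velocity
    double vx = (curr.x - prev.x) / dt;  // finite-difference velocity
    double vy = (curr.y - prev.y) / dt;
    double vz = (curr.z - prev.z) / dt;
    return { curr.t + latency,
             curr.x + vx * latency,
             curr.y + vy * latency,
             curr.z + vz * latency };
}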
I published the result of the "merge" here: https://github.com/kripper/vision-landing-2
I included your "IPC communication protocol" (the pose values you were sending to your "frontend") in apriltag-detector.cpp, so you can easily enable it and switch to vision-landing-2 in your projects.
You might also be interested in the alternative input source "pipe-buffer", which passes raw images with less latency.
See more details in the README.
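To give a rough idea of the pipe-buffer input (the frame size and format below are assumptions for illustration; the real values come from the configuration):

#include <cstdint>
#include <cstdio>
#include <vector>

// Raw grayscale frames are written back-to-back into the pipe, so there is
// no container or codec latency between the producer and the detector.
const int WIDTH = 640, HEIGHT = 480;

bool read_frame(std::vector<uint8_t>& frame) // returns false on EOF
{
    frame.resize((size_t)WIDTH * HEIGHT);
    size_t got = fread(frame.data(), 1, frame.size(), stdin);
    return got == frame.size();
}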
@chobitsfan, could you please declare the licensing of this code here or in the code itself? There is a kind of public-domain declaration at the top of capture.c, but I think that might be from the v4l2 project?
Hi @fnoop, it is modified from https://www.kernel.org/doc/html/v4.11/media/uapi/v4l/capture.c.html
Hi @chobitsfan,
I'm the current maintainer of RosettaDrone, and we are looking forward to contributing to an open-source, vision-based precision landing project.
I reviewed your implementation and understood everything (the code is clean).
Your implementation has these pros:
Vision Landing's track_targets has these pros:
I believe both projects should be merged somehow so that a maintainer community can be built around them. Of course, they are completely different implementations, but the final goal is exactly the same.
Before proposing a project merge strategy, I would like to see the code of the "frontend" where you process the marker data sent via IPC from the backend "capture.c".
You are probably doing things there like filtering out detection errors, maybe generating AR images for debugging, and other work that is done in the equivalent Vision Landing Python script.
About the scope:
We are also interested in doing some extrapolation to:
Anyway, I believe the merged project should just focus on returning the position of the target relative to the camera, including the error filters required to compute a robust and consistent landing-target position based on all markers, maybe also taking previous computations into account.
The rest I mentioned above could be implemented in the flight controller.
BTW, do you know if this has been implemented?
For RosettaDrone, we will need an independent implementation (apart from the FC), so it would be ideal to use a shared library for this (a second layer, separated from both the flight controller and the target tracking/capture layer).