Hey, I was trying to use TagSLAM to get the camera trajectory. In our current environment there is a lot of ambient light due to a window, and because of the auto exposure setting of the camera we are not able to detect the tags when we face the window. Do you have any suggestions for how to resolve this, or how to detect tags in badly exposed (over- or under-exposed) images?

Also, the trajectory I am getting is not a correct rectangle; it always goes diagonal, as shown in the image.

Is this happening because of auto exposure, since this wall is where the window is, or is it some other issue?
Could you please post some pictures of said backlight? I would also be interested in images of the other walls, because those don't look that great either. My first guess would be a bad intrinsic calibration. Please do your intrinsic calibration with Kalibr, then post the reprojection error plot. Also, please state what camera and lens you are using. Is it a fisheye or a standard pinhole (rad-tan distortion) lens?
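For reference, a minimal sketch of such a Kalibr run (the bag, topic, and target file names below are placeholders, not from this thread):

```bash
# sketch: intrinsic calibration with Kalibr; bag/topic/target names are placeholders
kalibr_calibrate_cameras \
  --bag calib.bag \
  --topics /cam0/image_raw \
  --models pinhole-radtan \
  --target aprilgrid.yaml
```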
Hey, I am using a global shutter camera at 640x480 resolution with a 4 mm lens, and this is my calibration file from Kalibr (I am getting a high reprojection error of 0.4 for this camera; I am not sure why the error is so high): report-imucam-tiko_calib-day2-imu-2.pdf
Tag detection fails on this image.
The calibration looks ok. The relatively low resolution of the camera doesn't help, but I would have expected better performance than that, especially against the wall where the tags are detected. The tags are well placed and very dense. Double check that you have the tag size correctly specified in your tagslam.yaml, and also measure the tag size (physically) one more time. Can you send me a bag with the tag detections? The compressed images would of course be better, if you have enough space on a web drive to send them to me. Also send your tagslam.yaml, cameras.yaml, and camera_poses.yaml.

About the tags not being detected against the backlight: try using the UMich detector. You need to set the detector to 1 in apriltag_detector.launch; see the documentation on GitHub for the AprilTag detector. But then you need to reprint your tags: right now you have a 2-bit border, which the UMich detector does not recognize, so you need to print the tags with a 1-bit wide border. There is a script for tag generation in the tagslam src directory. Just try 1 or 2 tags first to see if it makes a difference. The UMich detector is much more aggressive in detecting tags than the MIT detector (and, from my experience, produces more false positives as well, but that's another story).
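As a reference point, a minimal sketch of where the tag size is specified in tagslam.yaml (the body name is made up, and the field names follow the TagSLAM example configs, so verify them against your version):

```yaml
# sketch: tag size in tagslam.yaml (body name is hypothetical; verify field
# names against your TagSLAM version)
bodies:
  - lab:                       # hypothetical body name
      is_static: true
      default_tag_size: 0.16   # must match the physically measured edge length
```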
I have measured the size of the tag; it is 0.16 m physically, and I created the tag using this command:

```bash
rosrun tagslam make_tag.py --nx 1 --ny 1 --marginx 0.00 --marginy 0.00 \
  --tsize 0.16 --tspace 0.0 --startid 1 --tfam t36h11 --borderbits 2
```
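For the UMich detector suggested above, the same command with a 1-bit border would presumably be:

```bash
# sketch: the same tag regenerated with a 1-bit border for the UMich detector
rosrun tagslam make_tag.py --nx 1 --ny 1 --marginx 0.00 --marginy 0.00 \
  --tsize 0.16 --tspace 0.0 --startid 1 --tfam t36h11 --borderbits 1
```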
These are my config files for TagSLAM: config-files.zip
Kindly share your email; I will give you access to the TagSLAM bag file where the tags are not being detected.
bernd.pfrommer@gmail.com
I have given you access via the drive link.
Just did some tests. The UMich detector way outperforms in this situation!
Wow, amazing! It seems like a good detector to explore, but this does not work with a 2-bit border, right? Also, what about the drift in the poses that I mentioned earlier: instead of a correct rectangular path for the rectangular room, I get a diagonal path. How do I resolve this?
Obviously the best way to get the walls straight is to have such good, high-res images that they come out straight by themselves :) Absent that, you can use the "coordinate_measurements" keyword to constrain tags to be on a plane: https://berndpfrommer.github.io/tagslam_web/measurements/
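A rough sketch of what such a constraint entry could look like (the exact schema is in the linked docs; the names and values below are illustrative from memory, not verified):

```yaml
# sketch: constraining a tag corner's coordinate along an axis
# (illustrative only; check the linked measurements docs for the exact schema)
coordinate_measurements:
  - wall_0:                  # hypothetical measurement name
      tag: 7                 # hypothetical tag id
      corner: 0
      length: 0.0            # measured coordinate along "direction"
      noise: 0.001
      direction: [1, 0, 0]   # axis of the measurement, e.g. normal to the wall
```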
Not sure why you are getting walls that are so far off from parallel. It must've been a different bag from the one you sent to me. If I add some constraints for the long walls, the results get somewhat better (see the attached images for before and after). You would get further improvement if you attached tags on the sides where there are gaps; that way you could achieve loop closure. I also believe a lot of the tags that now go undetected (not just due to exposure, but also because they are too small) will be detected by the UMich detector.
Yes, agreed, but I have hardware limitations, hence I cannot increase the resolution of the image. Also, I was talking about the drift issue, which you earlier said may be because of calibration, but you then mentioned that the calibration is correct (this is the issue where, as you can see in the image, the corners do not come out as right angles).
Ok, understood. So I will try the other detector you mentioned, or else I will add more tags to achieve loop closure; that should solve the issue, I suppose.
Adding more tags will get you loop closure, but from what I can tell the UMich detector will give you much better detection. I assume the project will not end with a mapping run. What about later, when the robot is moving in the pre-mapped environment? You will suffer from poor detection every step of the way, even if you have odom to tide you over between tag detections.
So you suggest that for the next iteration of the dataset I use a 1-bit border, add more tags, and use the UMich detector, which will give the optimum result. And I cannot rely on odom, as the encoders won't be accurate due to the uneven floor surface.
All correct, except for the last point. If done right, even poor odom will be better than no odom. So long as the odom is not dead wrong, it will help. You can reduce the importance of odom by bumping its noise parameter. TagSLAM will then rely on odom pretty much only when it sees no tags.
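A sketch of where that noise parameter might live in tagslam.yaml (the body name and topic are placeholders; the parameter names follow the TagSLAM body configuration docs, so verify them for your version):

```yaml
# sketch: de-weighting odometry by bumping its noise
# (parameter names per the TagSLAM body docs; verify for your version)
bodies:
  - rig:                            # hypothetical body name
      odom_topic: "/odom"           # placeholder topic
      odom_translation_noise: 0.1   # larger values make TagSLAM trust odom less
      odom_rotation_noise: 0.2
```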
Ok, sure. I will add more noise in linear x and y, as it relies on the encoders for those, and keep odom as an input to TagSLAM, because in the current scenario, if no tags are detected, it just stops working.
How can we see the trajectory of the camera? With the current example rviz settings it is not visible. Also, is there a way to store the camera poses in the world frame in a file?
1) You are not seeing the rviz positions because your root tag is late in the sequence. TagSLAM cannot do any state estimation until it sees the root tag. Solution: choose a root tag close to the start of your path, or start your path at the root tag such that it's the first one the camera sees. BTW, when you work with rviz, make sure you have use_sim_time set to true when you play back from a bag. Play back with rosbag play --clock, and do "rosparam set use_sim_time true" BEFORE you start rviz or tagslam.

2) The camera pose can be captured in multiple ways. You can specify the "outbag" parameter; TagSLAM will then write odometry and transforms to the named bag file. Or you can capture the odometry messages with rostopic echo -p. I believe it's "-p" that prints out the message in CSV format; please check the rostopic docs.
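Putting the playback and capture steps together (the bag name is a placeholder, and the odometry topic is an assumption based on TagSLAM's per-body naming):

```bash
# sketch: bag playback with simulated time, as described above
rosparam set use_sim_time true    # do this BEFORE starting rviz or tagslam
rosbag play --clock your_data.bag

# sketch: dump camera odometry as CSV; the topic name is an assumption
# (TagSLAM publishes odometry per body, e.g. /tagslam/odom/body_<body_name>)
rostopic echo -p /tagslam/odom/body_rig > camera_poses.csv
```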
Sure, thanks, I will try this out.
So I tried it and I am getting much better results after adding the constraints, but I am still getting 14cm error in measurement of odom data in few tags in poses.yaml generated by the ROS service. How can I improve that?
I can't quite make sense of your phrasing: "14cm error in measurement of odom data in few tags in poses.yaml". Are you talking about odom (the camera pose) or tag poses?

If it's tag poses: make sure you get good overlap when you do your mapping run. Having multiple tags in view helps a lot. Using the UMich detector will also help with this.

If you are talking about odom, then you need to look at the images (and in particular the "disp" debug image output from the detector) for the cases where your camera pose is poor: how many tags are seen? Are any of them occluded? You cannot localize off a single tag unless it fills the whole field of view. The wider the tags are spread out across the image, the better your pose estimation will be.
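One way to eyeball that debug output while the bag plays (the exact debug topic name depends on your detector launch setup; look for one ending in "disp"):

```bash
# sketch: browse the image topics and select the detector's debug ("disp") stream
rosrun rqt_image_view rqt_image_view
```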
I was talking about tag poses, and I will definitely try this out! Thanks again.