Closed miguelriemoliveira closed 4 years ago
Hi @aaguiar96 ,
the code is prepared for integration. Take a look at what I changed in https://github.com/lardemua/AtlasCarCalibration/commit/0f0ffc2ffbc842775f4b348c054351fa6583db2c
in particular the files: data_collector_and_labeler.py interactive_data_labeler.py
My comments will guide you I think, but we can always talk if you get stuck.
To run, you must do:
roslaunch interactive_calibration agrob_calibration.launch bag:=/home/mike/bagfiles/agrob/agrob_2020-03-12-10-08-49_0.bag
and then to label:
rosrun interactive_calibration collect_and_label_data.py -o ~/datasets/test -s .5 -c /home/mike/catkin_ws/src/AtlasCarCalibration/interactive_calibration/calibrations/agrob/config.json
Oh, and don't forget to install new dependencies
roscd interactive_calibration && pip install -r requirements.txt
(@eupedrosa 's fault :))
Ok @miguelriemoliveira
Thank you. I let you know when I have some news! :)
Hi @aaguiar96 ,
any news? Did you find some time to work on this? If you need help, tell me, I am eager to work on this stuff :)
Hi @miguelriemoliveira
I'm sorry, I had a lot of work these days. I will start working on this issue tomorrow morning! Tomorrow I hope to have some news. :)
Ok Miguel, I'm back! Sorry about the delay... :)
So, I think I was able to run the whole system. Let me know if I understood: you already created one topic, /velodyne_points/labelled, where I must publish the velodyne chessboard points. Is that it? So, should I try to integrate my labeller into interactive_data_labeler?
Hi @aaguiar96 ,
no problem, we work on this when we have time.
the interactive_data_labeler is where the labels for the different sensors are generated;
yes
the data_collector_and_labeler collects the data, uses interactive_data_labeler and publishes the labels;
yes, but publishing the labels is just a bonus feature for better debugging; the real goal is to write the json file, which will be the input for the calibration procedure
you already created one topic /velodyne_points/labelled where I must publish the velodyne chessboard points. Is that it? So, should I try to integrate my labeller into interactive_data_labeller?
Yes, that's it. The points from the velodyne point cloud that your code thinks belong to the chessboard should be published (an rviz marker with spheres, perhaps). In addition, as said before, you must add a dictionary of labels with the indexes of the points in the point cloud that belong to the labels.
Check what is done for the 2D lidar (LaserScan msg type); there are a lot of commonalities.
If you need help let me know.
Miguel
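The label convention Miguel describes above (a dictionary holding the indexes of the cloud points that belong to the chessboard) can be sketched like this. The key names ("chessboard", "idxs") and the function are illustrative stand-ins, not the exact schema used by interactive_data_labeler:

```python
import numpy as np

def label_chessboard_points(is_chessboard):
    # is_chessboard: boolean mask over the point cloud, True where the
    # labelling algorithm thinks a point belongs to the chessboard.
    idxs = np.flatnonzero(is_chessboard)  # positions of the True entries
    return {"chessboard": {"idxs": idxs.tolist()}}

# Example: a 5-point cloud where points 1 and 3 fall on the pattern
mask = np.array([False, True, False, True, False])
labels = label_chessboard_points(mask)
print(labels)  # {'chessboard': {'idxs': [1, 3]}}
```

Storing plain indexes (rather than copies of the points) keeps the json small and lets the calibration code recover the labelled points from the original cloud later.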
Hi @miguelriemoliveira
I think I've got the code almost integrated (still missing the json file generation), see the .gif below.
However, most of the time the system does not work and gets stuck on this exception:
tf2.ExtrapolationException: Lookup would require extrapolation into the future. Requested time 1584007792.781102419 but the latest data is at time 1584007792.730567694, when looking up transform from frame [base_link] to frame [rear_left_wheel_link]
I tried to add a listener.waitForTransform call, but it did not solve the issue...
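That exception usually means the transform at the requested stamp has not arrived in the tf buffer yet. One common workaround with the tf2 API (instead of the old tf listener.waitForTransform) is to pass a timeout to lookup_transform so it blocks until the data is available. This is only a sketch that assumes a running ROS node; frame names and durations are placeholders:

```python
import rospy
import tf2_ros

# A tf2 buffer with a generous cache; the TransformListener fills it
# from the /tf and /tf_static topics in the background.
tf_buffer = tf2_ros.Buffer(cache_time=rospy.Duration(10.0))
tf_listener = tf2_ros.TransformListener(tf_buffer)

def get_transform(target_frame, source_frame, stamp):
    # The timeout makes lookup_transform wait for the transform at the
    # requested stamp instead of raising ExtrapolationException right away.
    return tf_buffer.lookup_transform(target_frame, source_frame, stamp,
                                      timeout=rospy.Duration(0.5))
```

When replaying bags, also make sure use_sim_time is set and the bag is played with --clock, otherwise wall-clock stamps and bag stamps will never line up.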
Looks good! I would recommend transparent spheres as the marker type, because with them you can still see the point cloud data points below. Check the 2D lidar example.
Concerning the tf exception, that's strange, can you upload the tf tree?
Also, show us a gif where you move the seed marker (the rviz interactive marker) around, to show that it starts selecting other objects. Is that possible?
I would recommend transparent spheres as the marker type, because with them you can still see the point cloud data points below. Check the 2D lidar example.
Ok Miguel, I'll change it.
Concerning the tf exception, that's strange, can you upload the tf tree?
Also, show us a gif where you move the seed marker (the rviz interactive marker) around, to show that it starts selecting other objects. Is that possible?
Yes sure. Here:
I would recommend transparent spheres as the marker type, because with them you can still see the point cloud data points below.
@miguelriemoliveira is this what you were talking about, or do you want me to decrease the alpha parameter of the labeled points?
Hi, I think that's ok.
Thanks, Miguel
Hi, I think that's ok.
Ok @miguelriemoliveira. Regarding the creation of the .json file, two things:
I read the PointCloud2 msg into a numpy array, since it was the most efficient way of doing it that I found, using ros_numpy. So, I consider that the indexes to save in the dictionary are directly the indexes of that array. Is that ok? This might depend on the way that the .json file is parsed...
When I try to save an annotation, I get the following warning:
[WARN] [1588253799.820391, 1584007824.794265]: Max duration between msgs in collection is 9057710.36064. Not saving collection.
Is this related to the error that I mentioned before?
Hi @aaguiar96 ,
I read the PointCloud2 msg into a numpy array, since it was the most efficient way of doing it that I found, using ros_numpy. So, I consider that the indexes to save in the dictionary are directly the indexes of that array. Is that ok? This might depend on the way that the .json file is parsed...
I would say the labels can be the indexes of the array. This convention will be used in the optimization code, in particular the cost function portion you are going to write. So, if you use the same convention on both ends, it's ok.
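To make the convention concrete: ros_numpy.numpify(msg) returns a structured numpy array, and the saved labels are plain positions into that array. The synthetic array below stands in for the numpify output (the x/y/z field names are the usual PointCloud2 layout) so the round trip can be shown without a ROS runtime:

```python
import numpy as np

# Stand-in for ros_numpy.numpify(pointcloud2_msg)
cloud = np.zeros(4, dtype=[("x", np.float32),
                           ("y", np.float32),
                           ("z", np.float32)])
cloud["x"] = [0.0, 1.0, 2.0, 3.0]

# Labels stored in the json are indexes into this array, so recovering
# the labelled points later is a direct fancy-indexing step:
labelled_idxs = [1, 3]
labelled_points = cloud[labelled_idxs]
print(labelled_points["x"])  # [1. 3.]
```

As long as both the labeler and the cost function index the same array in the same order, no extra bookkeeping is needed.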
When I try to save an annotation, I get the following warning:
This is a protection we added to make sure we have reasonably synchronized data from the sensors. It is the max delta allowed between all the sensor messages. For now you can solve it in the config.json file by altering the max_duration_between_msgs parameter.
Another option is to reduce the bag file playback rate ...
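For reference, the relevant fragment of config.json would look something like this (the 0.5 second value is purely illustrative; pick a delta that matches your sensors' publishing rates):

```json
{
  "max_duration_between_msgs": 0.5
}
```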
BTW, can you measure how long your labelling procedure takes? It seems quite fast ...
BTW, can you measure how long your labelling procedure takes? It seems quite fast ...
By labelling you mean the scan callback frame rate?
@miguelriemoliveira Do you want to meet this afternoon to close this issue and start thinking about the optimization procedure?
Hi @aaguiar96 ,
I mean the time your code takes from the moment a message is received until you produce the labels for the point cloud.
We can meet from 14h20 onward. Just call me and let me know when you are free.
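The measurement Miguel asks for can be done with a thin wrapper around the point cloud callback. label_cloud below is a hypothetical stand-in for the actual labelling function:

```python
import time

def timed_labelling(label_cloud, msg):
    # Time the labelling step from message receipt to labels produced
    start = time.time()
    labels = label_cloud(msg)
    elapsed = time.time() - start
    print("labelling took %.1f ms" % (elapsed * 1e3))
    return labels

# Usage with a dummy labelling function:
result = timed_labelling(lambda cloud: {"idxs": [0]}, None)
```

Averaging over many callbacks (or using rospy's logging throttles) gives a steadier number than a single measurement.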
I mean the time your code takes from the moment a message is received until you produce the labels for the point cloud.
Ok, I'll measure it. :)
We can meet from 14h20 onward. Just call me and let me know when you are free.
Nice, I call you then. See you soon! :)
@miguelriemoliveira the index issue is solved (see last commit).
You can try it out! :)
Great! We're almost there.
Here's the first dataset agrob_first_test.zip
Hi @aaguiar96 ,
I had to edit your part of the objective function because it was crashing.
Added a comment, take a look.
Fixed. It should be working now.
Thanks @miguelriemoliveira :)
Hi @aaguiar96 ,
this is complete, no? I mean, now we are working on #149, right? The data collection is working, or am I missing something?
Yes @miguelriemoliveira
I'm closing this issue.
After #139 is complete we need to integrate that feature into the data collector and labeller.
I will first prepare the stuff and then @aaguiar96 will do the integration.