Hi André,
I do not have time to try right now (only during the weekend), but I suspect this has something to do with you not checking whether the z coordinate of each point (when represented in the camera's reference frame) is > 0. Only the points with positive z should be colored.
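Something like this, just a sketch with toy values (the array and variable names here are made up, not the ones in your script):

import numpy as np

# Toy example: 3D points already expressed in the camera's reference frame (Nx3).
points_in_cam = np.array([[0.5, 0.1, 2.0],   # in front of the camera
                          [0.2, 0.0, -1.5],  # behind the camera
                          [1.0, 0.3, 4.2]])

# Keep only the points with positive z, i.e. the ones the camera can actually see.
in_front = points_in_cam[:, 2] > 0
points_to_colorize = points_in_cam[in_front]
print(points_to_colorize)  # the second point is discarded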
Take a look at this paper
http://www.cvc.uab.es/~asappa/publications/J__Elsevier_IF_Vol_24_July_2015_pp_108-121.pdf
Fig. 4 in particular; I think that could be your problem.
Regards, Miguel
On Tue, 13 Oct 2020 at 22:00, André Aguiar notifications@github.com wrote:
> Hi @miguelriemoliveira and @eupedrosa,
> I created the script to colorize the point cloud. The result is a bit strange.
> I checked the projection of the points into the image and they are correct. However, the colorizing process does not seem right... The points behind the velodyne are also mapped to the image.
> Here (white points are the ones that were not mapped into the image):
> [image: rviz_screenshot_2020_10_13-21_56_42] https://user-images.githubusercontent.com/35901587/95915650-5d3a6000-0d9f-11eb-8cfd-906b7d5a6ed5.png
> If you want to try:
> rosrun atom_evaluation colorize_pointcloud.py -json /home/andreaguiar/Documents/datasets/train_dataset/atom_calibration_b.json -ls vlp16 -cs left_camera
Ok @miguelriemoliveira thanks!
@eupedrosa can you try it and see the result? I don't find it very appealing visually...
I will try to test it on Saturday. Sorry for the delay, I have 23 hours of classes this week :-(
Hi @miguelriemoliveira! No problem, we have time. I'll continue writing today!
> @eupedrosa can you try it and see the result? I don't find it very appealing visually...
And I think it never will be. The velodyne has a considerable gap between horizontal scans, and that makes it look bad. Did you already try @miguelriemoliveira's suggestion?
> And I think it never will be. The velodyne has a considerable gap between horizontal scans, and that makes it look bad.
But is this still interesting for the paper, then?...
> Did you already try @miguelriemoliveira's suggestion?
Not yet.
It is NOT interesting for the paper.
Ok, so I'll close the issue. I was just working on this because we agreed in the previous meeting that we'd put these colored point clouds in the paper...
Can I close it?
If it is hard to get any kind of information from the colorization, then it is not worth the time. So yes, close it.
Hi guys, I am late as always these past weeks :-(.
One suggestion is to try it the other way around: project the lidar points onto the image and show the points on the image.
Take a look at this image
http://www-personal.acfr.usyd.edu.au/akas9185/AutoCalib/AutoLaserCamDoc/index.html
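Roughly like this, just a sketch with made-up intrinsics and lidar-to-camera transform (not the real calibration values):

import numpy as np
import cv2

# Made-up intrinsics and lidar-to-camera transform, just to illustrate the idea.
K = np.array([[600., 0., 320.],
              [0., 600., 240.],
              [0., 0., 1.]])
dist = np.zeros(5)              # distortion coefficients
rvec = np.zeros(3)              # rotation (Rodrigues) from lidar to camera
tvec = np.array([0., 0., 0.1])  # translation from lidar to camera

# A few toy lidar points; the last one ends up behind the camera in this toy setup.
points_lidar = np.array([[1.0, 0.2, 2.0],
                         [3.5, -0.5, 4.0],
                         [-2.0, 0.1, -0.5]])

# Transform to the camera frame and keep only points in front of the camera (z > 0).
R, _ = cv2.Rodrigues(rvec)
points_cam = (R @ points_lidar.T).T + tvec
points_cam = points_cam[points_cam[:, 2] > 0]

# Project to pixel coordinates (points are already in the camera frame, so rvec = tvec = 0 here).
pixels, _ = cv2.projectPoints(points_cam, np.zeros(3), np.zeros(3), K, dist)
pixels = pixels.reshape(-1, 2)
print(pixels)  # the pixels where the lidar points would be drawn on the image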
Hi @miguelriemoliveira, thanks for the hint.
I replaced the colorizing script with a generic projection script in atom_evaluation.
Usage:
rosrun atom_evaluation point_cloud_to_image.py -train_json /home/andreaguiar/Documents/datasets/train_dataset/atom_calibration_b.json -test_json /home/andreaguiar/Documents/datasets/test_dataset/data_collected.json -ls vlp16 -cs left_camera -si
Result (the colormap represents depth):
These are great for the paper, right?
Hi @aaguiar96 ,
yes, the picture looks nice. We can see changes in color (abrupt changes in the measured depth) at the lidar jump edges. That's a great image for the paper.
I think the color of the lidar points could be improved... which colormap are you using?
I think it can be improved:
I am using this one:
import numpy as np
from matplotlib import cm

if show_images:
    colors = cm.tab20b(np.linspace(0, 1, 20 * 4))  # we consider a maximum of 20 meters with a discretization of 0.25 meters
You should find the minimum and maximum depth that are visible in the image. Then use that range to improve the colour gamut.
> You should find the minimum and maximum depth that are visible in the image. Then use that range to improve the colour gamut.
Ok, good idea. I can also add more resolution.
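Something like this, maybe (just a sketch; here depths stands for the depths of the points that actually fall inside the image, with toy values):

import numpy as np
from matplotlib import cm
from matplotlib.colors import Normalize

# Toy depths (in meters) of the points that fall inside the image.
depths = np.array([1.7, 3.2, 4.8, 9.5, 14.1])

# Normalize with the actual min/max of the visible points instead of a fixed 0-20 m range,
# so the full colormap range is used.
norm = Normalize(vmin=depths.min(), vmax=depths.max())
colors = cm.tab20b(norm(depths))  # one RGBA color per point
print(colors)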
Hi @aaguiar96 ,
Another thing: for this case in particular we should use a progressive colormap, meaning that the color changes progressively. That way, large jumps in color will denote large jumps in depth, and that will really be our "evaluation methodology", or visual inspection: checking whether large color jumps are aligned with changes in the scene geometry as viewed by the camera.
tab20b is not progressive, take a look: https://stackoverflow.com/questions/43938425/matplotlib-change-colormap-tab20-to-have-three-colors
All of the first 14 colormaps here are progressive; you can use any one of them:
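For example, something like this (a sketch; viridis is just one possible choice of a progressive/sequential colormap):

import numpy as np
from matplotlib import cm
from matplotlib.colors import Normalize

# Toy depths (meters). With a sequential colormap, similar depths get similar colors,
# so big color jumps correspond to big depth jumps.
depths = np.array([1.7, 3.2, 4.8, 9.5, 14.1])
norm = Normalize(vmin=depths.min(), vmax=depths.max())
colors = cm.viridis(norm(depths))
print(colors)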