lardemua / atom

Calibration tools for multi-sensor, multi-modal robotic systems
GNU General Public License v3.0

Add depth component to ATOM's framework #323

Closed by danifpdra 2 years ago

miguelriemoliveira commented 2 years ago

Hi @danifpdra ,

I am available now until 17h30.

danifpdra commented 2 years ago

Hi @miguelriemoliveira,

The collector is saving the labels:

image

And it's also saving the image:

image

I used .tif because I think it's better for uint16, and I also had to normalize the image because otherwise it appeared all black. Nevertheless, we should discuss this; maybe we should also save a .txt or .csv with the original data from the depth image, so that both the visualization and the true data are stored in the dataset. What do you think?

miguelriemoliveira commented 2 years ago

Hi @danifpdra,

Great news. So the interaction part with the projection is all working fine now?

> maybe we should also save a .txt or .csv with the original data from the depth image to have both the visualization and the true data saved in the dataset

It does not have to be a tiff; we can just save a png with uint16. I say this because png sounds more standard. Or am I missing something?

https://github.com/danifpdra/calibration_scripts/blob/0d286740b62f931be9873e8ca30e96eed4f94e95/extract_images_test_mike.py#L133-L138

I do not agree with also saving a .csv, because we should never store the same information in two separate places. About the normalization: I know that if you don't normalize, the images are very dark (they appear all black on some monitors). We could normalize to a maximum of 10 meters or so, as long as this convention is kept both when saving and when loading the depth images.

Can you give me a bag file for testing? I am eager to test : )

danifpdra commented 2 years ago

Hi @miguelriemoliveira ,

Ok, I did that: saved as png and multiplied by 10000. The bag file is here: https://uapt33090-my.sharepoint.com/:u:/g/personal/danielarato_ua_pt/EVgUhNIGNtVElswLcEOEIF8BmYBHyAYGnvB52PlJVuveqg?e=emu2Nv
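For reference, the ×10000 scale stores depth at 0.1 mm resolution, at the cost of a ~6.55 m ceiling in uint16. A plain-NumPy sketch of the round trip (independent of the actual ATOM code):

```python
import numpy as np

SCALE = 10000.0  # meters -> uint16 counts (0.1 mm per count)

depth_m = np.array([[0.5, 1.25], [3.0, 6.5]], dtype=np.float32)
depth_u16 = (depth_m * SCALE).astype(np.uint16)   # what gets written to the png
recovered = depth_u16.astype(np.float32) / SCALE  # what a loader gets back

# uint16 tops out at 65535, so the largest representable depth is 6.5535 m
max_depth = 65535 / SCALE
```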

danifpdra commented 2 years ago

Hi @miguelriemoliveira ,

Do you have time to zoom so that we can discuss this?

miguelriemoliveira commented 2 years ago

Hi @danifpdra ,

we can do a fast talk? Start at 14h30?

danifpdra commented 2 years ago

Sure

danifpdra commented 2 years ago

Run visualization:

roslaunch mmtbot_calibration calibrate.launch dataset_file:=$ATOM_DATASETS/mmtbot/test_depth_labelling/data_collected.json run_calibration:=false

Run calibration:

rosrun atom_calibration calibrate -uic -json $ATOM_DATASETS/mmtbot/test_depth_labelling/dataset.json -nig 0.0 0.0 -rv -ipg -v -phased -si

danifpdra commented 2 years ago

bag file: https://uapt33090-my.sharepoint.com/:u:/g/personal/danielarato_ua_pt/EfSn0FU4DqBGi5m6RUaTdM4Bx3SBhB05W0NOjAZK-VcPaQ?e=D5iaXV

dataset: test_depth_labelling.zip

danifpdra commented 2 years ago

@miguelriemoliveira ,

I've figured out part of the problem. The function getCvImageFromDictionaryDepth is defined in dataset_io.py and is used in that same file to save a .png image (line 188). On line 190, I printed the image type: it is float32 when we use the collect_data script to save the collections. The same function is used in getPointsInDepthSensorAsNPArray, in the objective_function.py script. There we call it in exactly the same way, but the resulting image is a 3-channel 8-bit image. Do you know why this is happening?

danifpdra commented 2 years ago

I've sorted this out, but the idx_limit_points are still not showing in rviz and the depth coordinate systems are floating in the air

image

danifpdra commented 2 years ago

image

We have points. Just not in the right place

miguelriemoliveira commented 2 years ago

If you push, I can take a look...

danifpdra commented 2 years ago

test_no_nan.zip

Pushed. New dataset

miguelriemoliveira commented 2 years ago

OK, let me try ... will get back to you.

miguelriemoliveira commented 2 years ago

Hi @danifpdra ,

still working on it ... can you tell me whether the first guess is perfect, meaning the drawn points should fall on top of the pattern right from the start?

In other words, did you do a set initial estimate on this dataset?

miguelriemoliveira commented 2 years ago

Got it. This was a tricky one.

image

I left the code unfinished, full of comments and prints for you to see what I changed. Please revise and delete / correct what you think is best.

danifpdra commented 2 years ago

Testing with several collections:

image

danifpdra commented 2 years ago

Calibrating is immensely slow. Is this because we are not using the cache yet?

miguelriemoliveira commented 2 years ago

It depends on how long the function takes, but yes, that would be my first guess.

If using the cache does not help, you will have to measure the time across several lines of the objective function to see what's taking so long...
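A minimal way to accumulate per-section timings inside an objective function (a generic sketch; the section names and the `timed` helper are made up, not ATOM API):

```python
import time
from collections import defaultdict

timings = defaultdict(float)  # section name -> total seconds spent


def timed(name, fn, *args, **kwargs):
    """Run fn and accumulate its wall-clock time under the given section name."""
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    timings[name] += time.perf_counter() - t0
    return result

# hypothetical usage inside the objective function:
#   image = timed('load_depth', getCvImageFromDictionaryDepth, collection)
#   residuals = timed('project_points', project, image)
# after a few iterations, print sorted(timings.items(), key=lambda kv: -kv[1])
```

Printing the accumulated totals after a few optimizer iterations shows at a glance which section dominates.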

miguelriemoliveira commented 2 years ago

Done.