Hi @miguelriemoliveira,
The collector is saving the labels, and it's also saving the image.
I used .tif because I think it's better for uint16, and I also had to normalize the image because otherwise it was all black. Nevertheless, we should discuss this; maybe we should also save a .txt or .csv with the original data from the depth image, so that both the visualization and the true data are stored in the dataset. What do you think?
Hi @danifpdra,
Great news. So the interaction part with the projection is all working fine now?
> maybe we should also save a .txt or .csv with the original data from the depth image to have both the visualization and the true data saved in the dataset
It does not have to be a .tif; we can just save a .png with uint16. I say this because PNG sounds more standard. Or am I missing something?
I don't agree with the part about also saving a .csv, because we should never store the same information in two separate places. About the normalization, I know that if you don't do it the images are very dark (they appear to be all black on some monitors). We could normalize to a maximum of 10 meters or so, as long as this convention is kept while saving and loading the depth images.
Can you give me a bag file for testing? I am eager to test : )
Hi @miguelriemoliveira ,
OK, I did that: saved as .png and multiplied by 10000. The bagfile is here: https://uapt33090-my.sharepoint.com/:u:/g/personal/danielarato_ua_pt/EVgUhNIGNtVElswLcEOEIF8BmYBHyAYGnvB52PlJVuveqg?e=emu2Nv
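For reference, a minimal sketch of the save/load convention I'm following (the helper names are illustrative, not the actual ATOM code):

```python
import cv2
import numpy as np

# Illustrative scale factor (the 10000 mentioned above). With uint16 this
# caps the representable range at ~6.5 m; the same value must be used for
# both saving and loading so the convention stays consistent.
SCALE = 10000.0

def saveDepthPng(filename, depth_m):
    # depth_m: float32 depth image in meters (e.g. 32FC1 from cv_bridge).
    # Replace NaNs (no-return pixels) with 0 before the integer conversion.
    depth_u16 = np.round(np.nan_to_num(depth_m) * SCALE).astype(np.uint16)
    cv2.imwrite(filename, depth_u16)

def loadDepthPng(filename):
    # IMREAD_UNCHANGED keeps the image as single-channel uint16.
    depth_u16 = cv2.imread(filename, cv2.IMREAD_UNCHANGED)
    return depth_u16.astype(np.float32) / SCALE
```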
Hi @miguelriemoliveira ,
Do you have time to zoom so that we can discuss this?
Hi @danifpdra ,
Can we have a quick talk? Start at 14h30?
Sure
Run the visualization:

```bash
roslaunch mmtbot_calibration calibrate.launch dataset_file:=$ATOM_DATASETS/mmtbot/test_depth_labelling/data_collected.json run_calibration:=false
```

Run the calibration:

```bash
rosrun atom_calibration calibrate -uic -json $ATOM_DATASETS/mmtbot/test_depth_labelling/dataset.json -nig 0.0 0.0 -rv -ipg -v -phased -si
```
@miguelriemoliveira ,
I've figured out part of the problem. The function getCvImageFromDictionaryDepth is defined in dataset_io.py and is used in that same file to save a .png image (line 188). On line 190 I printed the image type: it is float32 when we use the collect_data script to save the collections. The same function is also used in getPointsInDepthSensorAsNPArray, in the objective_function.py script. There we call it in exactly the same way, but the resulting image is a 3-channel 8-bit image. Do you know why this is happening?
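One way this mismatch can happen (a minimal sketch; I'm not certain this is what dataset_io.py actually does) is OpenCV's default imread flag: a 16-bit depth .png loaded without the right flag comes back as 8-bit, 3-channel BGR, while the image coming straight from the collector is still the float32 from cv_bridge.

```python
import cv2
import numpy as np

# A 16-bit depth image saved to disk, as the collector does.
depth = (np.random.rand(480, 640) * 10000).astype(np.uint16)
cv2.imwrite('depth.png', depth)

# Default flag is cv2.IMREAD_COLOR: the single-channel 16-bit image
# is converted to an 8-bit, 3-channel BGR image on load.
img_default = cv2.imread('depth.png')
print(img_default.dtype, img_default.shape)  # uint8 (480, 640, 3)

# cv2.IMREAD_UNCHANGED preserves the original bit depth and channels.
img_raw = cv2.imread('depth.png', cv2.IMREAD_UNCHANGED)
print(img_raw.dtype, img_raw.shape)  # uint16 (480, 640)
```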
I've sorted this out, but the idx_limit_points are still not showing in RViz and the depth coordinate systems are floating in the air.
We have points. Just not in the right place
If you push I can take a look ...
Pushed. New dataset
OK, let me try ... will get back to you.
Hi @danifpdra ,
Still working on it... Can you tell me if the first guess is perfect, meaning the drawn points should land on top of the pattern right from the start?
In other words, did you do a set initial estimate on this dataset?
Got it. This was a tricky one.
I left the code unfinished, full of comments and prints for you to see what I changed. Please revise and delete / correct what you think is best.
Testing with several collections
Calibrating is immensely slow. Is this because we're not using the cache yet?
It depends on how long the function takes, but yes, that would be my first guess.
If using the cache does not work, you'll have to measure the time over several lines in the objective function to see what's taking so long...
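Something like this (a minimal sketch; the section names are hypothetical, not the actual ATOM objective function):

```python
import time

def objectiveFunction(parameters):
    t_start = time.time()

    t = time.time()
    # ... section 1: recompute transforms from the current parameters ...
    print('transforms: {:.4f} s'.format(time.time() - t))

    t = time.time()
    # ... section 2: project pattern points into each sensor ...
    print('projection: {:.4f} s'.format(time.time() - t))

    t = time.time()
    # ... section 3: compute residuals ...
    print('residuals: {:.4f} s'.format(time.time() - t))

    print('total: {:.4f} s'.format(time.time() - t_start))
    return []  # placeholder for the residuals list
```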
Done.
Hi @danifpdra ,
I'm free now until 17h30.