FrancescoMandru closed this issue 3 years ago
RangeNet expects range images as described in the paper, with a clear meaning of the x and y coordinates in the range image. Sorting points by their x coordinate does not produce a valid range image.
Regarding the labels: each label is a combination of an instance id (upper 16 bits) and a semantic label (lower 16 bits).
One also has to remap the original labels to the range [0, 19]. Please see our code for proper usage of the model.
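A minimal sketch of this decoding and remapping (the small `learning_map` excerpt below is illustrative only; the full mapping lives in the semantic-kitti.yaml config of the repository):

```python
import numpy as np

# raw SemanticKITTI labels: instance id in the upper 16 bits,
# semantic class id in the lower 16 bits
raw = np.array([10, 65788, 131326], dtype=np.uint32)  # stand-in for np.fromfile(..., dtype=np.uint32)

sem = raw & 0xFFFF   # semantic class id
inst = raw >> 16     # instance id

# illustrative excerpt of the learning_map that remaps raw class ids to [0, 19]
# (0 = unlabeled/ignored); see semantic-kitti.yaml for the full table
learning_map = {0: 0, 10: 1, 11: 2, 30: 6, 40: 9, 252: 1, 254: 6}
lut = np.zeros(max(learning_map) + 1, dtype=np.uint32)
for k, v in learning_map.items():
    lut[k] = v

sem_mapped = lut[sem]  # semantic ids now in the range [0, 19]
```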
@jbehley I have read all the code of your work and I'm a little confused. First, performing a spherical projection should be independent of the order of my point cloud, since every 3D point is mapped to exactly the same place regardless of the array order.
Secondly, I also read how to deal with the labels, but as you told me in another issue, the proper way to read them is as I wrote in this post, so I was confused about the right approach.
Third, I'm confused about why we need to pass the labels to the network during inference. Why does the network need the labels? In a real scenario we don't have the ground-truth labels.
Thank you so much for your time.
Please see https://github.com/PRBonn/lidar-bonnetal/blob/master/train/tasks/semantic/infer.py and the corresponding https://github.com/PRBonn/lidar-bonnetal/blob/master/train/tasks/semantic/modules/user.py, which has an infer method. If I see this correctly, no labels are needed.
Maybe in the end the labels are not used, but this code is written to take the labels as input when you do inference; otherwise it throws assertion errors, which is not good.
Anyway, I have read infer.py many times and I'm pretty sure that the spherical projection and all the other operations are independent of the original order of the points. In fact, if I use Visualize.py to look at the inference, the result is not that bad, even if there are some problems, but the IoU is very low. I don't understand what I'm missing from the projection geometry.
Hi,
As for the labels for inference, they are there for another script that uses them for evaluation directly after inference, but this will not work on the test set. It is a bug that I have fixed internally but have not released yet. It is solved by changing gt=True to gt=False in this line.
As for the sorting, it should not matter unless you're inferring at a very low resolution. I order the point clouds so that the points closest to the origin of coordinates are always the ones shown in the image, no matter what the resolution of the range image is. If you use a resolution that is very lossy (512x64, for example, instead of 2048x64), then you will show the network radically different data from what it was trained on.
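The ordering trick can be sketched like this (a simplified version of the range projection in the repository's laserscan.py; the FOV numbers are illustrative, roughly matching an HDL-64E-style sensor):

```python
import numpy as np

def do_range_projection(points, proj_H=64, proj_W=2048, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) point cloud to a (proj_H, proj_W) range image."""
    fov_up_r = fov_up / 180.0 * np.pi
    fov_down_r = fov_down / 180.0 * np.pi
    fov = abs(fov_down_r) + abs(fov_up_r)  # total vertical field of view

    depth = np.linalg.norm(points, axis=1)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    yaw = -np.arctan2(y, x)
    pitch = np.arcsin(z / depth)

    proj_x = 0.5 * (yaw / np.pi + 1.0)               # in [0, 1]
    proj_y = 1.0 - (pitch + abs(fov_down_r)) / fov   # in [0, 1]

    proj_x = np.clip(np.floor(proj_x * proj_W), 0, proj_W - 1).astype(np.int32)
    proj_y = np.clip(np.floor(proj_y * proj_H), 0, proj_H - 1).astype(np.int32)

    # key trick: write points in DECREASING depth, so when several points
    # fall into the same pixel, the closest one is written last and wins
    order = np.argsort(depth)[::-1]
    proj_range = np.full((proj_H, proj_W), -1, dtype=np.float32)
    proj_range[proj_y[order], proj_x[order]] = depth[order]
    return proj_range
```

Because the points are re-sorted by depth inside the projection, the order of the input array indeed does not matter; only the lossiness of the chosen resolution does.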
I see a lot of code there whose implementation we don't know, such as Draco. I would not be surprised if it reshuffles the data into a different order, breaking the correspondence between the points and the remissions you're concatenating. Maybe you can start there to debug what's wrong with your input pipeline. Try visualizing the data with the new shuffling; you should at least see whether your remissions are consistent (objects that are bright in remission, such as license plates and street signs, should be consistently bright).
I'm closing this issue, since there doesn't seem to be much activity here and the problem appears to be resolved.
Since I'm doing some preprocessing before RangeNet which shuffles the order of the points, I get very bad results. First, if I load the labels according to the documentation:
lab = np.fromfile(label[i], dtype='uint32').reshape(-1,1)
I get strange values (I show just np.unique(lab)):
[ 0 1 30 40 44 48 50 51 52 70 71 72 80 81 99 65788 65790 131324 131326 196860 196862 589854 983070 1572875 1638411 12713994 12779530 12845066 12976138]
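Those large values are actually well-formed: masking off the upper 16 bits recovers ordinary SemanticKITTI class ids, and the upper 16 bits are instance ids. A quick check (the class-name comments are my reading of the SemanticKITTI label map, so treat them as illustrative):

```python
import numpy as np

vals = np.array([65788, 131326, 12713994], dtype=np.uint32)
sem = vals & 0xFFFF   # semantic class id (lower 16 bits)
inst = vals >> 16     # instance id (upper 16 bits)

#    65788 -> semantic 252 (moving-car),    instance   1
#   131326 -> semantic 254 (moving-person), instance   2
# 12713994 -> semantic  10 (car),           instance 194
```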
After that, I order the file, for example by the first value (x); but before sorting, I attach the labels so that they stay in the same order as the points, as follows:
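A sketch of what such a reordering might look like (not the original script; the array contents are synthetic stand-ins):

```python
import numpy as np

# synthetic stand-ins for the real scan/label arrays
points = np.array([[3.0, 1.0, 0.2, 0.5],
                   [1.0, 2.0, 0.1, 0.9],
                   [2.0, 0.5, 0.3, 0.1]], dtype=np.float32)  # x, y, z, remission
labels = np.array([[10], [30], [40]], dtype=np.uint32)

# one permutation applied to BOTH arrays keeps point i paired with label i
order = np.argsort(points[:, 0])  # sort by the x coordinate
points_sorted = points[order]
labels_sorted = labels[order]
```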
When I do inference with RangeNet, I give the network my reordered point cloud. After that, I compare the labels obtained from inference with the ones that I reordered with the above script, and the result is really bad.
Moreover, why does this network need the labels in order to do inference?