Open bqm1111 opened 6 months ago
By the way, could you show me how to visualize your results as shown in Figure 3 of your paper?
Hi, I don't know what the problem with the plotting is. It seems to be matplotlib-related; I also encounter it on my side. I will investigate and let you know as soon as I fix it.
I think you mean Figure 5 in the main paper, right? For that, I exported the results in a text format where every line has the form [x, y, z, R, G, B], where x, y, z are the 3D point coordinates and R, G, B are the red, green, and blue values. Then I visualize the results with CloudCompare.
Is there any progress on fixing the bug? I also have a question. Consider a single voxel representing one point in 3D space across two consecutive frames. In those two frames, your model outputs two different labels for that voxel, one with a high confidence score and one with a low score. Do you apply any post-processing to handle this variance in the model's output, or do you simply take the prediction for the current frame?
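To illustrate the kind of post-processing I mean, here is a hypothetical per-voxel fusion rule (purely my own sketch, not something from the paper): keep whichever label has higher confidence, while decaying the stored confidence so stale predictions fade out.

```python
def fuse_voxel_label(prev_label, prev_conf, cur_label, cur_conf, decay=0.9):
    """Pick the higher-confidence label between the stored prediction and
    the current frame's prediction. `decay` shrinks the stored confidence
    each frame so an old high-confidence label cannot persist forever."""
    prev_conf *= decay
    if cur_conf >= prev_conf:
        return cur_label, cur_conf
    return prev_label, prev_conf
```

With this rule, a low-confidence flicker in the current frame would not overwrite a recently confident label, but a persistently different prediction eventually wins.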
I tried to save the results by running eval.py with the --plot-dir argument and got this error:
What seems to be the problem here?