HiPupilxD-Hao opened 4 years ago
Suppose I have a panorama in which the walls are vertical in the image, but after the pre-processing the image is rotated by a certain degree.
When I run inference and post-processing, the detected corners end up at the wrong positions if I overlay them on the non-pre-processed image.
For easier understanding, here is an example:
This is the non-pre-processed image
This is the pre-processed image
The pre-processed image is rotated about the x-axis by some degrees.
If I only visualize the corners right after inference, they all look good. The problem appears after the post-processing, i.e. after rotating the scene by the average PCA angle: the corners detected on the image are then shifted.
Here is a screenshot:
Sorry for the late reply.
Running inference.py with --visualize visualizes the raw output (the probability map) from the model.
The post-processing implemented here only supports pre-processed images.
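(For reference, a typical invocation is along these lines; apart from --visualize, mentioned above, the flag names and the checkpoint/paths here are placeholders and should be checked against the README / inference.py.)

```
python inference.py --pth ckpt/resnet50_rnn__mp3d.pth --img_glob "assets/preprocessed/*.png" --output_dir output/ --visualize
```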
@sunset1995 Hi, thanks for the reply. The --visualize argument can be used to visualize the image in 3D space without problems. May I ask how to get the corners in the original image (without pre-processing)?
Same problem here: points are moved to the left, unsure why.
Same problem here. I found the vote function invalidates all XY corners and then uses `best_fit = np.median(vec)`. Could you explain how it works?
Input:
Result:
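My reading of that line, as a minimal sketch of median-based voting (an illustration only, not the repo's actual `vote` implementation; the clustering tolerance `tol` is made up):

```python
import numpy as np

def vote_sketch(values, tol=0.05):
    """Cluster 1-D candidate coordinates that lie within `tol` of each other,
    then represent each cluster by its median, which is robust to outliers."""
    values = np.sort(np.asarray(values, dtype=float))
    clusters, current = [], [values[0]]
    for v in values[1:]:
        if v - current[-1] <= tol:
            current.append(v)
        else:
            clusters.append(current)
            current = [v]
    clusters.append(current)
    # The analogue of `best_fit = np.median(vec)` in post_proc.py:
    return [float(np.median(c)) for c in clusters]

# e.g. vote_sketch([1.00, 1.02, 1.01, 3.50, 3.48]) -> [1.01, 3.49]
```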
Pre-processing is needed.
In the paper, the author says the horizontal rotation of the panorama is aligned by PCA, and this alignment will fail to be correct if the pre-processing is skipped.
So check this PCA rotation in post_proc.py
and apply it inversely to the final result (a rough sketch is below).
In addition, I think the pre-processing also needs to rotate the labels if they were annotated on the original panorama image.
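To make the "apply it inversely" step concrete, here is a minimal sketch. It assumes the post-processing PCA rotation acts on the floor-plan XY coordinates of the corners and that you can save its angle (called `theta` below; the actual variable in post_proc.py may be named differently):

```python
import numpy as np

def rotate_xy(xy, angle):
    """Rotate Nx2 floor-plan points (row vectors) counter-clockwise by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s],
                  [s,  c]])
    return xy @ R.T

# theta: the PCA alignment angle applied during post-processing (hypothetical name).
# To undo the alignment and express the corners in the original (unaligned) frame:
# xy_original = rotate_xy(xy_aligned, -theta)
```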
@sunset1995, thank you for the great work.
I have a question: it seems the inferred corners after post-processing only fit the pre-processed image (rotated by the vanishing points). Is it possible to visualize the corners on the image before that rotation? Cheers.
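Not the author, but one way to do this is to map the corner pixels back through the inverse of the alignment rotation applied by the pre-processing. The sketch below assumes you can save that 3x3 rotation matrix when running the pre-processing (called `R_align` here, a hypothetical name) and uses one common equirectangular pixel/angle convention, which may differ from this repo's:

```python
import numpy as np

def xyz_from_uv(uv, W, H):
    """Equirectangular pixel coordinates -> unit direction vectors."""
    lon = (uv[:, 0] / W - 0.5) * 2 * np.pi      # [-pi, pi]
    lat = (0.5 - uv[:, 1] / H) * np.pi          # [-pi/2, pi/2]
    x = np.cos(lat) * np.sin(lon)
    y = np.cos(lat) * np.cos(lon)
    z = np.sin(lat)
    return np.stack([x, y, z], axis=1)

def uv_from_xyz(xyz, W, H):
    """Unit direction vectors -> equirectangular pixel coordinates."""
    lon = np.arctan2(xyz[:, 0], xyz[:, 1])
    lat = np.arcsin(np.clip(xyz[:, 2], -1, 1))
    u = (lon / (2 * np.pi) + 0.5) * W
    v = (0.5 - lat / np.pi) * H
    return np.stack([u, v], axis=1)

def corners_to_original(cor_uv, R_align, W, H):
    """Map corner pixels from the VP-aligned pano back onto the original pano.

    R_align is the 3x3 rotation the pre-processing applied (hypothetical name);
    depending on how it was applied you may need R_align.T instead.
    """
    xyz_aligned = xyz_from_uv(np.asarray(cor_uv, dtype=float), W, H)
    xyz_original = xyz_aligned @ R_align          # undo the alignment rotation
    return uv_from_xyz(xyz_original, W, H)
```

The returned (u, v) positions can then be drawn directly on the original, un-rotated panorama.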