Closed Lin239522 closed 2 months ago
@Lin239522 You may need to use cv2.INTER_NEAREST to resize semantic images.
@DRosemei Thanks for the quick response! However, I did not modify the resize code in the getitem function in RoMe: the RGB image resize uses cv2.INTER_LINEAR, and the semantic segmentation image resize uses cv2.INTER_NEAREST.
K = self.cameras_K[idx]
D = self.distortion
h, w = input_image.shape[:2]
input_image = cv2.fisheye.undistortImage(input_image, K, D, Knew=K, new_size=(w, h))
resized_image = cv2.resize(input_image, dsize=self.resized_image_size, interpolation=cv2.INTER_LINEAR)
…
label = cv2.fisheye.undistortImage(label, K, D, Knew=K, new_size=(w, h))
resized_label = cv2.resize(label, dsize=self.resized_image_size, interpolation=cv2.INTER_NEAREST)
In addition, in the configuration file I set the image size to match the input image (since I saw you did this in the KITTI configuration):
image_width: 3840
image_height: 1536
Is there anything wrong with my above operation?
When I reduce the image size in the config to 960×384, the problem still exists.
@Lin239522
label = cv2.fisheye.undistortImage(label, K, D, Knew=K, new_size=(w, h))
may cause the problem.
@Lin239522
label = cv2.fisheye.undistortImage(label, K, D, Knew=K, new_size=(w, h))
may cause the problem. Thank you for your suggestion!! ·v· This is the result without undistortion; you can see that the lane lines on the road are curved, but the same problem occurred QAQ. I am also wondering: in the visualized output of Mask2Former, the boundaries of each category are drawn. Could this be the cause of the result?
@Lin239522 You can visualize the label image before and after cv2.fisheye.undistortImage at the source resolution to check.
@Lin239522 Please visualize the label image before cv2.fisheye.undistortImage, i.e. visualize the fisheye image with a colormap. cv2.fisheye.undistortImage will interpolate the labels; use cv2.remap instead.
@DRosemei This photo is the label image before cv2.fisheye.undistortImage; I just ran infer.sh and then visualized the fisheye image with a colormap. Could this be due to the limitations of the official model, which cannot recognize domestic roads?
Dear @DRosemei, thank you again for your enthusiastic answers. I have another question: what causes the box-like artifacts that appear in the rendered image?
(Attached images: eval_117-render.png, eval_117-vis_gt_seg.png, eval_117-vis_seg.png, eval_117-blend.png)
Hey @DRosemei, you are right! The cv2.fisheye.undistortImage function did cause the problem. How can I solve it? Should I use FishEyeCameras to render the mesh instead of applying cv2.fisheye.undistortImage to the input images?
@Lin239522
cv_image = cv2.imread(image_path, -1)  # read the label image unchanged
mapx, mapy = cv2.initUndistortRectifyMap(k, d, None, k, (cv_image.shape[1], cv_image.shape[0]), cv2.CV_32FC1)
cv_image_undistorted = cv2.remap(cv_image, mapx, mapy, cv2.INTER_NEAREST)  # nearest keeps label ids intact
Thanks! It works ·v·
To build the semantic segmentation dataset, I ran scripts/mask2former_infer/infer.sh
And in inference.py, I changed the folder path to the path of my custom dataset.
However, there is boundary-like noise on render_gt_seg_x.png. Can you tell me whether this will affect the rendering result, and how can I remove this noise? (Attached images: render_gt_seg, seg_sequences, sequences)