houshuaipeng opened this issue 1 year ago
Hi Houshuai,
Thanks for your interest in our work!
From the rendering results, my guess is that the issue lies in the coordinate system: the direction of the motion-blur-like streaks in the rendered depth looks problematic. Have you resolved the issue?
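One quick way to sanity-check the conventions (a minimal sketch, not code from this repo; it assumes Waymo-style camera axes, i.e. x-forward, y-left, z-up, and an OpenCV-style pinhole intrinsic matrix) is to project a point that you know sits in front of a camera and verify that it lands inside the image:

```python
import numpy as np

def project_point(point_world, cam_to_world, intrinsics):
    """Project a 3D world point into the image of one camera.

    cam_to_world: 4x4 camera-to-world matrix with Waymo camera axes
    (x-forward, y-left, z-up). intrinsics: 3x3 pinhole matrix K.
    """
    world_to_cam = np.linalg.inv(cam_to_world)
    p_cam = world_to_cam[:3, :3] @ point_world + world_to_cam[:3, 3]
    # Convert Waymo camera axes to OpenCV axes (x-right, y-down, z-forward)
    # before applying K.
    p_cv = np.array([-p_cam[1], -p_cam[2], p_cam[0]])
    if p_cv[2] <= 0:
        raise ValueError("Point is behind the camera; the extrinsics look wrong.")
    uv = intrinsics @ (p_cv / p_cv[2])
    return uv[:2]

# e.g. a point ~10 m straight ahead of the ego vehicle at frame 0 should
# project near the center of the front camera's image.
```

If the projected pixel lands far from where you expect (or behind the camera), the extrinsics or the axis convention is the likely culprit.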
Thank you for your great work! I'm also having similar issues with a custom dataset. I only use images, intrinsics, extrinsics, and ego poses for training. The model outputs motion-blur-like RGB and poor depth maps despite achieving high PSNR and SSIM scores. Have you solved your issue? Thank you.
Hello, may I ask how the generation of new scenes is carried out? I'm not clear on the procedure.
Hello,
I have a question regarding the use of a custom dataset. I've transformed the camera intrinsics and extrinsics as instructed, and I've computed the vehicle pose from Euler angles (roughly as sketched below). The dataset is initialized with the first frame's pose as the origin of the world coordinates.
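For reference, this is roughly how I build and normalize the ego poses (a minimal sketch on my side, not code from this repo; the "xyz" roll-pitch-yaw order is an assumption):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def pose_from_euler(roll, pitch, yaw, translation):
    """Build a 4x4 ego-to-world matrix from Euler angles (radians) and a
    3-vector translation. The 'xyz' (roll, pitch, yaw) order is assumed."""
    T = np.eye(4)
    T[:3, :3] = R.from_euler("xyz", [roll, pitch, yaw]).as_matrix()
    T[:3, 3] = translation
    return T

def normalize_to_first_frame(poses):
    """Re-express all ego poses so that frame 0 becomes the world origin."""
    first_inv = np.linalg.inv(poses[0])
    return [first_inv @ p for p in poses]
```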
During training, I only use 2D images, camera intrinsics and extrinsics, and the vehicle poses. However, the trained model produces blurry results, and the depth maps are of poor quality. I've already adjusted ORIGINAL_SIZE in datasets/waymo.py, and the custom dataset follows the same coordinate system as Waymo (my axis conversion is sketched below).
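In case it is relevant, this is the axis conversion I apply when my source extrinsics use the OpenCV camera convention (a sketch under that assumption; it is unnecessary if your extrinsics are already in Waymo's x-forward, y-left, z-up camera frame):

```python
import numpy as np

# Maps coordinates expressed in Waymo camera axes (x-forward, y-left, z-up)
# into OpenCV camera axes (x-right, y-down, z-forward):
#   x_cv = -y_waymo, y_cv = -z_waymo, z_cv = x_waymo
WAYMO_CAM_TO_OPENCV_CAM = np.array([
    [0.0, -1.0, 0.0, 0.0],
    [0.0, 0.0, -1.0, 0.0],
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

def cam_to_ego_waymo(cam_to_ego_opencv):
    """Convert a camera-to-ego extrinsic from OpenCV to Waymo camera axes."""
    return cam_to_ego_opencv @ WAYMO_CAM_TO_OPENCV_CAM
```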
I'm trying to understand why the training results are blurry and why the depth maps are not satisfactory. Any insights or suggestions on how to improve the clarity and quality of the depth maps would be greatly appreciated.
Thank you!

(Attached images: depth, gt_rgb, inference_rgb.)