Closed lotterf closed 2 months ago
I apologize for the delay in updating this part of the code; graduation and my job search have taken up my time. I plan to upload the code when I have time.
To be honest, the supervised fusion of radar images with monocular depth estimation on the nuScenes dataset hasn't produced great results. The inherent extrinsic calibration errors and the sparsity of the point cloud make it challenging. Most of the existing work is focused on aligning the output depth map with the sparse LiDAR depth, but the improvements are mostly reflected in the metrics rather than practical usability. If you're aiming for supervised learning, better ground truth depth data is essential, as nuScenes was not originally designed for dense depth estimation.
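To illustrate what aligning a predicted depth map with sparse LiDAR depth looks like in practice, here is a minimal sketch of the usual masked loss: the error is averaged only over pixels that actually received a LiDAR return, since the projected ground truth is zero almost everywhere. The function name, depth thresholds, and toy values are illustrative assumptions, not code from this repository.

```python
import numpy as np

def sparse_depth_l1(pred, gt, min_depth=0.5, max_depth=80.0):
    """Mean absolute error over pixels with valid sparse ground-truth depth.

    pred, gt: (H, W) depth maps in meters; gt is 0 where no LiDAR return
    was projected. Thresholds are illustrative, not from the repo.
    """
    mask = (gt > min_depth) & (gt < max_depth)  # valid LiDAR pixels only
    if not mask.any():
        return 0.0
    return float(np.abs(pred[mask] - gt[mask]).mean())

# Toy example: a dense prediction vs. a ground truth with two LiDAR returns.
pred = np.full((4, 4), 10.0)
gt = np.zeros((4, 4))
gt[1, 2] = 12.0
gt[3, 0] = 9.0
print(sparse_depth_l1(pred, gt))  # averages |10-12| and |10-9| -> 1.5
```

Because only a tiny fraction of pixels carry supervision, a model can improve this metric without the dense map becoming usable, which is the gap between metrics and practical usability described above.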
Could you provide a model trained on the nuScenes dataset for rc-net? Thank you very much.