Hi authors, thank you for your excellent work and for releasing the code.
I was curious about how the model's performance would degrade when given abnormal images, so I tried feeding it images of pure random noise (i.e., pixels with arbitrary RGB values). Unfortunately, or perhaps surprisingly, the pretrained (epoch 34) model's performance on the provided validation split remains nearly the same as with the normal image input.
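For reference, here is a minimal sketch of how the noise images were generated (the function name and the KITTI path in the comment are my own illustration, not from your codebase): each replacement image keeps the original resolution so the downstream calibration and depth-completion steps still line up, with every pixel drawn uniformly from 0..255.

```python
import numpy as np

def make_noise_image(height, width, channels=3, seed=None):
    """Return an array of pure random noise with the same shape as a
    KITTI camera image: every pixel is an independent uniform byte."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, 256, size=(height, width, channels), dtype=np.uint8)

# Example (hypothetical path): overwrite a left-camera image at its
# original resolution before regenerating the pseudo point clouds, e.g.
#   noise = make_noise_image(375, 1242)
#   imageio.imwrite("data/kitti/training/image_2/000000.png", noise)
```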
I would like to ask whether this is an expected outcome of your design, where the images perhaps contribute far less information gain than the LiDAR points. Or is regenerating the directories `depth_dense_twise` and `depth_pseudo_rgbseguv_twise` under `data/kitti_sfd_seguv_twise` (following your SFD-TWISE workflow) not sufficient to completely alter the pseudo point cloud input?
Please kindly help satisfy my curiosity :)