Open FUTUREEEEEE opened 3 years ago
Hi, thank you for the detailed description of the method in your paper. Since I'm a beginner in the field of lidar-camera fusion, I am a little confused about the fusion subnetwork's output in the Night condition. Could you please give me some guidance on why the fusion outperforms the lidar and camera branches when it combines two relatively poor features, and on how the RGB features help the fusion output at night?

Well, actually in that specific case the lidar information does most of the work. In night-fair conditions lidar alone reaches over 83% IoU (fusion is 86%); in the rain condition fusion brings some improvement, but that too is mostly attributable to the lidar information. What we show there, however, is that semi-supervised learning helps the model approach the upper bound, i.e. more information flows through the network, and that is what makes the difference.

As this is not a code issue, I suggest continuing over e-mail if you have further questions.
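For readers new to lidar-camera fusion, here is a minimal sketch of the kind of feature-level fusion being discussed: spatially aligned feature maps from an RGB branch and a lidar branch are concatenated channel-wise and mixed by a small convolutional head. This is a generic illustration only; the module names, channel sizes, and the fusion-by-concatenation choice are assumptions, not the paper's actual architecture.

```python
# Minimal feature-level fusion sketch (illustrative only; layer names,
# channel sizes, and concatenation-based fusion are assumptions, not
# the paper's architecture).
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, rgb_channels=64, lidar_channels=64, num_classes=2):
        super().__init__()
        # Fuse by channel-wise concatenation, then mix with convolutions.
        self.fuse = nn.Sequential(
            nn.Conv2d(rgb_channels + lidar_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, kernel_size=1),
        )

    def forward(self, rgb_feat, lidar_feat):
        # Both inputs are assumed spatially aligned, shape (B, C, H, W).
        return self.fuse(torch.cat([rgb_feat, lidar_feat], dim=1))

# Even if the RGB features are weak at night, the fusion layers can learn
# to lean on the stronger lidar features, so the fused output can match
# or slightly exceed either single-modality branch.
rgb = torch.randn(1, 64, 60, 80)
lidar = torch.randn(1, 64, 60, 80)
logits = FusionHead()(rgb, lidar)
print(logits.shape)  # torch.Size([1, 2, 60, 80])
```

Under this reading, the night-time numbers above make sense: the fusion head does not need both inputs to be strong, it only needs the combination to carry at least as much usable signal as the best single branch, which here is the lidar one.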