valeoai / xmuda

Cross-Modal Unsupervised Domain Adaptation for 3D Semantic Segmentation

The "PL" experiment #6

Closed chester256 closed 4 years ago

chester256 commented 4 years ago

The paper says:

Regarding PL, we apply [17] as follows: we generate pseudo-labels offline with a first training without UDA, and discard unconfident labels through class-wise thresholding. Then, we run a second training from scratch adding PL loss on target. The image-2-image translation part was excluded due to its instability, high training complexity and incompatibility with LiDAR data, thus limiting reproducibility.
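The quoted procedure (generate pseudo-labels offline, then discard unconfident predictions via class-wise thresholding) can be sketched roughly as follows. This is a minimal illustration, not the repo's actual implementation; the function name and the `keep_fraction` parameter are hypothetical, and the per-class threshold is taken as a confidence quantile, one common way to realize class-wise thresholding:

```python
import numpy as np

def generate_pseudo_labels(probs, keep_fraction=0.5, ignore_index=-1):
    """Class-wise confidence thresholding (hypothetical sketch).

    probs: (N, C) softmax outputs for N target points/pixels.
    For each class, keep only the most confident `keep_fraction`
    of samples predicted as that class; all others are assigned
    ignore_index so the PL loss skips them in the second training.
    """
    preds = probs.argmax(axis=1)   # hard predictions
    conf = probs.max(axis=1)       # confidence of each prediction
    pseudo = np.full(preds.shape, ignore_index, dtype=np.int64)
    for c in range(probs.shape[1]):
        mask = preds == c
        if not mask.any():
            continue
        # per-class threshold: confidence quantile within class c
        thr = np.quantile(conf[mask], 1.0 - keep_fraction)
        keep = mask & (conf >= thr)
        pseudo[keep] = c
    return pseudo
```

Thresholding per class (rather than with one global threshold) avoids discarding entire rare classes whose predictions are systematically less confident than those of frequent classes.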

So for [17], is the domain adversarial training part also excluded? Is pseudo-labeling the only part that is kept? Thanks

maxjaritz commented 4 years ago

Yes, that's right. We also exclude the domain adversarial part.

chester256 commented 4 years ago

Have you ever tried domain adversarial training? I have tried AdaptSegNet, but it seems to cause negative transfer.

maxjaritz commented 4 years ago

Sorry for the late reply. No, we have not tried domain adversarial training. It would be interesting, but it would need to be carried out separately for 2D and 3D. For 3D, there is very little work on this. PointDAN (https://arxiv.org/abs/1911.02744) only does classification, not segmentation.

maxjaritz commented 4 years ago

I am closing this as the question seems to be answered.