facebookresearch / ContrastiveSceneContexts

Code for CVPR 2021 oral paper "Exploring Data-Efficient 3D Scene Understanding with Contrastive Scene Contexts"
MIT License

downstream task semseg #8

Closed JunhyeopLee closed 3 years ago

JunhyeopLee commented 3 years ago

Hello,

I have a question on semseg downstream task on the Stanford dataset.

Thanks for providing all the log files and pretrained models. However, although the dir and norm losses seem to be used for the semseg downstream task on the Stanford dataset, as shown in your log file, there is no code that produces the dir or norm losses in downstream/semseg/lib/ddp_trainer.py, lines 270-272. To reproduce your work on the Stanford dataset, should we modify dataset.py and ddp_trainer.py to include those loss terms? (Checking the ScanNet semseg log file, I found that the dir and norm loss terms are not used there, unlike in the Stanford semseg task.)

Thanks in advance.

Sekunde commented 3 years ago

Hello,

We do use the norm loss and dir loss for instance/semantic segmentation, including in our data-efficient scenarios. While refactoring the code for release, we split instance and semantic segmentation in the current codebase; to some extent semseg is a subset of insseg, since we obtained our instance segmentation results by building directly on the semantic segmentation results.

You can find the dir and norm losses in https://github.com/facebookresearch/ContrastiveSceneContexts/tree/main/downstream/insseg
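For reference, these are the PointGroup-style offset losses: the norm loss is an L1 regression on the per-point offsets to the instance centroid, and the dir loss penalizes the angle between the predicted and ground-truth offset directions. A minimal PyTorch sketch (the function and variable names here are illustrative, not the exact ones in the insseg codebase):

```python
import torch

def offset_losses(pred_offsets, gt_offsets, valid_mask, eps=1e-8):
    """PointGroup-style offset losses (illustrative sketch).

    pred_offsets: (N, 3) predicted per-point offsets to the instance centroid
    gt_offsets:   (N, 3) ground-truth offsets (instance centroid - point coordinate)
    valid_mask:   (N,) 1.0 for points that belong to an instance, else 0.0
    """
    valid = valid_mask.float()
    n_valid = valid.sum().clamp(min=1)

    # Norm loss: per-point L1 distance between predicted and GT offset vectors.
    norm_loss = (torch.norm(pred_offsets - gt_offsets, p=1, dim=1) * valid).sum() / n_valid

    # Dir loss: negative cosine similarity between predicted and GT offset
    # directions, pushing predicted offsets to point toward the centroid.
    pred_dir = pred_offsets / (torch.norm(pred_offsets, dim=1, keepdim=True) + eps)
    gt_dir = gt_offsets / (torch.norm(gt_offsets, dim=1, keepdim=True) + eps)
    dir_loss = (-(pred_dir * gt_dir).sum(dim=1) * valid).sum() / n_valid

    return norm_loss, dir_loss
```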

JunhyeopLee commented 3 years ago

Thank you for answering.

There are a few more questions.

  1. So, compared to PointContrast (ECCV 2020), CSC additionally used the dir and norm losses and the partitioning technique for the Stanford semseg task?

  2. If so, where can I find information about those dir and norm losses? Could you recommend related papers? Sorry, I couldn't find any related info in your paper.

  3. Lastly, in Table 10 of your paper's supplementary material, were all three models (scratch, PointContrast, and CSC) fine-tuned using the sem loss as well as the dir and norm losses?

Thanks!

Sekunde commented 3 years ago

Hello,

  1. Yes. We use partitioning in pre-training, and for the downstream tasks we train semantic and instance segmentation together (see the sketch after this list).
  2. We took the dir and norm losses from the PointGroup code/paper, for instance segmentation purposes.
  3. For scratch and PointContrast in Table 10, we directly took the numbers from the PointContrast paper. We also did not observe an improvement with the dir and norm losses when training with PointContrast.
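Concretely, training the two tasks together means a joint objective of roughly the following form (a sketch that assumes the semantic branch is trained with cross-entropy and reuses the offset_losses sketch above; the loss weights and ignore_index are placeholders, not values from the paper):

```python
import torch.nn.functional as F

def joint_loss(sem_logits, sem_labels, pred_offsets, gt_offsets, valid_mask,
               w_norm=1.0, w_dir=1.0, ignore_index=255):
    """Illustrative joint sem+ins objective; weights are placeholders."""
    # Semantic branch: standard cross-entropy over per-point class logits.
    sem_loss = F.cross_entropy(sem_logits, sem_labels, ignore_index=ignore_index)
    # Instance branch: PointGroup-style offset norm and dir losses
    # (offset_losses as sketched in the earlier comment).
    norm_loss, dir_loss = offset_losses(pred_offsets, gt_offsets, valid_mask)
    return sem_loss + w_norm * norm_loss + w_dir * dir_loss
```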

I attach the curves here (orange: w/o dir and norm loss, blue: w/ dir and norm loss):

[curve image]

JunhyeopLee commented 3 years ago

Thank you for answering!

Sekunde commented 3 years ago

@JunhyeopLee Hi, I used the current codebase (downstream/semseg, w/o norm and dir loss) to re-train the Stanford semantic segmentation task with our model as the network initialization. I got 72.5 mIoU, and for reference I put the curves, the log file, and the pre-trained model here. Feel free to check it out!
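For anyone reproducing this, initializing the network from a pre-trained checkpoint looks roughly like the following (a minimal sketch that assumes a standard PyTorch checkpoint; the path and key names are placeholders, and the exact format of the released checkpoint may differ):

```python
import torch

# Placeholder path; substitute the released pre-trained checkpoint.
checkpoint = torch.load("pretrained_csc.pth", map_location="cpu")
# Some checkpoints nest the weights under a "state_dict" key.
state_dict = checkpoint.get("state_dict", checkpoint)

# `model` is your downstream backbone instance (defined elsewhere).
# strict=False keeps the freshly initialized segmentation head intact
# when the checkpoint only covers the pre-trained backbone.
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)
```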

JunhyeopLee commented 3 years ago

@Sekunde Thank you for the updates! I will check it out! Thanks again! :D