Closed: steven5401 closed this issue 4 years ago
Hi @steven5401 , thanks for your interest in our work. The supplement will be included in an upcoming expanded arXiv preprint. Here's a table that should address your question:
Let us know if you have further questions.
Thank you very much. Why did you choose ScanNet as a test set rather than a training set?
No problem. We had a number of concerns about the quality of ScanNet's 2D labels. As you know, ScanNet, like Matterport3D and Stanford 2D-3D, is annotated in 3D, not 2D; unlike those two, however, ScanNet does provide a 2D label benchmark. Because the projected 2D labels are noisy, we didn't want to train on them, but they serve as a good proxy for real-world performance.
Here are a few examples from ScanNet:
- Book vs. bookshelf labels seem arbitrary:
- The non-white pixels in the center should be "wall", but include "blinds":
- The door label extends onto the glass window:
- Bed and backpack boundaries are poorly aligned:
- Bicycle extent is quite incorrect:
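Misalignment like the examples above is easiest to spot by overlaying the label map on the RGB frame. Here is a minimal sketch (not from the MSeg codebase; the color palette is an arbitrary choice) that colorizes an integer label map and alpha-blends it over the image:

```python
import numpy as np

def overlay_labels(rgb: np.ndarray, labels: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend an HxW integer label map over an HxWx3 uint8 RGB image."""
    # Deterministic palette: one pseudo-random RGB color per label id.
    palette = (np.arange(labels.max() + 1)[:, None]
               * np.array([97, 151, 211]) % 256).astype(np.uint8)
    color = palette[labels]  # H x W x 3 colorized label map
    blended = (1 - alpha) * rgb.astype(np.float32) + alpha * color
    return blended.astype(np.uint8)
```

Loading a frame and its label PNG (e.g. with Pillow's `Image.open` and `np.asarray`), passing both through `overlay_labels`, and saving the result makes boundary errors like the bed/backpack example immediately visible.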
I understand. Thank you for the prompt reply.
Hello, I would like to know which datasets were not used in MSeg, and the reasons for not including them. However, I cannot find the supplement through Google. Could you share a link to the supplement?