ywyue / RoomFormer

[CVPR 2023] RoomFormer: Two-level Queries for Single-stage Floorplan Reconstruction
https://ywyue.github.io/RoomFormer/
MIT License
163 stars · 22 forks

Actual 2D floor plan images as Density Map #10

Closed · ogencoglu closed this 1 year ago

ogencoglu commented 1 year ago

Thanks for the great work!

This is not an issue but more of a question: can your work and implementation be used to detect rooms and walls directly from a 2D floor plan image? That is, if I bypass the 3D part, can actual 2D floorplan images or PDFs somehow be used as a "Density Map" in your setting?

ywyue commented 1 year ago

Thanks for your interest!

Yes, the 3D part is not the direct input to our model, so it can be bypassed as long as you have the 2D input. Please feel free to share an example of such actual 2D floor plans! It should work if you have enough training samples: actual 2D floor plan images plus annotations of rooms and walls (and windows, etc.).

ywyue commented 1 year ago

Closing this issue for now. Feel free to reopen it if you have further concerns!

ogencoglu commented 1 year ago

Here are a couple of examples of floorplan images:

[Four example floorplan images: Ia_AP2022_sommaire_0, train_0004, 0002, 211]

So the style can be quite arbitrary. I can of course do some pre-processing (greyscale, removing the text, etc.) to make them look similar to your density maps; see the sketch below for the kind of thing I mean.
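To illustrate, here is a rough sketch of the pre-processing I have in mind (my own guess using OpenCV; the function name and kernel sizes are mine, not from your repo, and would need tuning per drawing style):

```python
import cv2

def floorplan_to_density_like(path, out_size=256):
    """Rough sketch: turn an arbitrary floorplan image into a
    density-map-like grayscale image. Parameters are guesses."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Invert so dark wall lines become bright, like high-density walls
    inv = 255 - img
    # Morphological opening to suppress thin text strokes while
    # keeping thick wall structures
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    walls = cv2.morphologyEx(inv, cv2.MORPH_OPEN, kernel, iterations=2)
    # Resize to the 256x256 resolution of the density maps
    return cv2.resize(walls, (out_size, out_size), interpolation=cv2.INTER_AREA)
```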

What would be the easiest way to try just inference (no training) using these images as density maps? Which part of the codebase would be relevant?

Also, would it be possible to upload one example of a density map so I can understand the format and structure?

Thank you!

ywyue commented 1 year ago

First, let me show you one example of a density map: [density map image, sample 00000]

The density map has 256×256 pixels. The density value at each pixel is the number of points projected onto that pixel, divided by the maximum per-pixel point count, so it is normalized to [0, 1] (and finally re-scaled to [0, 255]). Walls therefore have high density values, since they contain many points along the vertical direction.
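In code, the computation is roughly the following (a simplified sketch of the idea, not the exact preprocessing script; it assumes the point cloud is an (N, 3) array whose x-y coordinates are already aligned and scaled to the [0, 256) range):

```python
import numpy as np

def density_map(points, size=256):
    """Simplified sketch of the density map computation. `points` is
    an (N, 3) array whose x-y coordinates are already normalized to
    [0, size). Each pixel counts the points projected onto it along
    the vertical (z) axis."""
    xy = np.clip(points[:, :2].astype(int), 0, size - 1)
    counts = np.zeros((size, size), dtype=np.float64)
    np.add.at(counts, (xy[:, 1], xy[:, 0]), 1)  # per-pixel point counts
    density = counts / counts.max()             # normalize to [0, 1]
    return (density * 255).astype(np.uint8)     # re-scale to [0, 255]
```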

> What would be the easiest way to try just inference (no training) using these images as density maps? Which part of the codebase would be relevant?

One hacky way is to replace one density map in the test set with your density map, e.g., 03250.png, and then run the evaluation script (sketched below). Just make sure your density map is also 256×256. This way, you should obtain qualitative results quickly.
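Concretely, preparing the replacement could look like this (a sketch; the file paths here are placeholders for your local dataset layout):

```python
import cv2

# Resize your image to 256x256 and overwrite one test-set density map,
# e.g. 03250.png. Both paths below are hypothetical placeholders.
img = cv2.imread("my_floorplan.png", cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (256, 256), interpolation=cv2.INTER_AREA)
cv2.imwrite("path/to/test/density/03250.png", img)
# Then run the evaluation script as usual to get qualitative results.
```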

However, I don't expect the model to perform well without training, due to the large domain gap.

claumeh commented 1 month ago

Thanks for your guidance. I have tried your recommended approach with the following density image, but the results were close to zero. Here is the density image I used, which replaced the original one in the /train dir (image 03255). I created the PNG with your code from my own point cloud. What requirements does the point cloud need to meet, e.g., in terms of density?

Can you help me out here, or check the density image for solid structure?

guker commented 1 month ago

> Thanks for your guidance. I have tried your recommended approach with the following density image, but the results were close to zero. Here is the density image I used, which replaced the original one in the /train dir (image 03255). I created the PNG with your code from my own point cloud. What requirements does the point cloud need to meet, e.g., in terms of density?
>
> Can you help me out here, or check the density image for solid structure?

I had the same question! But after I rotated the density map image so that it is axis-aligned, it worked.

claumeh commented 1 month ago

Thanks for the update @guker

Did you rotate manually, or do you have code by chance? What does the result look like?
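In the meantime, here is a rough sketch of what I imagine such an automatic axis alignment could look like (my own untested guess using OpenCV, not code from this repo):

```python
import cv2
import numpy as np

def axis_align(density, angle_step=0.5):
    """Rough sketch: rotate a density map so its dominant wall
    direction becomes axis-aligned, by brute-force searching for the
    rotation that maximizes the variance of row/column sums (axis-
    aligned walls produce sharply peaked sums)."""
    h, w = density.shape
    best_angle, best_score = 0.0, -1.0
    for angle in np.arange(-45.0, 45.0, angle_step):
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        rot = cv2.warpAffine(density, M, (w, h))
        score = rot.sum(axis=0).var() + rot.sum(axis=1).var()
        if score > best_score:
            best_angle, best_score = angle, score
    M = cv2.getRotationMatrix2D((w / 2, h / 2), best_angle, 1.0)
    return cv2.warpAffine(density, M, (w, h))
```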