BingyaoHuang / DeProCams

[TVCG & VR'21] DeProCams: Simultaneous Relighting, Compensation and Shape Reconstruction for Projector-Camera Systems

The usage scenarios of DeProCams. #1

Closed: yfaqh closed this issue 2 years ago

yfaqh commented 3 years ago

DeProCams is nice work, but it seems that DeProCams needs to be retrained for a new scene, doesn't it? If the answer is yes, then personally I think the generalization ability of DeProCams is a bit weak.

BingyaoHuang commented 3 years ago

@yfaqh Thank you for the comment; it is a really good question.

but it seems that DeProCams needs to be retrained for a new scene, doesn't it?

Yes, DeProCams needs retraining for a new scene.

If the answer is yes, then personally I think the generalization ability of DeProCams is a bit weak.

It depends.

If we talk about general CV applications, such as classification and detection, whose models can be trained once and applied to many other situations, then yes, the generalization of DeProCams is weak compared with them. But comparing such different tasks is like comparing apples and oranges.

If we talk about the three SAR tasks that DeProCams is designed for, i.e., relighting, projector compensation, and shape reconstruction, then compared with other solutions our generalization is actually better, since we require fewer devices and steps.

For example, for relighting, traditional light transport matrix (LTM)-based methods require recalculation for a new scene. Moreover, they may need additional radiometric calibration or optical devices [53, 79].
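
As a rough illustration of why LTM methods are inherently per-scene, the toy NumPy sketch below (ignoring color channels and the projector/camera radiometric responses; all sizes and names are made up for illustration) models the camera image as c = T p + c_env, where the light transport matrix T couples every projector pixel to every camera pixel:

```python
import numpy as np

# Toy sizes only; real setups have far more pixels, making T huge.
prj_pixels, cam_pixels = 16 * 16, 32 * 32

T = np.random.rand(cam_pixels, prj_pixels)  # scene-specific light transport matrix
c_env = np.random.rand(cam_pixels)          # camera-captured environment light

p = np.random.rand(prj_pixels)              # flattened projector input image
c = T @ p + c_env                           # relit camera image: c = T p + c_env
print(c.shape)                              # (1024,)
```

Changing the scene changes T (and usually c_env), so both have to be measured again, on top of any radiometric calibration.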

For projector compensation, the SOTA traditional method TPS+SL [19, 46] also requires recalculation for a new scene. Moreover, besides the color sampling images, it needs additional SL patterns.

For shape reconstruction, traditional SL also requires recalculation for a new scene.

To the best of our knowledge, no previous method can jointly address the three tasks. For traditional methods to address all three, they may need:

  1. additional radiometric calibration or optical devices;
  2. additional SL;
  3. to redo data collection and calculation for each new scene.

Our DeProCams waives requirements 1 and 2 above. Moreover, data collection and training do not take that long (compared with classification and detection tasks).

Some thoughts about improving generalization

How to generalize DeProCams to new scenes without capturing new sampling data or retraining is definitely an interesting direction to explore, especially if we want to extend this method to dynamic projection mapping. I have a few thoughts; please feel free to add to them 😊:

  1. There is a tradeoff between generalization and dataset size. Ideally, if we train DeProCams with a large and diverse dataset (like classification and detection), it will generalize well. But how do we collect such a dataset, and how do we make sure it covers different projector and camera models, settings, environment lighting, scenes, etc.? Some preliminary work was presented in CompenNeSt++, where we used Blender to synthesize projector-camera setups, but this dataset is still not large and diverse enough for us to waive fine-tuning for a new scene.
  2. We need to disentangle the learnable depth map dc from DeProCams: instead of directly optimizing dc as network parameters, we need a network to infer dc (a minimal sketch is given right after this list). For this, there are many prior works on deep learning-based depth estimation that we can refer to. Again, this also requires a large and diverse dataset.
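
To make point 2 concrete, here is a minimal PyTorch sketch (not the actual DeProCams code; `DepthNet`, the sizes, and the variable names are made-up placeholders) contrasting the two options: (a) optimizing the depth map dc directly as a per-scene parameter, versus (b) letting a small network infer dc from the camera-captured scene image, so the depth estimator could in principle be pretrained on a large dataset and reused across scenes:

```python
import torch
import torch.nn as nn

H, W = 240, 320  # placeholder camera resolution

# (a) Current per-scene style: dc is a directly optimized free parameter.
dc_param = nn.Parameter(torch.zeros(1, 1, H, W))
optimizer_a = torch.optim.Adam([dc_param], lr=1e-3)

# (b) Proposed: a (hypothetical) network infers dc from the scene image,
# so its weights, not dc itself, are what gets trained.
class DepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),  # 1-channel depth map dc
        )

    def forward(self, cam_img):
        return self.net(cam_img)

depth_net = DepthNet()
optimizer_b = torch.optim.Adam(depth_net.parameters(), lr=1e-3)

cam_img = torch.rand(1, 3, H, W)   # camera-captured scene image (dummy data)
dc_inferred = depth_net(cam_img)   # dc now comes from the network
print(dc_param.shape, dc_inferred.shape)
```

In (a) the optimized dc is tied to one scene, while in (b) generalization shifts to how well the depth network was trained, which is exactly where the large and diverse dataset from point 1 would come in.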

Thanks for bringing this up. Please let me know if you have any ideas; I'm happy to discuss this further.

yfaqh commented 3 years ago

@BingyaoHuang I got it, thanks for replying. Though I only recently started studying this direction, future work based on DeProCams will definitely be interesting. I'm also reading CompenNet++, CompenNeSt++, and other related papers to get some ideas. I look forward to discussing them with you. 😀

BingyaoHuang commented 3 years ago

Thanks for your interest; I'm more than happy to discuss future directions for this line of work 😊.