Closed. yfaqh closed this issue 2 years ago.
@yfaqh Thank you for the comments, it is a really good question.
but it seems that DeProCams needs to be retrained for a new scene, doesn't it?
Yes, DeProCams needs retraining for a new scene.
As for "If the answer is yes, personally, the generalization ability of DeProCams is a little bit weak": it depends.
If we talk about general CV applications, such as classification and detection, whose models can be trained once and applied to all other situations, then yes, the generalization of DeProCams is weak compared with them. But comparing different tasks is like comparing apples and oranges.
If we talk about the three SAR tasks, i.e., relighting, projector compensation, and shape reconstruction (which DeProCams is designed for), then compared with the other solutions, our generalization is actually better, requiring fewer devices and steps.
For example, for relighting, traditional light transport matrix (LTM)-based methods require recalculation for a new scene. Moreover, they may need additional radiometric calibration or optical devices [53, 79].
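To make the per-scene recalculation concrete, here is a toy sketch of LTM-based relighting (illustrative only; the matrix sizes and values are made up, not from any cited method). The key point is that the light transport matrix is measured for one fixed scene:

```python
import numpy as np

# Illustrative sketch: LTM-based relighting models the camera image c as a
# linear function of the projector input p, i.e., c = T @ p, where T is the
# light transport matrix measured for ONE fixed scene.
rng = np.random.default_rng(0)
num_cam_pix, num_prj_pix = 6, 4          # made-up toy sizes
T = rng.random((num_cam_pix, num_prj_pix))  # measured once per scene

p_new = rng.random(num_prj_pix)  # a novel projector illumination
c_relit = T @ p_new              # relit camera image, no new capture needed

# For a NEW scene, geometry and reflectance change, so T must be re-measured;
# that re-measurement is the "recalculation" mentioned above.
```

Once T is captured, any new illumination can be relit for free, but changing the scene invalidates T entirely.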
For projector compensation, the SOTA traditional method TPS+SL [19, 46] also requires recalculation for a new scene. Moreover, besides the colorful sampling images, it needs additional SL patterns.
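For intuition, compensation can be sketched as inverting a per-scene transport model: find a projector input whose captured result matches a desired target. This toy example assumes a simple linear model solved by least squares (it is not the TPS+SL method itself, and all data is made up):

```python
import numpy as np

# Toy sketch of projector compensation under an assumed linear model: given a
# per-scene light transport matrix T, find a projector input p_star such that
# the captured image T @ p_star matches a desired appearance c_des.
rng = np.random.default_rng(1)
T = rng.random((6, 6)) + np.eye(6)   # made-up, well-conditioned toy LTM
c_des = rng.random(6)                # desired camera appearance

# Least-squares compensation: p_star = argmin_p ||T p - c_des||^2
p_star, *_ = np.linalg.lstsq(T, c_des, rcond=None)
residual = np.linalg.norm(T @ p_star - c_des)

# As with relighting, the per-scene model (T here, or the TPS mapping in
# TPS+SL) must be re-estimated whenever the scene changes.
```

The design point is the same as for relighting: the inversion itself is cheap, but the per-scene model it inverts must be re-measured for every new setup.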
For shape reconstruction, traditional SL also requires recalculation for a new scene.
To the best of our knowledge, there is no previous method that jointly addresses the three tasks. For traditional methods to address all three, they may need:
1. additional radiometric calibration or optical devices [53, 79];
2. additional SL patterns besides the sampling images [19, 46].
Our DeProCams waives requirements 1 and 2 above. Moreover, data collection and training do not take that long (compared with classification and detection tasks).
How to generalize DeProCams to new scenes without capturing new sampling data or retraining is definitely an interesting direction to explore, especially if we want to extend this method to dynamic projection mapping. I have a few thoughts; please feel free to add to them.
Thanks for bringing this up. Please let me know if you have any ideas; I'm happy to discuss this further.
@BingyaoHuang Got it, thanks for replying. Though I only recently started studying this direction, future work based on DeProCams will definitely be interesting. I'm also reading CompenNet++, CompenNeSt++, and other related papers to get some ideas, and I look forward to discussing them with you.
Thanks for your interest! I'm more than happy to discuss future directions of this line of work.
DeProCams is a nice piece of work, but it seems that DeProCams needs to be retrained for a new scene, doesn't it? If the answer is yes, personally, the generalization ability of DeProCams is a little bit weak.