Closed andrewivan123 closed 3 months ago
This loss is used to ensure consistency among patches. When we use m1 mode, it is necessary to add this loss to get a satisfactory result (i.e., without fluctuations among patches as shown in Fig.3. Note that Fig.3 is from the Fine branch; the result from the Fusion model is much better but still shows a similar problem).
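The loss itself is no longer in the codebase, but the idea can be sketched roughly as follows (a minimal numpy sketch, assuming the loss penalizes disagreement between two patch predictions over their shared overlap region; the function and argument names here are hypothetical, not from the repo):

```python
import numpy as np

def consistency_loss(pred_a, pred_b, overlap_a, overlap_b):
    """Hypothetical patch-consistency loss: penalize disagreement
    between two patch predictions on their shared (overlapping) region.

    pred_a, pred_b : (H, W) depth predictions for two patches
    overlap_a, overlap_b : index tuples selecting the shared region in each patch
    """
    region_a = pred_a[overlap_a]
    region_b = pred_b[overlap_b]
    # Mean squared difference over the overlap; zero when the patches agree.
    return float(np.mean((region_a - region_b) ** 2))

# Two 4x4 patches whose right/left halves cover the same image region.
a = np.arange(16, dtype=float).reshape(4, 4)
b = a.copy()
b[:, :2] = a[:, 2:]  # b's left half equals a's right half
loss = consistency_loss(a, b,
                        (slice(None), slice(2, 4)),
                        (slice(None), slice(0, 2)))
print(loss)  # 0.0, since the overlapping regions agree exactly
```

In training this term would be added to the standard depth loss with a weight, which is exactly the hyperparameter that needs careful tuning.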
However, compared with the consistency-aware inference strategy, the effectiveness of the consistency loss is not so impressive. As shown in Tab.2, we can simply adopt a few more patches (m1 -> m2 mode) and gain a huge improvement in consistency error.
Another reason is that the loss weight needs to be tuned carefully to improve both the standard and the consistency metrics, and I don't have enough time to do this now.
For reproduction, people can run git checkout 2d87adc9
to see the details of the consistency loss. It would be great if the community would like to port it to the current codebase and search for suitable parameters.
I realised that since the update to include DepthAnything, you removed the consistency loss from the code. Is there any particular reason why?