Open wzking09 opened 10 months ago
For question 2, I extract the 'intermediates' from the first frame and keep them fixed for all following frames. It improves the results, but does this make sense?
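The fixed-intermediates trick above can be sketched as a simple loop. The `model(frame, mask, intermediates=None)` interface here is hypothetical (assumed to return the harmonized frame plus its predicted intermediates, and to accept precomputed intermediates to skip re-predicting them); adapt it to the actual network's API:

```python
def harmonize_video(model, frames, masks):
    """Harmonize frames one by one, reusing the 'intermediates'
    predicted on the first frame for every later frame.

    model(frame, mask, intermediates=...) is an assumed interface
    returning (harmonized_frame, intermediates).
    """
    outputs, fixed = [], None
    for frame, mask in zip(frames, masks):
        out, inter = model(frame, mask, intermediates=fixed)
        if fixed is None:
            fixed = inter  # lock in the first frame's prediction
        outputs.append(out)
    return outputs
```

This enforces a single color mapping across the clip, which explains the improved temporal consistency on short, single-scene videos.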
Hi @wzking09 ,
To my knowledge, the most efficient approach at the moment is to predict the parameters of color filters, which easily yields a lightweight and efficient network (though with inferior performance). I'd recommend Harmonizer as a reference for color-filter-based methods.
For the second question, yes, the inconsistency between adjacent frames is a problem worth researching. There are some existing solutions, which you can find in Section V-D of the newest version of our paper.
--------------------
Your possible solution for question 2 may work for short clips, but how would it generalize to long videos with varying scenes?
Thanks! I've tried Harmonizer on my own data. The predicted filter params vary a lot between frames; maybe there is a large gap between the training and test data sources. And yes, all my test data are very short videos.
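One generic way to damp that frame-to-frame jitter in the predicted filter params (not something from the paper, just a common post-hoc mitigation) is an exponential moving average over the per-frame parameter vectors. A minimal sketch, where `alpha` is an assumed tunable weight on the running average:

```python
def smooth_params(param_seq, alpha=0.8):
    """EMA-smooth a sequence of per-frame filter parameter vectors.

    param_seq: iterable of equal-length lists of floats.
    alpha: weight on the running average (higher = smoother).
    Returns one smoothed vector per input frame.
    """
    smoothed, running = [], None
    for p in param_seq:
        if running is None:
            running = list(p)  # first frame initializes the average
        else:
            running = [alpha * r + (1 - alpha) * x
                       for r, x in zip(running, p)]
        smoothed.append(list(running))
    return smoothed
```

This trades responsiveness to real scene changes for stability, so it mainly helps when the background is static, which matches the short-clip setting discussed here.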
Hi, splitting the image into patches helps save memory, but each patch is treated individually. Is there another way to utilize the global information of the original image for harmonization with limited GPUs? Another question: I'm trying to do video harmonization, and sometimes the results of adjacent frames differ even though the background is quite similar. The same thing happens in many other works. Is this an unsolved problem for image harmonization?
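For context, the memory-saving patch scheme described above amounts to tiling the image and running the network on each crop separately, which is exactly why the global color statistics are lost. A minimal sketch of such a tiler (the `patch`/`overlap` parameters are illustrative):

```python
def iter_patches(h, w, patch=256, overlap=0):
    """Yield (top, left, bottom, right) crop boxes that tile an
    h x w image with square patches of side `patch`.

    Each crop would then be harmonized independently, so no crop
    sees the full image's color statistics.
    """
    step = max(patch - overlap, 1)
    for top in range(0, h, step):
        for left in range(0, w, step):
            yield (top, left, min(top + patch, h), min(left + patch, w))
```

Overlapping patches (with blending at the seams) reduce visible borders between crops, but they still do not restore truly global context.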