IDKiro closed this issue 1 year ago
Our new data boosts performance more than the new network does. However, the VQ component still improves visual quality. Besides, we think the most interesting point of VQ is that the codebook brings the ability to adjust the enhancement degree, which cannot be achieved by previous learning-based approaches. Additionally, we didn't evaluate the quantitative results of other methods trained on our data; you can evaluate them yourself. We used IQA-Pytorch for evaluation.
Thanks, I may try it.
Great idea, and the dehazing effect is also quite good. Inspired by RestoreFormer, I tried using VQGAN to achieve dehazing, but the actual results were not satisfactory, mainly because the reconstructed images were blurry. Your approach to solving the problem has inspired me, and it's excellent work.
I still have a question, though. In your proposed method, which is more important: the new synthesis pipeline or the VQGAN-based model? I saw a few test samples in the paper where the baseline models were trained on the new dataset, but I'm curious about the specific quantitative results (even though I know that NR-IQA metrics are not very reliable).