IDKiro / DehazeFormer

[IEEE TIP] Vision Transformers for Single Image Dehazing

reproduce the results of demo #23

Closed happy-ngh closed 1 year ago

happy-ngh commented 1 year ago

I downloaded the models and ran inference on the images from https://huggingface.co/spaces/IDKiro/DehazeFormer_Demo, but I cannot get the same results as the demo.

IDKiro commented 1 year ago

The Hugging Face demo runs in a container, so you may want to rule out runtime environment differences first. I can't guess exactly what you've run into, so could you give me some examples?

happy-ngh commented 1 year ago

Thank you! I had used the wrong model from saved_models. The correct model is in https://huggingface.co/spaces/IDKiro/DehazeFormer_Demo/tree/main/saved_models, not in this repository. I have another question: why do the models on Google Drive perform worse than the smaller model in the demo repository?
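
For anyone reproducing this, here is a minimal sketch of how the demo weights could be fetched with `huggingface_hub` and loaded into the repo's model code. The checkpoint filename, the model variant, and the `state_dict` wrapping are assumptions; check the actual contents of `saved_models/` in the Space before using it.

```python
import torch
from huggingface_hub import hf_hub_download

# Model definitions from this repository; adjust the variant if the demo
# uses a different one (assumption).
from models.dehazeformer import dehazeformer_t

# Download a checkpoint file from the Space repo (filename is an assumption;
# check the actual files under saved_models/ in the Space).
ckpt_path = hf_hub_download(
    repo_id="IDKiro/DehazeFormer_Demo",
    repo_type="space",
    filename="saved_models/dehazeformer.pth",
)

model = dehazeformer_t()
state = torch.load(ckpt_path, map_location="cpu")
# Checkpoints in this project typically wrap the weights in a 'state_dict'
# key; fall back to the raw object otherwise.
if isinstance(state, dict) and "state_dict" in state:
    state = state["state_dict"]
model.load_state_dict(state)
model.eval()
```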

IDKiro commented 1 year ago

They are trained on different datasets:

1. The models provided on GitHub correspond to the paper and are trained on synthetic datasets. The goal is to achieve good evaluation scores compared with other models, demonstrating the model's learning ability.
2. The model in the demo is trained on a mixed real-world dataset. The aim is to make the model perform better on real hazy images, which is more convenient for users.