-
How to participate:
[https://github.com/deepinsight/insightface/tree/master/challenges/iccv21-mfr](https://github.com/deepinsight/insightface/tree/master/challenges/iccv21-mfr)
Submission server:
…
-
Firstly, it's hard to download the training dataset (vimeo_setuplet, about 80G) due to an unstable VPN.
Secondly, I carefully read the documentation of DeepHDRVideo-Dataset, but there is no Cinematic Wide Ga…
-
I got this error while testing my new model on the http://iccv21-mfr.com/ server. I don't know the root cause of the problem. Is the problem caused by the model-to-ONNX converter or by the version of onnxruntime t…
-
Section 4.4 of the paper (Cross-Manipulation Evaluation) states, "We use the raw version for evaluation as well as the competitors." But most of the CVPR 2021 and ICCV 2021 papers use c23 as the FF++ result to com…
-
Hi,
Thank you for the great work!
I retrained the toydesk model using the toy_desk_?.yml in the config folder, but the test result is much higher than the value in Table 1 (PSNR 25.59 for desk_top_2 and…
-
Hi all,
I read this benchmark on the model_zoo page; I'm curious about the South Asian and East Asian benchmarks. How were these benchmarks run, and which dataset was used?
https://github.com/deepinsight/i…
-
Hi,
Thanks for sharing this awesome work!
Can you point out where I can find the datasets used?
Thanks!
-
I found your model on MFR: http://iccv21-mfr.com/#/leaderboard/academic
Is that correct? Which dataset did you use for training?
Have you tried training with WF42M, and if so, what were your resul…
-
I haven't been able to upload the zip files of the models to ICCV21-MFR for a few days now. It looks like the server isn't responding to the upload request. Can you please check if the server is still …
-
Your work is excellent, but I have a few questions:
1. In the adversarial loss for dodging attacks, why is the cosine distance between the generated identity and the target identity used as the fir…