lidq92 / MDTVSFA

[official] Unified Quality Assessment of In-the-Wild Videos with Mixed Datasets Training (IJCV 2021)

Results in MSU Video Quality Metrics Benchmark #14

Open msm1rnov opened 1 year ago

msm1rnov commented 1 year ago

Hello! We have recently launched our video quality metrics benchmark and evaluated this algorithm on its dataset. The dataset's distortions are compression artifacts on professional and user-generated content. The method took 3rd place on the global leaderboard and 1st place on the no-reference-only leaderboard in terms of SROCC. You can see more detailed results here. If you have any other video quality metric (either full-reference or no-reference) that you want to see in our benchmark, we kindly invite you to participate: you can submit it to the benchmark by following the submission steps described here.

lidq92 commented 1 year ago

Wow~ Nice work on the VQA benchmark dataset! Congratulations on the acceptance of this work to the NeurIPS Datasets and Benchmarks Track. Thank you for the effort and for reporting the results of MDTVSFA (as well as the other two methods, https://github.com/lidq92/LinearityIQA/issues/26#issue-1476745859 and https://github.com/lidq92/VSFA/issues/47#issue-1469892994) on this benchmark dataset. If we have a new advanced metric, we will submit it to the benchmark.