snk4tr closed this pull request 1 year ago
Kudos, SonarCloud Quality Gate passed!
0 Bugs
0 Vulnerabilities
0 Security Hotspots
7 Code Smells
No Coverage information
0.0% Duplication
Merging #348 (2fd9b4d) into master (f8be57b) will decrease coverage by 1.41%. The diff coverage is 83.91%.
@@ Coverage Diff @@
## master #348 +/- ##
==========================================
- Coverage 92.02% 90.62% -1.41%
==========================================
Files 34 36 +2
Lines 2496 2869 +373
==========================================
+ Hits 2297 2600 +303
- Misses 199 269 +70
| Flag | Coverage Δ | |
| --- | --- | --- |
| unittests | 90.62% <83.91%> (-1.41%) | :arrow_down: |
Flags with carried forward coverage won't be shown.
| Impacted Files | Coverage Δ | |
| --- | --- | --- |
| piq/feature_extractors/clip.py | 80.83% <80.83%> (ø) | |
| piq/__init__.py | 100.00% <100.00%> (ø) | |
| piq/clip_iqa.py | 100.00% <100.00%> (ø) | |
| piq/utils/common.py | 97.01% <100.00%> (+1.36%) | :arrow_up: |
Ready for re-review.
For some reason I cannot reply directly to that comment, so I'll do it here. @denproc nice catch, and this one actually lets me guess with high probability why the original implementation had this type conversion. It turns out that Conv2d is implemented for half precision (i.e. float16) only for CUDA operations. Hence, we cannot really support float16 here, because we want to allow both CPU and GPU computation of all our metrics. As a result, I think it is fair to allow computation of the metric only in float32 dtype. However, nothing really stops us from working on a copy of the input tensor, so the dtype of the tensor the user passed in is left unchanged. I will add a comment about float32 to the code as well.
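The "cast on a copy" idea can be sketched as below. This is a minimal illustration of the approach discussed above, not the actual piq code; the helper name `_to_float32` is hypothetical:

```python
import torch

def _to_float32(x: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper: torch.Tensor.to returns a *new* tensor whenever
    # the dtype actually changes, so the caller's tensor keeps its dtype
    # and the metric can safely run Conv2d on CPU in float32.
    return x.to(dtype=torch.float32)

x_half = torch.ones(1, 3, 8, 8, dtype=torch.float16)
x_f32 = _to_float32(x_half)
assert x_half.dtype == torch.float16  # user's tensor is unchanged
assert x_f32.dtype == torch.float32   # internal copy used for computation
```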
In addition, we have to add CLIP-IQA to the documentation. This also makes me wonder whether all our metrics are covered in the documentation. I might check it later. UPD: Added #366
Kudos, SonarCloud Quality Gate passed!
0 Bugs
0 Vulnerabilities
0 Security Hotspots
5 Code Smells
No Coverage information
0.0% Duplication
This PR implements the CLIP-IQA metric described in Wang et al. (2022). Closes #331
The main reason to implement CLIP-IQA here is to let users compute the metric without bringing additional dependencies (mmcv/mmedit) into the project.
Note that CLIP-IQA+ won't be implemented here because its CLIP weights were fine-tuned with mmcv and hence cannot be loaded and run without it.
Note that the values of this implementation correspond to the values produced by the official CLIP-IQA implementation. SRCC scores of evaluations on public benchmarks may mismatch the ones listed in the paper. We consider the official code and weights to be the ultimate source of truth and hence stick with them.
Proposed Changes
results_benchmark.py
Some decisions that may be questioned in the future