Closed Obegg closed 1 year ago
(1) The site you linked does a number of things really poorly. For example, they test upscaling by downscaling with magick convert image_gray.png -filter catrom -resize 50% downscaled.png, which doesn't happen in linear light and is therefore incorrect. I wouldn't trust that blog post. It also predates color-managed PNG screenshots, it averages several very poor metrics (e.g. PSNR, SSIM) together with better ones, and it makes some very fundamental mistakes, like treating "anime4k" as though it's a scaler (it's not).
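For context, here is a minimal Python sketch of why linear light matters when resizing. It approximates the sRGB transfer function with a plain 2.2 gamma (an assumption, for brevity) and compares averaging a black and a white pixel in gamma space versus linear light:

```python
# Minimal sketch: averaging two sRGB-encoded pixels in gamma space vs.
# linear light. A plain 2.2 gamma stands in for the real sRGB transfer
# function here, purely to keep the example short.

def to_linear(v):   # decode: gamma-encoded [0,1] -> linear light
    return v ** 2.2

def to_gamma(v):    # encode: linear light -> gamma-encoded [0,1]
    return v ** (1 / 2.2)

black, white = 0.0, 1.0

# Naive average in gamma space (what a non-linear-light resize does):
gamma_avg = (black + white) / 2

# Correct average in linear light, re-encoded afterwards:
linear_avg = to_gamma((to_linear(black) + to_linear(white)) / 2)

print(round(gamma_avg * 255))    # 128: noticeably too dark
print(round(linear_avg * 255))   # ~186: a true 50/50 black/white mix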
spline36 and mitchell generally outperform ewa_lanczos on live-action content, where ewa_lanczos tends to oversharpen. If you primarily watch anime, feel free to use ewa_lanczos.
(2) linear-downscaling=yes
the only reason I disagree is because I use SSimDownscaler.glsl and it requires linear-downscaling=no
If your shader requires linear downscaling to be disabled then your shader either linearizes by itself or it's wrong. In either case, it doesn't make sense to disable downscaling in linear light because you personally use a 3rd-party shader. If you "disagree" you can just override it. Downscaling outside of linear light is not how you downscale. These are intended to be sane defaults, not the best for your personal computer and your personal setup.
(3) sigmoid-upscaling also confuses me:
When upscaling, use a sigmoidal color transform to avoid emphasizing ringing artifacts. This is incompatible with and replaces --linear-upscaling. (Note that sigmoidization also requires linearization, so the LINEAR rendering step fires in both cases)
(4) deband should absolutely be no (default) in order to be considered "high quality". mpv's deband algorithm is very lightweight and significantly increases quality on the vast majority of content (i.e. content that isn't already debanded). If you watch content that is primarily debanded in advance (e.g. through f3kdb) then you should disable it on your own system. These are intended to be sane defaults, not the best for your personal computer and your personal setup.
I have no comment on hdr-peak-percentile and hdr-contrast-recovery, those are better questions for @haasn. Especially considering that hdr-peak-percentile is not documented.
Just to add a couple of things to the above comment
cscale
Technically yes, but you're wasting GPU cycles on something that really isn't (imo) perceptible outside of unrealistic cherrypicked samples made for showing off your meme algorithm. I don't think the chroma scaler matters too much, and using memes can sometimes result in worse output. I can't see the difference between even spline36 and KrigBilateral on most content unless I bind a key to toggle between them, zoom in 200%, pause the video and spam the key to see if there are any differences.
dscale
No; for downscaling, ewa_lanczos is the sharpest but results in ringing artifacts. There is some antiringing stuff merged in libplacebo that might help with this, but I haven't done extensive testing yet. Generally you want to use mitchell, or catmull_rom if you want something slightly sharper than mitchell.
dither-depth
auto doesn't actually work with the Vulkan API for some people; you should set a specific value instead.
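If you hit that, the usual workaround is pinning the depth to your panel. A sketch of the relevant mpv.conf line (the value 8 is an assumption for a typical SDR panel; use 10 for 10-bit output):

```
# mpv.conf: work around dither-depth=auto misdetection under Vulkan
# (8 assumes a typical 8-bit SDR panel; use 10 for 10-bit output)
dither-depth=8
```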
If you're going to refer to my meme blog at least use the newest version of the benchmarks: https://artoriuz.github.io/blog/imagemagick_resampling.html
In any case, I'll quote myself to make things clear:
As usual, the filters are ranked based on full-reference distortion metrics that may not always correlate with the human perception of quality, and your personal preference is entirely subjective.
I have a few doubts about the values that are specified for this profile.
profile=gpu-hq is meant to give a good baseline of defaults if you have a decent iGPU or a dedicated GPU. mpv works OOTB and has good defaults, but rendering and GPU-heavy tasks are purposefully left up to the user's discretion, since mpv makes no assumptions about your hardware outside of the minimum requirements.
You have to keep in mind that scaler preference is subjective, but I can give you some pretty solid justifications for why all of these settings are the default in gpu-hq, and don't need to be changed.
for scale isn't ewa_lanczos the higher quality one instead of spline36? for cscale isn't ewa_lanczos the higher quality one instead of spline36?
Even though ewa_lanczos gets processed as a 1D LUT by mpv, it's still relatively slow. spline36 is a polynomial approximation of tensor lanczos, which makes it relatively fast despite being a good-quality scaling kernel. spline36 is actually slightly sharper than ewa_lanczos, but the main advantage that ewa_lanczos has over spline36 is that it's really good at anti-aliasing. Anti-aliasing becomes more relevant at higher scale factors, but whether or not you can actually tell the difference between ewa_lanczos and spline36 at lower scale factors is debatable. spline36 also rings less than ewa_lanczos, but since ewa_lanczos is an elliptical filter, the ringing is much softer, which could be less noticeable depending on the content.
Edit: Just noticed that ewa_lanczos was replaced with lanczos. But spline36 and lanczos are basically identical because, like I said, spline36 is a polynomial approximation of lanczos. If you really want to get into the weeds about it, spline36 has slightly less ringing and aliasing than lanczos in exchange for slightly less acutance. But the difference is basically imperceptible.
for dscale isn't ewa_lanczos the higher quality one instead of mitchell?
mitchell is soft but avoids the moiré effect and doesn't result in noticeable ringing when downscaling. Usually the sharpest you can get away with for downscaling is catmull_rom, which is another high-quality bicubic filter, and it only overshoots once.
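For reference, both of those kernels are members of the well-known Mitchell-Netravali (B, C) bicubic family: mitchell is (1/3, 1/3) and catmull_rom is (0, 1/2). A small Python sketch of the kernel makes the soft-vs-sharp tradeoff concrete (this is the textbook formula, not mpv's code):

```python
# Mitchell-Netravali (B, C) bicubic kernel.
# mitchell is (B, C) = (1/3, 1/3); catmull_rom is (0, 1/2).

def bc_kernel(x, b, c):
    x = abs(x)
    if x < 1:
        return ((12 - 9*b - 6*c) * x**3
                + (-18 + 12*b + 6*c) * x**2
                + (6 - 2*b)) / 6
    if x < 2:
        return ((-b - 6*c) * x**3
                + (6*b + 30*c) * x**2
                + (-12*b - 48*c) * x
                + (8*b + 24*c)) / 6
    return 0.0

mitchell    = lambda x: bc_kernel(x, 1/3, 1/3)
catmull_rom = lambda x: bc_kernel(x, 0.0, 1/2)

# catmull_rom has a taller center tap and one negative lobe per side
# (the single overshoot); mitchell's lower center tap is why it's softer.
print(catmull_rom(0.0))   # 1.0
print(mitchell(0.0))      # ~0.889
print(catmull_rom(1.5))   # the negative lobe: -0.0625
```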
linear-downscaling=yes the only reason I disagree is because I use SSimDownscaler.glsl and it requires linear-downscaling=no.
Downscaling in gamma light is incorrect. For context, igv only recommends downscaling in gamma light because it makes ringing less noticeable, which is his own personal preference. Linear light will preserve bright details, and is typically more accurate to the source image.
sigmoid-upscaling also confuses me. I actually use FSRCNNX and have no idea what the value does or why it's important.
Upscaling in linear light sucks because usually we're using a ringy kernel to upsample the image (even a soft bicubic like mitchell has overshoot that will be overemphasized during upsampling). Sigmoidal upscaling is basically a way of de-emphasizing ringing artifacts.
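To make that concrete, here is an illustrative Python sketch of a sigmoidal transform pair: scaling happens in the "sigmoidized" domain, and the transform is undone afterwards. This is not mpv's actual implementation; CENTER and SLOPE are assumptions chosen to mirror the roles of --sigmoid-center and --sigmoid-slope.

```python
import math

# Illustrative sigmoidal transform, not mpv's exact code.
# CENTER/SLOPE play the role of --sigmoid-center / --sigmoid-slope.
CENTER, SLOPE = 0.75, 6.5

def _sig(x):
    return 1.0 / (1.0 + math.exp(SLOPE * (CENTER - x)))

# Normalize so the curve maps [0,1] onto [0,1] exactly.
_LO, _HI = _sig(0.0), _sig(1.0)

def sigmoidize(x):
    """Compress linear light before upscaling (squashes the extremes,
    so kernel overshoot near hard edges is de-emphasized)."""
    y = _LO + x * (_HI - _LO)
    return CENTER - math.log(1.0 / y - 1.0) / SLOPE

def desigmoidize(x):
    """Undo the transform after scaling."""
    return (_sig(x) - _LO) / (_HI - _LO)
```

The round trip is lossless for in-range values; the benefit comes purely from the scaler's overshoot landing in the flat ends of the S-curve.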
deband should absolutely be no (default) in order to be considered "high quality"
mpv's default debanding settings are tuned to be universal for both good and bad sources. Theoretically, it could result in detail loss for a source that has already been filtered for banding, but mpv's defaults don't seem aggressive enough to actually be a net-negative in this scenario. Disabling deband is only for peace of mind.
Thank you all for your answers! There are some things I did not know and now I do.
The site you linked does a number of things really poorly
Oh, but I couldn't find any other source of information about this subject. Do you happen to know a good source that covers the same material?
These are intended to be sane defaults, not the best for your personal computer and your personal setup.
I was not thinking about "the best for your personal computer", I was thinking about "high quality". I was thinking about which values are going to result with the higher quality.
mpv's deband algorithm is very lightweight and significantly increases quality on the vast majority of content (i.e. content that isn't already debanded).
deband=yes significantly increases quality? I admit it sounds quite hard to believe, because if you watch "high quality" (Blu-ray) content, then it makes sense to use deband=no in order not to mess with the source video material, right?
auto doesn't actually work with vulkan API for some people, you should set a specific value instead.
I am aware of that issue, but I use D3D11, which doesn't have that issue.
If you're going to refer to my meme blog at least use the newest version of the benchmarks
Why are you calling it a "meme blog"? I was actually impressed with the information. Also, I did not know there was a new version of the benchmarks.
Even though ewa_lanczos gets processed as a 1D LUT by mpv, it's still relatively very slow.
(It might seem like a stupid question to you.) But why should you care about "slow" if the quality is better? It's video, not a game where you need to care about latency.
spline36 is actually slightly sharper than ewa_lanczos, but the main advantage that ewa_lanczos has over spline36 is that it's really good at anti-aliasing. Anti-aliasing becomes more relevant at higher scale factors, but whether or not you can actually tell the difference between ewa_lanczos and spline36 at lower scale factors is debatable. spline36 also rings less than ewa_lanczos, but since ewa_lanczos is an elliptical filter, the ringing is much softer, which could be less noticeable depending on the content.
To summarise:
spline36 is sharper than ewa_lanczos.
ewa_lanczos is better at anti-aliasing.
spline36 rings less than ewa_lanczos.
You have made some good points here.
I actually need anti-aliasing because I watch 1080p content on a 4K display.
I'll switch to gpu-hq from now on and try it.
mitchell is soft but avoids the moiré effect
I did not even know about this effect until I searched for it.
Downscaling in gamma light is incorrect. For context, igv only recommends downscaling in gamma light because it makes ringing less noticeable, which is his own personal preference. Linear light will preserve bright details, and is typically more accurate to the source image.
Thank you! This is very informative!
deband=yes significantly increases quality? I admit it sounds quite hard to believe, because if you watch "high quality" (Blu-ray) content, then it makes sense to use deband=no in order not to mess with the source video material, right?
Banding artifacts are typically associated with highly compressed/re-encoded content, but quite a few Blu-ray sources do indeed suffer from artifacts such as banding and aliasing, especially if the digital mastering was done terribly. Unfortunately, a surprisingly high number of studios don't really care about this kind of stuff when mastering BDs.
This is why I made the caveat that turning deband off could be a net-positive if you are 100% certain your source has been filtered for banding artifacts and other forms of quantization noise. But even then, people who do this filtering during encoding do it under the assumption that your TV or media player already has its own debanding filters that will double up on top of the pre-filtering.
But why should you care about "slow" if the quality is better?
The initial question was about why gpu-hq defaults to spline36 instead of ewa_lanczos. And if we were to make a relative comparison between all of the scalers, ewa_lanczos is significantly more taxing than spline36. You can still drop frames in a video. Also, there are some users who don't want to tax their systems while watching a video for various reasons (higher power consumption, louder fan noise, etc.).
To summarise: spline36 is sharper than ewa_lanczos, ewa_lanczos is better at anti-aliasing, and spline36 rings less than ewa_lanczos. You have made some good points here. I actually need anti-aliasing because I watch 1080p content on a 4K display. I'll switch to gpu-hq from now on and try it.
Keep in mind that the differences here are quite small, especially if your scale factor is only 2X (1080p->2160p).
Also, you mentioned that you use FSRCNNX, which is a luma doubler. Your --scale won't be kicking in anyway at a 2X factor.
studios that don't really care about this kind of stuff when mastering BDs
I see your point, but even if you have such artifacts on such high-quality content (not talking about re-encodes, only the master BD), it was the "creative intent" to have those artifacts, and by using deband=yes you are no longer seeing how the video should have been seen.
If the studio released their BDs with artifacts then this is how it should be seen.
(Or am I wrong here?)
why gpu-hq defaults to spline36 instead of ewa_lanczos ... users who don't want to tax their systems
But if users don't want to tax their systems, then why wouldn't they use the default values rather than the gpu-hq profile, which by its name means "high quality"? I mean, this profile is (I think) intended to be the highest-quality profile, without worrying about taxing the system; the only purpose is to get the best value for each setting, and if as a result this high-quality profile is too taxing, then you shouldn't use it, right?
it was the "creative intent" to have those artifacts
Not really. Those artifacts are produced during the mastering process when the studio is preparing to ship out retail copies, and are mostly caused by incompetence, not creative intent. It's not uncommon to have one revision of a BD release that is littered with artifacts, and another revision that doesn't have said artifacts. You can even have BD revisions with completely different qualities of chroma information due to different subsampling techniques. This is because different companies take different levels of care when they're digitally mastering the original content that is provided to them for distribution. You might have a German BD release of a film that has significantly higher visual fidelity than a US BD release of the same film, just because the distribution of the content was handled differently.
you are no longer seeing how the video should have been seen.
Just to be clear, if you truly believed this, then you would be against all forms of visual interpolation in general. If you're upscaling the image, you're not watching the video how it "should have been seen". Upscaling is an imperfect solution to a common problem, because the only way to watch something the way it was intended is to go to a movie theater and watch it on the silver screen. This extends out to debanding as well. I could very well make the argument that the original film never intended to have blocky banding artifacts caused by digital mastering, and by debanding it I'm getting closer to watching the intended picture.
I see your point, but even if you have such artifacts on such high quality content (not talking about re-encodes, only master BD), it was the "creative intent" to have those artifacts, and by using deband=yes you are no longer seeing how the video should have been seen. If the studio released their BDs with artifacts then this is how it should be seen.
@dokur0chan summarized it nicely. I can add that you would be surprised how many official releases are low quality. Wrong levels, wrong colorspace, terrible upscaling, very heavy DNR and so on. You cannot take anything for granted. There is a community of people who compare different releases from around the world to find the best-looking one (in most cases, the least broken one).
hdr-peak-percentile is confusing for me. I'm not sure if it's supposed to be 100 or 99.995 (for a high-quality profile), and what difference does 0.005 make on a scale from 1 to 100?
It is a way to avoid measuring an overly high peak brightness for a scene when that is not appropriate. Often it is a single specular reflection or even an encoding artifact. We have seen some files (mostly demos) where 9000 nits was measured while only a few pixels were that bright and the actual scene was ~1200 nits. This is a huge difference that results in a significantly darker image than it should be. 99.995% was picked empirically to not clip any real highlights, but to filter out those really small areas that are better off clipped.
I personally don't think this option should be exposed in mpv. It is not something that users should adjust. It is an implementation detail, but since mpv aims to map every single option, here we are.
This option exists because 100% is faster on slow iGPUs, since the histogram is not computed in that case. A simple on/off switch would be enough; you don't really want to change this value to anything else.
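The effect is easy to see with a toy example. The sketch below uses made-up frame data and a simplified nearest-rank percentile (the real implementation uses a histogram); it shows how the 99.995th percentile ignores a handful of outlier pixels that the true maximum would latch onto:

```python
import math

# Toy "frame": 100,000 luminance samples around 1200 nits, plus a
# handful of outlier pixels at 9000 nits (e.g. a specular highlight
# or an encoding artifact). All values are made up for demonstration.
frame = [1200.0] * 100_000 + [9000.0] * 5

def percentile(samples, p):
    """Nearest-rank percentile (simplified stand-in for a histogram)."""
    s = sorted(samples)
    idx = min(len(s) - 1, math.ceil(p / 100 * len(s)) - 1)
    return s[idx]

print(max(frame))                 # 9000.0: naive peak, way too high
print(percentile(frame, 99.995))  # 1200.0: the outliers are ignored
```

Tone-mapping to the 9000-nit "peak" would darken the whole scene for the sake of five pixels; the percentile keeps the measured peak at the level the scene actually occupies.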
hdr-contrast-recovery, I guess, is for HDR (video) -> SDR (display), so I don't know which value is better (I use HDR -> HDR).
You have to check for yourself which value you prefer. I think the default is picked to do a slight adjustment that is an improvement most of the time. But it is best if you adjust it to your liking.
This is the current builtin.conf for gpu-hq.
I think gpu-hq is a good starting point if you have a powerful enough GPU. I personally would like even more granular profiles, like fast, default and hq. But the current status quo is good enough, and it is always difficult to agree on one "blessed" settings preset. So it is left for users to tinker and adjust.
Thank you to everyone here for helping me out and answering my question.
I now use profile=gpu-hq with every value as described in the profile.
(I also use gpu-next with the following shaders: FSRCNNX, KrigBilateral.glsl and SSimDownscaler.glsl.)
Might be unrelated: I have a few suggestions to add to the gpu-hq profile, such as scaler-lut-size=10 (since you want to improve the accuracy of the scaler), or opengl-pbo=yes (since this can be faster, especially if the source video is huge, e.g. so-called "4K" video), or dither=ordered (I don't remember where I read it, but I believe it's better than the default, fruit), or fbo-format=rgba32f (since it can influence the performance and quality of the video output). All of those values seem (by their descriptions) to improve quality, so I thought to mention them here at least.
(Again, if I am wrong about something here feel free to correct me.)
@Obegg
Why are you calling it a "meme blog"? I was actually impressed with information. Also I did not know there is a new version of benchmarks.
The reasons are mostly explained in the IM benchmarking page I linked. But I've also rewritten the mpv one this afternoon to prevent this kind of thing from happening again.
In short, the old page was an organic mess. It evolved over time as people suggested changes, but it wasn't very coherent. The new blog posts are much more consistent and have most of the flaws either fixed or remediated.
Still, the question those blog posts try to answer is: "if we downsample the image, throwing away high-frequency information, which scaler does the best job at reconstructing this missing information?"
If we had an image quality metric that correlated 100% with the human perception of quality, then maybe we would be able to answer this question. But we don't, which is why all of those pages employ a variety of metrics instead. Also, I'm now sticking with the standard distortion metrics because they're pretty much a solved science; nobody is seriously trying to replace them with a new set of metrics, and most work is being put into researching perceptual metrics instead. MAE, PSNR and SSIM are honestly good enough if we just want to know which scaler is getting closer to the reference.
But the remark I left is important, being "closer to the reference" may not result in a better looking picture for you. Polar Lanczos is measurably blurrier than orthogonal Lanczos for example, but it's also much better at preventing aliasing, which is not something these metrics can capture very well.
TLDR: There's no true winner here; it's mostly about taste. Feel free to use my blog posts as a reference to understand these things, but refrain from using them as a reason to change the defaults. mpv devs know what they're doing: they're certainly aware that Lanczos is sharper than spline36, but spline36 also has its advantages (it rings/aliases less), which is why it's the default. If they were willing to change the defaults, I would actually advocate for polar Lanczos instead.
Also, I'm now sticking with distortion metrics because they're pretty much a solved science, nobody is trying to come up with new distortion metrics, and they work well enough when we just want to know which scaler gets closer to the reference.
SSIM has a notable flaw where it corrects for gamma on already corrected content, which makes it somewhat mathematically dubious.
And it's not true that new metrics aren't being introduced: D-SSIM, Butteraugli, and SSIMULACRA2 are all quite good, fairly recent, and much better than crap like PSNR or the original SSIM.
@Obegg
scaler-lut-size=10
This should be left at default. Increasing it doesn't positively or negatively affect the picture quality.
opengl-pbo=yes
--vd-lavc-dr accomplishes a similar goal, but on steroids. --vd-lavc-dr should be left at auto (the default).
dither=ordered
I don't know a lot about dithering, but isn't fruit considered superior to ordered dithering when it comes to static dither patterns? I'm pretty sure mpv's default dither behavior applies a blue-noise matrix that evenly distributes the dithering, creating a high-quality and unnoticeable dither pattern (if you can see your dither pattern during playback, then the purpose of dithering has failed).
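As a quick illustration of why dithering matters at all: quantizing a smooth ramp without dither collapses it into flat bands, while adding noise before rounding preserves the average level. A pure-Python toy, with white noise standing in for a real blue-noise or ordered matrix (an intentional simplification):

```python
import random

random.seed(0)  # fixed seed so the demo is deterministic

# Quantize a smooth 0..1 ramp down to 2 bits (4 levels), with and
# without dither. White noise stands in for a real blue-noise/ordered
# matrix; the point is only that dither preserves the mean level.
LEVELS = 3  # max code value for 2-bit output

ramp = [i / 9999 for i in range(10000)]

plain = [round(v * LEVELS) / LEVELS for v in ramp]
dithered = [
    min(LEVELS, max(0, int(v * LEVELS + random.random()))) / LEVELS
    for v in ramp
]

mean = lambda xs: sum(xs) / len(xs)
# Undithered output collapses to 4 flat bands; dithered output breaks
# up the band edges and keeps the gradient's average intensity.
print(len(set(plain)))                           # 4 distinct levels
print(abs(mean(dithered) - mean(ramp)) < 0.01)   # mean preserved
```

What fruit vs. ordered changes is only the spatial distribution of that noise, i.e. how visible the pattern itself is, not whether the levels are preserved.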
fbo-format=rgba32f
No-op on gpu-next. But even on gpu, this cargo-cult was arguably placebo.
I suggest you stay wary of configs you find on the internet, as the typical mpv.conf guides tend to be outdated or misleading.
@Obegg Thank you for asking this question. The responses have improved my understanding of these various settings considerably. Like you I also wanted the best possible video quality, and since I have a high-end PC, the performance hit did not matter that much to me.
I started with the --gpu-hq profile, but changed --scale and --cscale to ewa_lanczossharp (ewa_lanczos if you are using --vo=gpu-next), because the devs recommended it as giving the best quality. I personally could not tell the difference between spline36 and ewa_lanczossharp in real-world testing.
I also tried and tested various third party scalers and shaders, but either did not notice any significant improvement, or the slight improvement did not justify the huge increase in gpu resource consumption. So I stopped using them.
Eventually, I settled on --gpu-hq with ewa_lanczos as the --scale/--cscale filter, which, with the following settings added, gives the best possible video quality from mpv on my system.
--vo=gpu-next, --ao=wasapi, --gpu-api=vulkan (this gives better picture quality than d3d11 on my system), --hwdec=auto
(the following three are not necessary but provide an overall improvement on my system)
--video-sync=display-resample, --interpolation=yes, --tscale=oversample
And that's all the custom settings I use. Hope this information helps. Keep in mind that I am not an expert on these settings, and the perception of video quality is very subjective; it varies from person to person and from system to system. What works for one may not work for another.
~Raza
I have two questions regarding the following two settings in --gpu-hq:
--hdr-peak-percentile=99.995
--hdr-contrast-recovery=0.30
1- Do these settings apply to HDR content being played on an HDR display, to HDR content being played on an SDR display, or to both?
2- If one sets their GPU to control the video color settings, do the above two settings still apply, or does the GPU override them? For example, on Nvidia GPUs you can set the option under "Video > Adjust video color settings" to have Nvidia set the video color (brightness, contrast, saturation, hue) and gamma instead of individual applications.
Thanks,
~Raza
Downscaling in gamma light is incorrect. For context, igv only recommends downscaling in gamma light because it makes ringing less noticeable, which is his own personal preference. Linear light will preserve bright details, and is typically more accurate to the source image.
Well, it's not just igv's preference. The same way you do sigmoidal upscaling to de-emphasize ringing artifacts, why not do the same for downscaling? Times have changed: we now have bigger and bigger source resolutions that need to be squeezed into 1080p and 1440p screens, not to mention luma doublers that in most cases give better results even when used for small upscaling factors; one example is FSRCNNX, which is commonly used.
Why not use the same approach as upscaling, when you can see how much ringing is removed in bright areas when downscaling with linear-downscaling=no? Not to mention there is not a huge difference in the image other than less ringing near bright areas:
[screenshot comparison: linear-downscaling=yes vs. linear-downscaling=no]
Another one, plus a full diff:
[screenshot comparison: linear-downscaling=yes (IMO terrible) vs. linear-downscaling=no]
IMHO, if you ran PSNR on linear-downscaling=yes vs. linear-downscaling=no, my bet is that linear-downscaling=no would win, so "is typically more accurate to the source image" is not true; ringing is atypical, and a lot of algorithms would pick it up as a deviation from the norm.
Update, with actual tests for a typical downscale: only soft scalers "benefit" from linear-downscaling=yes. Check out ewa_lanczossharp in the test above to get an idea.
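For anyone who wants to run that comparison themselves, PSNR is trivial to compute. A minimal sketch (pure Python, flat lists of 8-bit grayscale pixel values; the synthetic "edge" data is made up to show how overshoot lowers the score):

```python
import math

def psnr(ref, test, peak=255.0):
    """PSNR in dB between two equally sized 8-bit grayscale images
    given as flat lists of pixel values. Higher = closer to reference."""
    assert len(ref) == len(test)
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / mse)

# Sanity check on synthetic data: an exact copy scores infinitely high,
# and a ringing-like under/overshoot at an edge lowers the score.
edge      = [16] * 8 + [235] * 8
overshoot = [16] * 7 + [0] + [255] + [235] * 7

print(psnr(edge, edge))              # inf
print(round(psnr(edge, overshoot), 1))
```

For real tests you would of course compare full downscaled frames against a reference, not toy rows, but the metric itself is just this.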
You have to keep in mind that scaler preference is subjective, but I can give you some pretty solid justifications for why all of these settings are the default in gpu-hq
Honestly, I didn't really test spline for upscaling since I use external upscalers, but for downscaling I just couldn't find a good solution, and mitchell seemed like it had the least noticeable side effects.
Catmull was pretty good if the source was okay, but it seemed to fall apart on lower-res/poorly encoded content. I tried all the others, and lanczos (any variant except the blurry r2) resulted in horrendous ringing/aliasing or other artifacts. Hermite makes stuff like clouds look horrendous and doesn't perform any faster on my system, though apparently it's very fast. And the others each excelled in one area but suffered more in others, so I couldn't consider them. I had some luck with the polar versions, but they were extremely heavy, so they weren't worth the cost.
In the end I settled on a slightly sharper version of mitchell (dscale-param1=0.3 dscale-param2=0.35). I have no idea if it should be based on mathematically derived numbers, but that's what I personally find has the best tradeoffs.
Hermite makes stuff like clouds look horrendous
Can you provide a sample or comparison? Just curious
Can you provide a sample or comparison? Just curious
Couldn't find the show I noticed it most on, but basically anything that should be blurry (depth of field) looks really bad; clouds are just something where I easily noticed it looking wrong.
Here's a comparison of some clouds from Mysterious Girlfriend X (20:34, ep. 1) from HiDive, upscaled to 4K then downscaled to 1440p (or 2488x1400 in my case): Hermite: https://i.imgur.com/0LakXh1.png Mitchell: https://i.imgur.com/eLZv6Md.png Sharper Mitchell: https://i.imgur.com/3AmYgOv.png No downscaling (just upscaling with mpv's default lanczos): https://i.imgur.com/uEUbP53.png
EDIT: It seems harder to spot in real 4K content, but oddly I can easily spot a difference in colours :/ EDIT: Real content: https://www.youtube.com/watch?v=lx9qeRLUYs8 4K to ~1980p. Hermite: https://i.imgur.com/fF3FDEO.png Mitchell (0.3/0.35): https://i.imgur.com/uVXzIgz.png
The mitchell image actually seems to be sharper in the area of focus (due to the contrast in the blur?) while the hermite image suffers from darkening the image too much.
@llyyr, he has a special user case, see: https://github.com/mpv-player/mpv/pull/12384#issuecomment-1717754013
I have edited my shaders a bit: glsl-shaders="~~/shaders/ravu-zoom-ar-r3g.hook;~~/shaders/FSRCNNX_x2_8-0-4-12.glsl;~~/shaders/Anime4K_CNN_x2_VL.glsl;~~/shaders/FSR5.glsl;~~/shaders/nlmeans_light7.glsl;~~/shaders/Anime4K_Thin_HQ7.glsl;~~/shaders/KrigBilateral.glsl"
Using FSR with a very low amount of sharpening provided better results than using a sharper downscaler, and the thinning is very slight too, to help remove aliasing. Obviously it is niche, but I'm still able to notice it, although to a lesser degree (since I would reduce or remove FSR sharpening if using a sharper downscaler), when using just FSRCNNX or no shaders.
Dude, your source has banding & blocking of its own. Hermite is just preserving the source characteristics better. If you want to smooth all the garbage out when downscaling then just use a blurry filter like b=1, c=0, lol.
Still, the use of so many shaders, in my opinion, completely invalidates any downscaling quality assessment.
Use 4K and 8K videos and downscale to your target res with:
mpv --no-config --vo=gpu-next --gpu-api=vulkan --correct-downscaling=yes --linear-downscaling=yes --dscale=your_downscaler --keep-open=yes --fs=yes --screenshot-format=png --screenshot-tag-colorspace=yes --screenshot-high-bit-depth=no
To take scaled screenshots in mpv, use Ctrl+s.
You can also test with a huge image like this one.
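Once you have the reference and the scaled screenshots, they can be compared numerically. A toy Python sketch, assuming flat pixel lists and using plain mean absolute error in linear light (the function names are hypothetical; a real comparison would decode the PNGs with an image library and use a proper metric):

```python
def srgb_to_linear(v):
    """sRGB EOTF for a normalized value in [0, 1]."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def mean_abs_error_linear(ref, test):
    """Mean absolute error between two same-sized 8-bit pixel lists,
    computed in linear light so errors aren't weighted by the gamma curve."""
    assert len(ref) == len(test)
    return sum(abs(srgb_to_linear(a / 255) - srgb_to_linear(b / 255))
               for a, b in zip(ref, test)) / len(ref)

# Toy example with two tiny "images": identical inputs give 0 error.
print(mean_abs_error_linear([0, 128, 255], [0, 128, 255]))  # -> 0.0
```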
Dude, your source has banding & blocking of its own. Hermite is just preserving the source characteristics better. If you want to smooth all the garbage out when downscaling then just use a blurry filter like b=1, c=0, lol.
Yeah, that's the problem though: I need a downscaler that can look good on ALL content, which is why I was saying Mitchell had the fewest side effects and wasn't a bad default. Not everyone plays Blu-ray sourced content.
Yeah, that comparison is kind of pointless when it goes through so many meme shaders. If anything, Hermite is doing a better job at preserving the artifacts introduced by them.
There's no reason to judge what people do with their configs, since at this point peace of mind and preferences is more important, so if seeing your GPU at 100% load gives you peace of mind then go for it. But you should try using only one of ewa_lanczossharp/ewa_lanczos4sharpest, ravu-zoom-ar or nnedi3 for upscaling maybe, and see which you like more.
And it's not true that new metrics aren't being introduced - D-SSIM, Butteraugli, and SSIMULACRA2 are all quite good
None of them are good for evaluating scalers. SSIM with MAD pooling is much better in this case (python impl).
Downscaling in gamma light is incorrect.
Wrong.
For context, igv only recommends downscaling in gamma light because it makes ringing less noticeable, which is his own personal preference.
Also wrong. Why are you spreading FUD?
There's no reason to judge what people do with their configs, since at this point peace of mind and preferences is more important, so if seeing your GPU at 100% load gives you peace of mind then go for it. But you should try using only one of ewa_lanczossharp/ewa_lanczos4sharpest, ravu-zoom-ar or nnedi3 for upscaling maybe, and see which you like more.
I never load it at 100%, I aim for 60% at most. If I had Blu-ray sources then sure, they would probably be enough, but streaming content from Crunchyroll and especially HiDive can look really nasty. Trying to fix that is the whole reason I started using mpv, otherwise I'd probably still be using madVR... While I personally don't like Hermite and don't see the performance difference on my machine, benchmarks show it can be quite a bit faster. Having a better upscaler seems to make a larger difference than the downscaler, as most people are probably playing files at a smaller resolution than their monitor, and the people who don't have a 4k monitor probably have much weaker GPUs, so downscaling performance matters more to them.
Also wrong. Why are you spreading FUD?
This reaction image was downscaled in linear light BTW 😉👍
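To make the linear-vs-gamma dispute concrete: averaging two sRGB-encoded pixels directly gives a noticeably darker result than averaging their linear-light values and re-encoding. A minimal 1D Python sketch (an illustration, not mpv's actual pipeline):

```python
def srgb_to_linear(v):
    """sRGB EOTF for a normalized value in [0, 1]."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    """Inverse of srgb_to_linear (the sRGB OETF)."""
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

def downscale_pair(a, b, linear_light=True):
    """Average two sRGB values (0-1): either in linear light,
    or directly on the gamma-encoded values."""
    if linear_light:
        return linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)
    return (a + b) / 2

# A black pixel next to a white one: gamma-light averaging gives 0.5,
# much darker than the physically correct linear-light result (~0.735).
print(downscale_pair(0.0, 1.0, linear_light=False))  # -> 0.5
print(downscale_pair(0.0, 1.0, linear_light=True))   # -> ~0.735
```

This darkening of high-contrast detail is exactly the effect visible in the Hermite screenshots earlier in the thread.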
And it's not true that new metrics aren't being introduced - D-SSIM, Butteraugli, and SSIMULACRA2 are all quite good
None of them are good for evaluating scalers. SSIM with MAD pooling is much better in this case (python impl).
Base SSIM corrects for gamma twice. It's fundamentally broken. You also linked to an implementation for D-SSIM so I don't see what you are trying to say.
Base SSIM corrects for gamma twice.
It doesn't, don't be stupid.
You also linked to an implementation for D-SSIM so I don't see what you are trying to say.
D-SSIM is tweaked MS-SSIM with MAD pooling. I linked to an implementation of MS-SSIM (with MAD pooling), but at fine scale only.
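For anyone unfamiliar with MAD pooling: instead of reducing the per-pixel SSIM map to its plain mean, you pool it by its mean absolute deviation, so concentrated artifacts aren't averaged away by large clean regions. A toy Python sketch of one form of deviation pooling (real implementations differ in the details):

```python
def mad_pool(ssim_map):
    """Pool a per-pixel SSIM map by its mean absolute deviation.
    Lower is better here: a uniform map pools to 0, while a map with a
    small badly-degraded region pools to a large value even when its
    plain mean is identical.  (Illustrative deviation pooling only.)"""
    mean = sum(ssim_map) / len(ssim_map)
    return sum(abs(v - mean) for v in ssim_map) / len(ssim_map)

# Two toy SSIM maps with the SAME plain mean (0.95): mean pooling can't
# tell them apart, MAD pooling heavily penalizes the localized artifact.
uniform = [0.95] * 8
localized = [1.0] * 7 + [0.6]
print(mad_pool(uniform))    # -> 0.0
print(mad_pool(localized))  # -> 0.0875
```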
@igv I can test stuff with the ones in green. I use D-SSIM, though I'm not sure it's comparable or better than the orange ones in the table if I use the default d-ssim tool (the exe). For the red ones I'm trying to find an implementation, so far no luck:
@article{ding2020iqa,
  title   = {Image Quality Assessment: Unifying Structure and Texture Similarity},
  author  = {Ding, Keyan and Ma, Kede and Wang, Shiqi and Simoncelli, Eero P.},
  journal = {CoRR},
  volume  = {abs/2004.07728},
  year    = {2020},
  url     = {https://arxiv.org/abs/2004.07728}
}
Never mind, found this: https://github.com/dingkeyan93/IQA-optimization/tree/master
This is the current builtin.conf for gpu-hq:

(This question might seem like a stupid one, and if it is - please correct me if I'm wrong about something.)

I have a few doubts about the values that are specified for this profile.
For scale, isn't lanczos the higher quality one instead of spline36?
For cscale, isn't lanczos the higher quality one instead of spline36?
For dscale, isn't lanczos the higher quality one instead of mitchell? (Based on https://artoriuz.github.io/blog/mpv_upscaling.html)
dither-depth=auto - I agree with this value since it auto-detects.
hdr-peak-percentile - confusing for me; I'm not sure if it's supposed to be 100 or 99.995 (for the high quality profile), and what difference does 0.005 make on a scale from 1-100?
hdr-contrast-recovery - I guess this is for HDR (video) to SDR (display), so I don't know which value is better (I use HDR to HDR).
correct-downscaling=yes - I agree with this value since even the DOCS mention it increases quality.
linear-downscaling=yes - the only reason I disagree is because I use SSimDownscaler.glsl and it requires linear-downscaling=no.
sigmoid-upscaling - also confuses me; I actually use FSRCNNX and have no idea what the value does and its importance.
deband - should absolutely be no (default) in order to be considered "high quality".
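On the sigmoid-upscaling question: mpv scales on a sigmoid-remapped copy of the linear-light signal, which compresses values near black and white so ringing around hard edges becomes less visible, then inverts the mapping afterwards. A minimal Python sketch of that transform, assuming the manual's default constants (sigmoid-center=0.75, sigmoid-slope=6.5); this is an illustrative reimplementation, not mpv's actual shader code:

```python
import math

# Defaults per the mpv manual (--sigmoid-center, --sigmoid-slope).
CENTER, SLOPE = 0.75, 6.5
OFFSET = 1 / (1 + math.exp(SLOPE * CENTER))
SCALE = 1 / (1 + math.exp(SLOPE * (CENTER - 1))) - OFFSET

def sigmoidize(x):
    """Map a linear-light value in [0, 1] onto the sigmoid curve
    (applied before upscaling)."""
    return CENTER - math.log(1 / (x * SCALE + OFFSET) - 1) / SLOPE

def desigmoidize(y):
    """Invert sigmoidize() (applied after upscaling)."""
    return (1 / (1 + math.exp(SLOPE * (CENTER - y))) - OFFSET) / SCALE

# The transform round-trips exactly, and it operates on linear-light
# values, which is why sigmoidization also requires linearization.
print(round(desigmoidize(sigmoidize(0.5)), 6))  # -> 0.5
```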