Closed jiangzemin-xudamin closed 4 months ago
Hi, @jiangzemin-xudamin. Good to see the table you compiled :).
I didn't set a bitrate to encode via vvenc; I just encoded using a fixed QP and the high tier to keep the video quality good. I picked Tears of Steel (the 4K lossless copy from Xiph Media: https://media.xiph.org/tearsofsteel/), which is about 67 GB as a Y4M compressed with XZ.
I encoded from the uncompressed Y4M to VVC with a QP of 35, the slow preset, and the high tier (via the tier option); no bitrate was set.
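As a rough sketch, the encode described above might look like the following vvencapp invocation. The file names are placeholders and the exact option spellings are my assumption from vvenc's command-line app; check `vvencapp --help` on your build:

```shell
# Hedged sketch: encode uncompressed Y4M to VVC at a fixed QP,
# slow preset, high tier, no target bitrate.
# Input/output paths are hypothetical, not taken from the thread.
vvencapp \
  --input tears_of_steel_4k.y4m \
  --preset slow \
  --qp 35 \
  --tier high \
  --output tears_of_steel_qp35.266
```

With no rate-control target set, the encoder holds quality roughly constant at the given QP and lets the bitrate float, which is what produces the per-resolution bitrates in your table.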
> Why is it that the larger the resolution of the video, the smaller the proportional increase in video bitrate?
Well, because the codec is efficient, and I wanted to make sure my test videos had bitrates that were neither too high nor too low. So I picked a QP of about 35 and the slow preset; the encoder also deblocks, blurs a little when the bitrate is low, and removes some film grain.
> And how should the bitrate be designed to achieve "visually lossless" compression (the compressed video must retain at least 99% of the quality of the original)?
I targeted quality using only QP, but a QP of about 18 to 22 would probably be visually lossless. If you want to use bitrates instead, you could set, for example, 8000 kbps for 1080p for visually lossless quality, and that also allows two-pass encoding.
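One way to sanity-check a bitrate target like the 8000 kbps for 1080p mentioned above is bits per pixel (bpp): the bitrate divided by the pixels delivered per second. The frame rate below is an illustrative assumption, not a value from the thread:

```python
def bits_per_pixel(bitrate_bps: float, width: int, height: int, fps: float) -> float:
    """Average bits spent per pixel per frame: bitrate / (pixels per second)."""
    return bitrate_bps / (width * height * fps)

# 8000 kbps at 1080p, assuming 24 fps (the fps is a hypothetical example).
bpp = bits_per_pixel(8000 * 1000, 1920, 1080, 24)
print(f"{bpp:.3f} bpp")  # roughly 0.161 bpp
```

Holding bpp constant across resolutions would mean 4K needs 4x the 1080p bitrate, but at a fixed QP modern encoders spend proportionally fewer bits per pixel at higher resolutions, which is why the bitrate grows sub-linearly as resolution increases.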
Feel free to reply if you like. :)
Thank you!
Sincerely, Martin Eesmaa
Hi Martin. I've compiled a table of some attributes of the videos you compressed for the various resolution versions of "Tears of Steel". I'm wondering how you set the bitrate. Why is it that the larger the resolution of the video, the smaller the proportional increase in video bitrate?
And how should the bitrate be designed to achieve "visually lossless" compression (the compressed video must retain at least 99% of the quality of the original)?