birdie-github closed this issue 1 year ago
Well, AV1 decoding really is much more optimized. The dav1d decoder has more lines of assembly code than the Linux kernel. We don't have the knowledge or the resources to put in this kind of work.
To be fair, the best way to improve vvdec right now would be to rewrite it, because its performance is bound by the software structure we inherited from VTM, even though that structure is heavily optimized. And we are not going to do that. We are still working on minor improvements, but don't expect really big leaps for vvdec anytime soon. ffvvc might someday be more efficient than vvdec, but by that time I expect HW support to be fairly broad.
The only bigger thing being worked on right now is error resilience.
And please leave your political agenda out of technical inquiries.
I never wanted to engage in anything political. I just believed, probably naively, that codecs that use patented techniques are easier (in terms of CPU/RAM) to encode/decode than patent-free codecs. I'm really sorry if that's inappropriate here, and I'm totally OK if you delete this issue altogether.
Again, my apologies :-(
Thank you very much for your swift response. Really appreciated.
I just believed, probably naively, that codecs that use patented techniques are easier (in terms of CPU/RAM) to encode/decode than patent-free codecs. I'm really sorry if that's inappropriate here, and I'm totally OK if you delete this issue altogether.
That's actually a solid insight. The MPEG process puts a lot of focus and optimization effort into reducing the implementation cost of hardware decoders. The general target each generation is no more than a 2x increase in decoder complexity, which, combined with Moore's Law, amounts to a net reduction in cost compared to the prior technology at its launch.
Feedback I've heard is that a VVC decoder takes less incremental mm^2 of a SoC than AV1 does, even though it is a more advanced codec with substantially better compression efficiency. AV1's best-funded playback scenario was web browsers on personal computers, which have quite a lot of single/low-threaded CPU power. An optimal bitstream for a fixed-function implementation can be quite different, and a well-refined codec targeting a wide variety of usage models will be optimized for ease of both SW and HW decoder implementation.
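As a rough back-of-the-envelope illustration of that 2x target, here is a small sketch; the generation gap and density-doubling period below are assumed figures for illustration, not numbers from this thread:

```python
# Illustrative arithmetic only; the generation gap and doubling period are assumptions.
generation_gap_years = 8      # assumed span between codec generations (e.g. HEVC -> VVC)
moore_doubling_years = 2      # assumed transistor-density doubling period
complexity_increase = 2.0     # rough target: at most 2x decoder complexity per generation

# Transistor density gain over one codec generation under the assumed doubling rate.
density_gain = 2 ** (generation_gap_years / moore_doubling_years)   # ~16x

# Silicon area of the new decoder relative to the old one at its own launch.
relative_area = complexity_increase / density_gain                  # ~1/8

print(f"Density gain over {generation_gap_years} years: ~{density_gain:.0f}x")
print(f"Relative decoder area vs. prior generation at launch: ~{relative_area:.2f}x")
```

Under these assumptions, even a decoder that is twice as complex can end up cheaper in silicon than its predecessor was at launch.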
Interesting. Thanks for the input.
Currently, software VVC decoding using this library is 2-3 times more expensive CPU-wise than AV1 decoding.
Have you enabled/explored all the optimization options, or are there things left to optimize?
I've been under the impression that VVC shouldn't be as expensive to decode as AV1, because the latter is patent-free, so some of its algorithms are not as good as those in VVC, which contains a ton of patents.
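For reference, a minimal sketch of how a CPU-time comparison like the 2-3x figure above could be measured. It assumes a Unix system, an ffmpeg build that can decode both test streams (e.g. VVC via libvvdec and AV1 via libdav1d), and placeholder file names for two clips with the same content at comparable quality:

```python
import resource
import subprocess

def decode_cpu_seconds(input_file: str) -> float:
    """Decode a file to null output with ffmpeg and return the CPU time
    (user + system) spent by the child process."""
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    subprocess.run(
        ["ffmpeg", "-hide_banner", "-loglevel", "error", "-i", input_file, "-f", "null", "-"],
        check=True,
    )
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    return (after.ru_utime - before.ru_utime) + (after.ru_stime - before.ru_stime)

# Placeholder file names; both clips should hold the same content at comparable quality.
vvc_cpu = decode_cpu_seconds("clip_vvc.mp4")
av1_cpu = decode_cpu_seconds("clip_av1.mp4")
print(f"VVC: {vvc_cpu:.1f}s CPU, AV1: {av1_cpu:.1f}s CPU, ratio: {vvc_cpu / av1_cpu:.2f}x")
```

Measuring child CPU time rather than wall-clock time keeps the comparison meaningful even when the two decoders use different numbers of threads.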