Closed · Orum closed this 11 months ago
Hi! Thanks for the first issue ;)
Actually, you should be able to pipe if you use - as the input (or output) file, something like:
avconv -i input.avi -an -f rawvideo -pix_fmt yuv420p - | uvg266 -i - --wpp --threads=8 --input-res=<RESOLUTION> --preset=ultrafast -o - > output.266
should work.
Thanks. I realized the issue was actually that it doesn't accept 10-bit input, though this is not clear from the error message; it simply says Failed to read a frame 1.
Needless to say, it'd be nice if the error were a bit clearer about the nature of the problem. :smile:
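For context, the likely reason a 10-bit file trips up an 8-bit frame reader is simple arithmetic: a raw yuv420p frame at 10 bits occupies twice as many bytes, so a reader expecting 8-bit frames runs out of data mid-frame. A quick sketch of the frame sizes, assuming a hypothetical 1920x1080 input:

```shell
# Bytes per raw yuv420p frame: one full-resolution luma plane plus two
# quarter-size chroma planes = width * height * 3/2 samples.
W=1920; H=1080
EIGHT_BIT=$(( W * H * 3 / 2 ))   # 1 byte per sample at 8 bits
TEN_BIT=$(( EIGHT_BIT * 2 ))     # 2 bytes per sample at 10 bits (e.g. yuv420p10le)
echo "8-bit frame:  $EIGHT_BIT bytes"   # 3110400
echo "10-bit frame: $TEN_BIT bytes"     # 6220800
```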
Ah, that makes sense. Internally we use 8-bit processing, and 10-bit YUV input should be accepted (shifted down to 8 bits) if you use --input-bitdepth=10, but some of these options have not been tested properly. They are just "inherited" from our Kvazaar encoder, where they do work 😅
The input is basically just binary data to us, so it's hard to say what it is, but thanks for the feedback!
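The "shifted down to 8 bits" step described above amounts to dropping the two least significant bits of each sample; a one-line sketch of the conversion:

```shell
# 10-bit sample values span 0..1023; shifting right by 2 maps them to 0..255.
SAMPLE=1023
echo $(( SAMPLE >> 2 ))   # prints 255
```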
Ah, so internally it only uses the 8 most significant bits, even for 10-bit input?
I built a new binary after adding ADD_DEFINITIONS(-DUVG_BIT_DEPTH=10) to the CMakeLists.txt, and that seems to accept 10-bit input. However, I'm unable to get vvdec to actually decode 10-bit files produced by this build. It dies with the error:
ERROR: In function "void vvdec::InputBitstream::read(uint32_t, uint32_t&)" in ./vvdec-1.5.0/source/Lib/CommonLib/BitStream.cpp:283: Exceeded FIFO size
Oh right, it is possible to define that, but we have not verified that it works 🤔 The problem is mostly that all our optimizations are made with 8-bit input in mind (so 1 byte per pixel instead of 2 bytes per pixel). Nice catch anyway; we'll try to figure out why it is not working. It might just be that we are putting the wrong bit-depth info in the headers..
edit:
So basically it should work if you compile it with UVG_BIT_DEPTH=10. Most of the optimizations should be automatically disabled, which slows down the encoding but should not cause any other problems.
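To illustrate the 1-vs-2-bytes-per-pixel point above: in the common little-endian 10-bit layout (yuv420p10le), each sample is stored as two bytes, low byte first. A small sketch splitting one hypothetical sample value into its stored bytes:

```shell
# A 10-bit sample value, e.g. 700, stored little-endian in two bytes.
V=700
LO=$(( V & 0xFF ))   # low byte
HI=$(( V >> 8 ))     # high byte (only its low 2 bits are ever used at 10-bit depth)
echo "$LO $HI"       # prints 188 2
```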
It's possible the issue is with vvdec. Unfortunately, I don't know of another decoder that I could test the output file with, but I'll keep looking.
Thanks for the info!
We use the reference software VTM internally for testing: https://vcgit.hhi.fraunhofer.de/jvet/VVCSoftware_VTM. I have no doubt that vvdec should also work. Then there's also OpenVVC under development: https://github.com/OpenVVC/OpenVVC
You are welcome, thanks for testing things out for us!
Okay, I've discovered the combination of things that causes the decode problem:
- UVG_BIT_DEPTH=10
- --vaq 5 (the full command I used was uvg266 -i - --input-bitdepth 10 --input-file-format y4m -o uvg-qp36-preset_ultrafast.266 -q 36 --vaq 5 --preset ultrafast --range pc)
- vvdec (e.g. vvdecapp -b uvg-qp36-preset_ultrafast.266 -o - > /dev/null)

This combination causes the error in vvdec, while encoding with the same parameters without --vaq 5 will decode properly.
Edit: It looks like this also affects 8-bit builds (i.e. using --vaq in them will likewise produce invalid output).
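To keep the repro in one place: the two encodes above differ only in the --vaq 5 flag. A sketch that just assembles the two command lines (not executed here, since it assumes the uvg266 and vvdecapp binaries; output file names are made up):

```shell
# Common options from the comment above; only --vaq differs between the runs.
BASE="uvg266 -i - --input-bitdepth 10 --input-file-format y4m -q 36 --preset ultrafast --range pc"
WITH_VAQ="$BASE --vaq 5 -o uvg-vaq.266"   # this bitstream made vvdec fail
NO_VAQ="$BASE -o uvg-novaq.266"           # this one decoded fine
printf '%s\n%s\n' "$WITH_VAQ" "$NO_VAQ"
```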
Thank you for figuring that out! That's one tool we have not yet verified. We still have some of the tests disabled because we didn't get that far yet, but I think we should try to fix the vaq problem 😅
The issue with vaq is now fixed with 3a0c5b78a3870e4705740847133043e58891576b and 900ce314efe8a1ab68f8b0db629849e50182f932. 10-bit encoding might still have some issues.
Where can I get a compiled Windows binary built with ADD_DEFINITIONS(-DUVG_BIT_DEPTH=10)? I tried building it through VS, but I get the error "Could not find a strategy for crc32c_8x8!" when encoding.
As far as I can tell, there is no way to pipe input into the encoder, making it difficult to use. Any chance we could see this in the future?