foo86 / dcadec

DTS Coherent Acoustics decoder with support for HD extensions

dcadec outputs incorrect bitdepth #41

Closed: madshi closed this 9 years ago

madshi commented 9 years ago

With the following sample, dcadec outputs 24bit:

http://madshi.net/incorrectBitdepth.dtshd

My own DTS parser says it should be a 16bit DTS-MA track. And if I simply >> 8 the dcadec output, it's 100% bitwise identical to the ArcSoft decoder output, which is also 16bit.
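For reference, the check described above amounts to something like the following minimal sketch in C. The function name, the buffer layout (24-bit samples right-justified in int32_t, the ArcSoft reference as int16_t), and the framing are assumptions for illustration, not part of either decoder's API.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal sketch of the comparison described above (not part of dcadec).
 * Assumes the 24-bit decode is available as right-justified samples in
 * int32_t and the 16-bit reference decode as int16_t. */
static bool matches_16bit_reference(const int32_t *pcm24, const int16_t *pcm16,
                                    size_t nsamples)
{
    for (size_t i = 0; i < nsamples; i++) {
        if (pcm24[i] & 0xff)              /* non-zero low byte: real 24-bit data */
            return false;
        if ((pcm24[i] >> 8) != pcm16[i])  /* ">> 8" must match the 16-bit output */
            return false;
    }
    return true;
}
```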

Nevcairiel commented 9 years ago

The ArcSoft decoder also switches to 24-bit after the first couple audio frames for me. It seems like the stream actually changes there.

madshi commented 9 years ago

Hmmmm... You're right. I had cut off a couple of frames at the beginning and it still looked like 16bit, but now that I've cut off the first half of the whole sample, eac3to sees it as 24bit, too. So no bug in dcadec, but my fault.

madshi commented 9 years ago

Sorry, I have to reopen this. There's still something weird going on here.

This is the original sample, which starts with a few 16bit frames, then switches to 24bit: http://madshi.net/incorrectBitdepth.dtshd

This is the same sample again, but with the first half of the data removed, so that it starts directly as 24bit: http://madshi.net/16or24.dtshd

For the first sample I get:

DTS Master Audio, 5.1 channels, 16 bits, 48kHz
libDcaDec is outputting 24bit instead of the expected 16bit data.
Original audio track, L+R+C+SL+SR: constant bit depth of 16 bits.
Original audio track, LFE: no audio data.

For the 2nd sample I get:

DTS Master Audio, 5.1 channels, 24 bits, 48kHz
Original audio track, L+R+C+SL+SR: max 24 bits, average 16 bits.
Original audio track, LFE: no audio data.

When forcing eac3to to 24bit processing for the first sample I get this:

DTS Master Audio, 5.1 channels, 16 bits, 48kHz
Original audio track, L+R+C+SL+SR: constant bit depth of 16 bits.
Original audio track, LFE: no audio data.
Superfluous zero bytes detected, will be stripped in 2nd pass.

Conclusion:

1) When decoding the full sample, dcadec switches from 16bit output to 24bit output, but the 24bit output has all lower bytes set to zero.
2) When decoding only the 2nd half of the sample, dcadec outputs 24bit from the get-go, and while most samples have the lower bytes set to zero, some samples have full 24bit data.

That seems strange, doesn't it? Why are there true 24bit samples when decoding 2) but only 16bit worth of data when decoding 1)?
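A quick way to see which frames actually carry more than 16 bits is to scan the decoder output for non-zero low bytes, frame by frame. A minimal sketch; the frame size, helper name, and right-justified int32_t layout are assumptions for illustration only.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-frame scan: report whether the 24-bit output carries
 * anything below bit 8. samples_per_frame and the right-justified int32_t
 * layout are assumed, not taken from dcadec. */
static void report_effective_depth(const int32_t *pcm24, size_t nsamples,
                                   size_t samples_per_frame)
{
    for (size_t start = 0; start < nsamples; start += samples_per_frame) {
        size_t end = start + samples_per_frame;
        if (end > nsamples)
            end = nsamples;

        int has_lsb = 0;
        for (size_t i = start; i < end; i++)
            if (pcm24[i] & 0xff)
                has_lsb = 1;

        printf("frame %zu: %s\n", start / samples_per_frame,
               has_lsb ? "true 24-bit content" : "16 bits padded with zero bytes");
    }
}
```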

foo86 commented 9 years ago

That seems strange, doesn't it? Why are there true 24bit samples when decoding 2) but only 16bit worth of data when decoding 1)?

The difference is that the first sample was taken from the beginning of the stream, while the second sample was cut at a random point. Most (all?) frames in the first case decode losslessly (LSBs are zero), while in the second case several initial frames are not lossless (LSBs are non-zero) due to the history effects of the core decoder.

Actually, this sample made me realize that one frame is not enough for the core decoder to start producing output consistent with the XLL residual when decoding starts from a random point in the stream like this. Due to the ADPCM used in the subbands, errors will propagate into multiple frames that follow (it's unpredictable how many). DTS is a terrible format, really.
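As a toy illustration of that history effect (this is not the actual DTS subband ADPCM, just a made-up first-order predictor): two decoders apply the same residuals, but the one that joins mid-stream starts with the wrong predictor history, and the resulting error only decays gradually, so several samples of output differ before the two converge.

```c
#include <stdio.h>

/* Toy model of the history effect, not the actual DTS subband ADPCM: a
 * first-order predictor with coefficient A. Both decoders apply the same
 * residuals, but one starts with the wrong history value; the error only
 * shrinks by a factor of A per sample. */
int main(void)
{
    const double A = 0.9;  /* predictor coefficient (made up for the demo) */
    const double residual[12] = { 0.10, -0.20, 0.05, 0.00,  0.30, -0.10,
                                  0.20,  0.00, -0.05, 0.10, 0.00,  0.25 };

    double full_history = 1.0;  /* decoder that saw the stream from the start */
    double cold_start   = 0.0;  /* decoder that joined at a random point */

    for (int n = 0; n < 12; n++) {
        full_history = A * full_history + residual[n];
        cold_start   = A * cold_start   + residual[n];
        printf("sample %2d: error = %+.6f\n", n, full_history - cold_start);
    }
    return 0;
}
```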

MarcusJohnson91 commented 9 years ago

"DTS is a terrible format, really." FACT

madshi commented 9 years ago

Ah ok, thanks. So if nothing can/needs to be changed, this issue can be closed.