kunal-vzw opened 5 years ago
It seems that we are unable to 'force' a frequency (or bit depth) when creating an OpenAL audio device (i.e. if I use ALC_FREQUENCY or ALC_FORMAT_TYPE_SOFT, it doesn't matter; the sampling rate/bit depth of the actual hardware audio device is what gets set on the OpenAL device). So my question is, is there a way for me to query an OpenAL device to figure out the sample rate & bit depth (sample format) that was set? I have the same question for a loopback device (I know w/ loopback we specify the sample rate/format, but I am writing a common interface that wraps both hardware and virtual devices, and would like to implement a common getSampleRate/getSampleFormat function).
The format of the hardware/output is the format that gets set on the OpenAL device. It wouldn't make much sense for an OpenAL device to have a 44100Hz sample rate if the actual output needs 48000Hz. Not all systems or hardware allow changing it, or may only allow certain sample rates.
ALC_FORMAT_TYPE_SOFT is only used with loopback devices so it can give you the samples in the format you need. It's ignored/invalid on non-loopback playback devices since the app has no reason to know, and the library itself may not even know what the hardware's truly getting (and even then, the hardware could be taking 24-bit samples from the system, but the DAC may only be 20- or 21-bit or something). Internally OpenAL Soft always works with 32-bit floats so it doesn't have to worry about overflows during mixing. If the output can accept these floats, they're given as-is; otherwise they're dithered and quantized/converted as needed.
You can get the device's sample rate by querying
ALCint srate;
alcGetIntegerv(device, ALC_FREQUENCY, 1, &srate);
after you've created a context or reset the device with alcResetDeviceSOFT. On loopback devices only, you can get the sample type the same way: query ALC_FORMAT_TYPE_SOFT and you'll get back the sample type you last set.
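For the loopback half of a getSampleRate/getSampleFormat wrapper, something like this sketch should work (error checking omitted; the ALC_SOFT_loopback tokens and function pointer type come from OpenAL Soft's AL/alext.h):

#include <AL/alc.h>
#include <AL/alext.h>

int main(void)
{
    /* Load the extension function; its pointer type is declared in alext.h. */
    LPALCLOOPBACKOPENDEVICESOFT alcLoopbackOpenDeviceSOFT =
        (LPALCLOOPBACKOPENDEVICESOFT)alcGetProcAddress(NULL, "alcLoopbackOpenDeviceSOFT");
    ALCdevice *device = alcLoopbackOpenDeviceSOFT(NULL);

    /* On a loopback device the requested attributes are honored exactly. */
    ALCint attrs[] = {
        ALC_FORMAT_CHANNELS_SOFT, ALC_STEREO_SOFT,
        ALC_FORMAT_TYPE_SOFT, ALC_SHORT_SOFT,
        ALC_FREQUENCY, 48000,
        0
    };
    ALCcontext *context = alcCreateContext(device, attrs);
    alcMakeContextCurrent(context);

    /* Query them back; ALC_FREQUENCY also works this way on regular playback
       devices, while ALC_FORMAT_TYPE_SOFT is loopback-only. */
    ALCint srate, stype;
    alcGetIntegerv(device, ALC_FREQUENCY, 1, &srate);
    alcGetIntegerv(device, ALC_FORMAT_TYPE_SOFT, 1, &stype);
    return 0;
}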
Thanks for the answer. I had a follow-up question about this part specifically:
Internally OpenAL Soft always works with 32-bit floats so it doesn't have to worry about overflows during mixing. If the output can accept these floats, they're given as-is; otherwise they're dithered and quantized/converted as needed.
Interesting -- currently, when my audio engine loads e.g. a 16-bit PCM asset (shorts), it converts the shorts to floats and then loads them into an OpenAL buffer (specifying e.g. AL_FORMAT_MONO_FLOAT32). It does this conversion in the most basic way (in this case, converting a short s to a float f is achieved straight up via float f = ((float) s) / (float) 32768).
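Concretely, the conversion I'm describing looks something like this (a sketch with hypothetical names: pcm is the decoded 16-bit data, floatBuf is my engine's sample buffer):

#include <stddef.h>
#include <AL/al.h>
#include <AL/alext.h>  /* AL_FORMAT_MONO_FLOAT32, from AL_EXT_FLOAT32 */

/* Convert decoded 16-bit PCM into the engine's float buffer, then upload
   the floats to an OpenAL buffer. */
void uploadMono16AsFloat(ALuint buffer, const short *pcm, float *floatBuf,
                         size_t numSamples, ALsizei srate)
{
    for (size_t i = 0; i < numSamples; i++)
        floatBuf[i] = (float)pcm[i] / 32768.0f;  /* [-32768,32767] -> [-1.0,1.0) */
    alBufferData(buffer, AL_FORMAT_MONO_FLOAT32, floatBuf,
                 (ALsizei)(numSamples * sizeof(float)), srate);
}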
Would it make more sense to NOT do this conversion on my end, since OpenAL will be converting it under the hood to a float anyhow, and instead directly feed the buffer with AL_FORMAT_MONO16? I guess in some sense this is a broader question about how OpenAL's type conversions are happening, and if they are optimized in any way. Because if they are, it would make sense to me to not do the conversions on my end, and always feed samples into OpenAL buffers in whatever format they are in, and allow OpenAL to do all the conversions.

On the other hand, if there's nothing special/optimized happening under the hood in OpenAL when doing these conversions, and it's basically equivalent to me doing the conversion and then passing the float into OpenAL (i.e. OpenAL doesn't do any conversions when it receives data specified w/ a _FLOAT32 format identifier), then it seems like it shouldn't matter either way.
Would it make more sense to NOT do this conversion on my end, since OpenAL will be converting it under the hood to a float anyhow, and instead directly feed the buffer with AL_FORMAT_MONO16? I guess in some sense this is a broader question about how OpenAL's type conversions are happening, and if they are optimized in any way.
They're optimized in the sense that it needs to load the samples into a floating-point buffer prior to resampling anyway, so it can simply do the conversion during that load. If you're creating a second temporary buffer solely to hold the floating point samples, transferring samples from one buffer to another when you otherwise wouldn't have to before passing them to OpenAL, letting OpenAL do it instead may be more efficient. Letting OpenAL do it would also help ensure consistent rescaling, if it needs to convert back for 16-bit output.
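In other words, the alternative you describe is just uploading the shorts as-is, letting the conversion happen during OpenAL Soft's internal load (same hypothetical names as the sketch above):

/* No manual short-to-float pass; OpenAL Soft converts while it copies
   the samples into its internal float buffer. */
alBufferData(buffer, AL_FORMAT_MONO16, pcm,
             (ALsizei)(numSamples * sizeof(short)), srate);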
Hmm, no, I'm not creating a temporary buffer solely to hold the floating point samples. I have only a single buffer, and I'm either converting to float when storing samples in it, or not. It is this buffer that then gets directly passed to OpenAL.
They're optimized in the sense that it needs to load the samples into a floating-point buffer prior to resampling anyway, so it can simply do the conversion during that load.
But this conversion won't happen if I load float samples into the buffer i.e. with AL_FORMAT_XYZ_FLOAT32, right?
Another quick question -- I see it's possible to have loopback render output samples in 32-bit int (ALC_INT_SOFT), but I don't see a corresponding option to load OpenAL buffers with 32-bit int samples (I only see 32-bit float). Is that correct?
But this conversion won't happen if I load float samples into the buffer i.e. with AL_FORMAT_XYZ_FLOAT32, right?
The conversion won't happen, but the internal copy still will.
Another quick question -- I see it's possible to have loopback render output samples in 32-bit int (ALC_INT_SOFT), but I don't see a corresponding option to load OpenAL buffers with 32-bit int samples (I only see 32-bit float). Is that correct?
Correct. You can convert 32-bit int to 32-bit float just before loading, though (f32 = (float)i32 / 2147483648.0f), which is what OpenAL Soft would do when mixing if it were supported.
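As a sketch, reusing the hypothetical names from earlier, with i32Buf holding the 32-bit int samples:

/* Normalize 32-bit int samples to floats before the upload, since there's
   no 32-bit int buffer format to pass them in directly. */
for (size_t i = 0; i < numSamples; i++)
    floatBuf[i] = (float)i32Buf[i] / 2147483648.0f;  /* INT32_MIN -> -1.0f */
alBufferData(buffer, AL_FORMAT_MONO_FLOAT32, floatBuf,
             (ALsizei)(numSamples * sizeof(float)), srate);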