syedhali / EZAudio

An iOS and macOS audio visualization framework built upon Core Audio useful for anyone doing real-time, low-latency audio processing and visualizations.

EZOutput not working with a sample rate of 8000.0 Hz (iPhone 6s and iPhone 6s Plus) #328

Open JoseExposito opened 8 years ago

JoseExposito commented 8 years ago

Hi,

I'm running into some issues with EZOutput at a sample rate of 8000.0 Hz.

In my app I'm using OpenCore AMR to transfer audio, and that codec only supports an 8000 Hz sample rate. It's a legacy app, so unfortunately I cannot change the codec.

EZAudio works as expected, but on the iPhone 6s and the iPhone 6s Plus it looks like the speaker does not support 8000 Hz.
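
A quick way to see what the hardware actually does with an 8 kHz request, with no EZAudio involved at all, is plain AVAudioSession (my assumption is that on the 6s the actual rate stays at the hardware default instead of dropping to 8000 Hz):

#import <AVFoundation/AVFoundation.h>

AVAudioSession *session = [AVAudioSession sharedInstance];
NSError *error = nil;
[session setCategory:AVAudioSessionCategoryPlayback error:&error];
[session setPreferredSampleRate:8000.0 error:&error]; // ask for 8 kHz
[session setActive:YES error:&error];

// If session.sampleRate stays at 44100/48000 Hz here, the 6s hardware is
// not honouring the 8 kHz request and a conversion is unavoidable
NSLog(@"preferred: %.1f Hz, actual: %.1f Hz",
      session.preferredSampleRate, session.sampleRate);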

This is the input AudioStreamBasicDescription the app is using:

// Packed 16-bit signed integer mono PCM at AMR's native 8 kHz rate
static const int kBytesPerSample = sizeof(SInt16);
static const Float64 kSampleRate = 8000.0;

streamFormat_.mFormatID = kAudioFormatLinearPCM;
streamFormat_.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
streamFormat_.mChannelsPerFrame = streamFormat_.mFramesPerPacket = 1;
streamFormat_.mBytesPerPacket = streamFormat_.mBytesPerFrame = kBytesPerSample;
streamFormat_.mBitsPerChannel = 8 * streamFormat_.mBytesPerFrame;
streamFormat_.mSampleRate = kSampleRate;
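
For what it's worth, dumping that struct with EZAudio's own helper logs the expected packed 16-bit signed integer mono PCM at 8000 Hz:

[EZAudioUtilities printASBD:streamFormat_]; // prints each ASBD field to the console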

To work around this issue, I tried configuring EZOutput with that AudioStreamBasicDescription as its inputFormat:

self.audioOutput = [EZOutput outputWithDataSource:self inputFormat:streamFormat_];
[self.audioOutput setDevice:[EZAudioDevice currentOutputDevice]];
[self.audioOutput startPlayback];

Then I tried to convert from the 8000 Hz input format to something the speaker should support:

self.audioOutput.clientFormat = [EZAudioUtilities monoFloatFormatWithSampleRate:44100.0f];

But I get this error:

> Error: Failed to set input client format on mixer audio unit (-10865)

Maybe clientFormat doesn't work that way?
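
For the record, -10865 is kAudioUnitErr_PropertyNotWritable, which makes me suspect an ordering problem: I set clientFormat after startPlayback, and the mixer unit may simply refuse a stream format change once the graph is up and running. A variation I still need to verify, identical to the code above except that clientFormat is set before playback starts:

self.audioOutput = [EZOutput outputWithDataSource:self inputFormat:streamFormat_];
[self.audioOutput setDevice:[EZAudioDevice currentOutputDevice]];
// Set the client format while the underlying audio units can still
// accept a stream format change, i.e. before the graph starts running
self.audioOutput.clientFormat = [EZAudioUtilities monoFloatFormatWithSampleRate:44100.0f];
[self.audioOutput startPlayback];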

I have also tried converting the sample rate manually using AudioConverterFillComplexBuffer:

- (OSStatus)output:(EZOutput *)output shouldFillAudioBufferList:(AudioBufferList *)bufferList withNumberOfFrames:(UInt32)frames timestamp:(const AudioTimeStamp *)timestamp {
        Float64 currentSampleRate = [AVAudioSession sharedInstance].sampleRate;
        NSLog(@"CURRENT SAMPLE RATE: %f Hz", currentSampleRate); // -> 44100.0

        AudioStreamBasicDescription inputDescription = streamFormat_;

        AudioStreamBasicDescription outputDescription;
        memset(&outputDescription, 0, sizeof(outputDescription));
        outputDescription.mFormatID = kAudioFormatLinearPCM;
        outputDescription.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
        outputDescription.mChannelsPerFrame = outputDescription.mFramesPerPacket = 1;
        outputDescription.mBytesPerPacket = outputDescription.mBytesPerFrame = kBytesPerSample;
        outputDescription.mBitsPerChannel = 8 * outputDescription.mBytesPerFrame;
        outputDescription.mSampleRate = currentSampleRate;

        AudioConverterRef audioConverter = NULL;
        OSStatus status = AudioConverterNew(&inputDescription, &outputDescription, &audioConverter);
        if (status != noErr || !audioConverter) {
            return noErr; // output silence if the converter could not be created
        }

        // Upsampling 8 kHz -> 44.1 kHz produces more output packets than input
        // packets, so size the output buffer by the sample rate ratio
        UInt32 inputPackets = inputBufferSize / inputDescription.mBytesPerPacket;
        UInt32 outputPackets = (UInt32)ceil(inputPackets * outputDescription.mSampleRate / inputDescription.mSampleRate);
        UInt32 outputBytes = outputPackets * outputDescription.mBytesPerPacket;
        unsigned char *outputBuffer = (unsigned char *)calloc(outputBytes, 1);

        AudioBufferList outputBufferList;
        outputBufferList.mNumberBuffers = 1;
        outputBufferList.mBuffers[0].mNumberChannels = outputDescription.mChannelsPerFrame;
        outputBufferList.mBuffers[0].mDataByteSize = outputBytes;
        outputBufferList.mBuffers[0].mData = outputBuffer;

        UInt32 outputDataPacketSize = outputPackets;

        // Stash the input data in members that converterComplexInputDataProc
        // (the converter's input callback) reads from; inputBuffer and
        // inputBufferSize hold the most recent decoded 8 kHz samples
        _converter_currentInputDescription = inputDescription;

        _converter_currentBuffer = new AudioBuffer; // deleted right after the conversion
        _converter_currentBuffer->mNumberChannels = inputDescription.mChannelsPerFrame;
        _converter_currentBuffer->mDataByteSize = inputBufferSize;
        _converter_currentBuffer->mData = inputBuffer;

        // Convert; the converter pulls its input via converterComplexInputDataProc
        AudioConverterFillComplexBuffer(audioConverter, converterComplexInputDataProc, nil, &outputDataPacketSize, &outputBufferList, NULL);
        AudioConverterDispose(audioConverter);
        delete _converter_currentBuffer;
        _converter_currentBuffer = NULL;

        // Copy the converted data into `bufferList->mBuffers[0].mData` so it gets
        // played, clamping to the capacity of the buffer EZOutput handed us
        UInt32 bytesToCopy = MIN(outputBufferList.mBuffers[0].mDataByteSize, bufferList->mBuffers[0].mDataByteSize);
        memcpy(bufferList->mBuffers[0].mData, outputBufferList.mBuffers[0].mData, bytesToCopy);
        bufferList->mBuffers[0].mDataByteSize = bytesToCopy;
        bufferList->mBuffers[0].mNumberChannels = outputBufferList.mBuffers[0].mNumberChannels;
        free(outputBuffer);

        return noErr;
}

But that workaround does not work either.
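
In case it is useful to whoever picks this up: I suspect one structural problem with the callback above is that the AudioConverter is created and disposed on every render cycle, while a sample rate converter keeps internal resampler state between calls, so it really needs to live for the whole playback session. This is only a sketch of the shape I think it needs (reusing the same converterComplexInputDataProc and _converter_* members from above, plus a hypothetical _audioConverter ivar):

// AudioConverterRef _audioConverter; // hypothetical ivar, created once per session

- (void)setUpConverterWithOutputSampleRate:(Float64)outputSampleRate
{
    // Same packed 16-bit mono PCM as the input, just at the hardware rate
    AudioStreamBasicDescription outputDescription = streamFormat_;
    outputDescription.mSampleRate = outputSampleRate;

    // Create the converter once, before playback starts, so its resampler
    // state survives from one render callback to the next
    AudioConverterNew(&streamFormat_, &outputDescription, &_audioConverter);
}

- (void)tearDownConverter
{
    if (_audioConverter) {
        AudioConverterDispose(_audioConverter);
        _audioConverter = NULL;
    }
}

The render callback would then call AudioConverterFillComplexBuffer(_audioConverter, ...) exactly as above, just without the per-callback AudioConverterNew/AudioConverterDispose pair.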

Any ideas on how to fix this iPhone 6s issue? Thank you very much in advance.