Looks like the issue is related to the `skip` parameter of `IOAudioRingBuffer`. Here are the timestamps of the incoming audio buffers and the calculated `skip`:
```
skip: 0;        timeStamp: 23690809352208
skip: 23213643; timeStamp: 23690832566875
skip: 23236309; timeStamp: 23690855804208
skip: 23213601; timeStamp: 23690879018833
skip: 23209143; timeStamp: 23690902229000
skip: 23213601; timeStamp: 23690925443625
skip: 23236309; timeStamp: 23690948680958
skip: 23213643; timeStamp: 23690971895625
skip: 23213642; timeStamp: 23690995110291
skip: 23213643; timeStamp: 23691018324958
```
As a result, the condition `inNumberFrames <= ringBuffer.counts` in `IOAudioResampler` is always true, so audio conversion is called on every `resample()` call.
It looks like the `skip` calculation is the issue. The `timescale` of the `presentationTimeStamp` is 1000000000 when the buffer is `.audioApp`:
```
po sampleBuffer.presentationTimeStamp
▿ CMTime
  - value : 44373327429041
  - timescale : 1000000000
  ▿ flags : CMTimeFlags
    - rawValue : 1
  - epoch : 0
```
Compared to `.audioMic`:
```
po sampleBuffer.presentationTimeStamp
▿ CMTime
  - value : 2142259837
  - timescale : 48000
  ▿ flags : CMTimeFlags
    - rawValue : 3
  - epoch : 0
```
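A quick sanity check on the logged numbers (a sketch, not HaishinKit code; the 44.1 kHz rate is an assumption inferred from the ~23.2 ms buffer spacing, not taken from the logs):

```swift
// The logged `skip` values are nanosecond deltas, not frame counts.
let skip: Int64 = 23_213_643                  // taken from the log above
let seconds = Double(skip) / 1_000_000_000    // presentationTimeStamp timescale is 1e9
let frames = seconds * 44_100                 // ≈ 1024 frames: one audio buffer
// Read as a frame count instead of nanoseconds, the same number asks the
// ring buffer to skip ~23 million frames per buffer.
print(seconds, frames)
```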
Here's a proposed fix. The crash is gone, but I don't hear the app audio. Please take a look.
`IOAudioRingBuffer`:

```swift
let targetSampleTime: CMTimeValue
if sampleBuffer.presentationTimeStamp.timescale == Int32(inputBuffer.format.sampleRate) {
    targetSampleTime = sampleBuffer.presentationTimeStamp.value
} else {
    targetSampleTime = Int64(Double(sampleBuffer.presentationTimeStamp.value) * inputBuffer.format.sampleRate / Double(sampleBuffer.presentationTimeStamp.timescale))
}
if sampleTime == 0 {
    sampleTime = targetSampleTime
}
// ...
skip = max(Int(targetSampleTime - sampleTime), 0)
```
`IOAudioResampler`:

```swift
if sampleTime == kIOAudioResampler_sampleTime {
    let isSampleRateTimescale = sampleBuffer.presentationTimeStamp.timescale == Int32(inSourceFormat.mSampleRate)
    if isSampleRateTimescale {
        sampleTime = sampleBuffer.presentationTimeStamp.value
    } else {
        let adjustedSampleTime = Double(sampleBuffer.presentationTimeStamp.value)
            * Double(inSourceFormat.mSampleRate) / Double(sampleBuffer.presentationTimeStamp.timescale)
        sampleTime = AVAudioFramePosition(adjustedSampleTime)
    }
    if let outputFormat {
        anchor = .init(hostTime: AVAudioTime.hostTime(forSeconds: sampleBuffer.presentationTimeStamp.seconds), sampleTime: sampleTime, atRate: outputFormat.sampleRate)
    }
}
```
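The same rescaling could also be expressed with CoreMedia's `CMTimeConvertScale` instead of manual `Double` math. A minimal sketch, using the `.audioApp` timestamp from the dump above and a hypothetical 44.1 kHz source rate:

```swift
import CoreMedia

// Sketch: rescale a nanosecond-timescale timestamp to the source sample
// rate. Values mirror the .audioApp dump above; 44_100 is a hypothetical
// stand-in for inSourceFormat.mSampleRate.
let pts = CMTime(value: 44_373_327_429_041, timescale: 1_000_000_000)
let converted = CMTimeConvertScale(pts, timescale: 44_100, method: .default)
let sampleTime = converted.value // frames at 44.1 kHz, not nanoseconds
```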
Closed as fixed #1385.
Describe the bug
`IOAudioResampler.resample()` can't keep up with how fast ReplayKit provides `.audioApp` sample buffers. As a result, `mixer.audioIO.lockQueue` keeps growing and the broadcast extension exceeds 50 MB of RAM. Here's a code example to reproduce the issue; I removed the connect call and the video and `.audioMic` buffers for simplicity.

SampleHandler:
```swift
override open func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
    LBLogger.with(HaishinKitIdentifier).level = .info
    // rtmpConnection.connect(Preference.defaultInstance.uri!, arguments: nil)
    DispatchQueue.main.asyncAfter(deadline: .now() + 10) {
        self.isEnabled = true // delay to have time to connect the CPU profiler before the crash
    }
}

override open func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
    guard isEnabled else { return }
    switch sampleBufferType {
    case .video:
        break
    case .audioMic:
        break
    case .audioApp:
        if CMSampleBufferDataIsReady(sampleBuffer) {
            rtmpStream.append(sampleBuffer)
        }
    @unknown default:
        break
    }
}
```

Here are the resampler's audio formats on an iPhone 11 Pro Max, iOS 17.4:
Profiler1
![Screenshot 2024-03-08 at 3 04 29 PM](https://github.com/shogo4405/HaishinKit.swift/assets/129108591/3f74ef70-0f16-436b-9033-d688854e02c6)
![Screenshot 2024-03-08 at 3 07 46 PM](https://github.com/shogo4405/HaishinKit.swift/assets/129108591/b61e8438-aa92-4d00-ad73-676eec6b168f)
![Screenshot 2024-03-08 at 11 29 44 AM](https://github.com/shogo4405/HaishinKit.swift/assets/129108591/bac239df-0e9f-47d9-8925-7e5aea7f40d0)

Replacing

```swift
bufferList[0].mData?.assumingMemoryBound(to: Int16.self).advanced(by: offset * channelCount).update(repeating: 0, count: numSamples)
```

with

```swift
memset(bufferList[0].mData?.assumingMemoryBound(to: Int16.self).advanced(by: offset * channelCount), 0, numSamples * MemoryLayout<Int16>.size)
```

makes `render()` twice as fast, but it still isn't enough. Fixing this issue will help with #1380.
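For context, a minimal self-contained comparison of the two zero-fill variants (the buffer sizes here are hypothetical, not taken from HaishinKit): `update(repeating:count:)` writes `Int16` elements one at a time, while `memset` fills the same region as raw bytes.

```swift
import Foundation

let channelCount = 2
let numSamples = 1024
let offset = 0
let buffer = UnsafeMutablePointer<Int16>.allocate(capacity: (offset + numSamples) * channelCount)
defer { buffer.deallocate() }

// Element-wise zero fill, as in the original code:
buffer.advanced(by: offset * channelCount).update(repeating: 0, count: numSamples)

// Byte-wise zero fill over the same region, as in the replacement:
memset(buffer.advanced(by: offset * channelCount), 0, numSamples * MemoryLayout<Int16>.size)
```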
Profiler2

![Screenshot 2024-03-08 at 3 15 53 PM](https://github.com/shogo4405/HaishinKit.swift/assets/129108591/be15ab8e-026e-49b2-abcb-9939575a05cf)

To Reproduce
Provided in the description
Expected behavior
To render faster than the incoming audio stream
Version
Main branch, 44770d6a0f5ca79355ba6420dafdee70db06e984