argmaxinc / WhisperKit

On-device Speech Recognition for Apple Silicon
https://takeargmax.com/blog/whisperkit
MIT License

Resample audio file in chunks to reduce memory usage #16

Closed · finnvoor closed this issue 3 months ago

finnvoor commented 7 months ago

https://github.com/argmaxinc/WhisperKit/blob/fed90c7c0727eb9f460d2a1226e0cbd5abf75141/Sources/WhisperKit/Core/AudioProcessor.swift#L197-L217

Creating an AVAudioPCMBuffer for the whole input audio file can easily surpass iOS memory limits.

Attempting to transcribe a 44.1 kHz, 2-channel, ~1 hr long video crashes on iOS due to running out of memory. It would be nice if, instead of reading all the input audio into a buffer at once and converting it, the audio were read and converted in chunks to reduce memory usage.

Another less common issue that would be solved by chunking the audio is that AVAudioPCMBuffer has a maximum capacity of UInt32.max, which can be hit when transcribing a 1-2 hr, 16-channel, 44.1 kHz audio file. This is a fairly typical audio file for a podcast recorded with a RODECaster Pro.
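
For scale, assuming 4-byte Float32 samples: one hour of 2-channel 44.1 kHz audio is roughly 3600 × 44100 × 2 × 4 bytes ≈ 1.3 GB before any conversion output, and two hours of 16-channel 44.1 kHz audio is about 7200 × 44100 × 16 ≈ 5.1 billion samples, already past UInt32.max.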

ZachNagengast commented 7 months ago

Hi @finnvoor, that totally makes sense, thanks for reporting this. There is an option you can try with the current codebase, which I'll recommend first, and a path we could take moving forward that I'm curious to get your feedback on.

The first option would be to handle the chunking on the app side using the transcribe interface that accepts an audioArray:

    public func transcribe(audioArray: [Float],
                           decodeOptions: DecodingOptions? = nil,
                           callback: TranscriptionCallback = nil) async throws -> TranscriptionResult?

Pseudocode for that would look similar to how streaming is done:

  1. Generate a 30s array of samples from the audio file
        var currentSeek: AVAudioFramePosition = 0
        guard let audioFile = try? AVAudioFile(forReading: URL(fileURLWithPath: audioFilePath)) else { return nil }
        audioFile.framePosition = currentSeek
        let inputBuffer = AVAudioPCMBuffer(pcmFormat: audioFile.processingFormat, frameCapacity: AVAudioFrameCount(audioFile.fileFormat.sampleRate * 30.0))
        try? audioFile.read(into: inputBuffer!)
  2. Convert it to 16 kHz, 1 channel
        let desiredFormat = AVAudioFormat(
            commonFormat: .pcmFormatFloat32,
            sampleRate: Double(WhisperKit.sampleRate),
            channels: AVAudioChannelCount(1),
            interleaved: false
        )!
        let converter = AVAudioConverter(from: audioFile.processingFormat, to: desiredFormat)
        guard let audioArray = try? AudioProcessor.resampleBuffer(inputBuffer!, with: converter!) else { return nil }
  3. Transcribe that section and find the last index of the sample we have transcribed so far
        let transcribeResult = try await whisperKit.transcribe(audioArray: audioArray, decodeOptions: options)
        // Segment end times are in seconds, so convert to frames at the source file's sample rate
        let nextSeek = AVAudioFramePosition(Double(transcribeResult?.segments.last?.end ?? 0) * audioFile.fileFormat.sampleRate)
  4. Restart from step one using that as the new frame position
        currentSeek += nextSeek
        audioFile.framePosition = currentSeek

Using this you could generate a multitude of TranscriptionResults and merge them together as they come in. This is similar to how we do streaming in the example app.
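
Putting those steps together, a rough end-to-end sketch of the chunked loop could look like the following. This is a sketch only, not tested: it assumes the transcribe(audioArray:) and AudioProcessor.resampleBuffer usage from the snippets above, and the function name is just a placeholder.

    import AVFoundation
    import WhisperKit

    // Sketch: transcribe a long file ~30s at a time so only one small
    // input buffer is ever held in memory.
    func transcribeInChunks(audioFilePath: String,
                            whisperKit: WhisperKit,
                            options: DecodingOptions? = nil) async throws -> [TranscriptionResult] {
        let audioFile = try AVAudioFile(forReading: URL(fileURLWithPath: audioFilePath))
        let desiredFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                          sampleRate: Double(WhisperKit.sampleRate),
                                          channels: 1,
                                          interleaved: false)!
        guard let converter = AVAudioConverter(from: audioFile.processingFormat, to: desiredFormat) else { return [] }

        var results: [TranscriptionResult] = []
        var currentSeek: AVAudioFramePosition = 0
        let chunkFrames = AVAudioFrameCount(audioFile.fileFormat.sampleRate * 30.0)

        while currentSeek < audioFile.length {
            // Read only the next ~30s window from disk
            audioFile.framePosition = currentSeek
            guard let inputBuffer = AVAudioPCMBuffer(pcmFormat: audioFile.processingFormat, frameCapacity: chunkFrames) else { break }
            try audioFile.read(into: inputBuffer, frameCount: chunkFrames)
            guard inputBuffer.frameLength > 0 else { break }

            // Resample the window to 16 kHz mono and transcribe it (as in the snippets above)
            guard let audioArray = try? AudioProcessor.resampleBuffer(inputBuffer, with: converter),
                  let result = try await whisperKit.transcribe(audioArray: audioArray, decodeOptions: options),
                  let lastEnd = result.segments.last?.end else { break }
            results.append(result)

            // Advance the seek point by the transcribed duration (segment ends are in seconds),
            // converted to frames at the source file's sample rate
            let framesConsumed = AVAudioFramePosition(Double(lastEnd) * audioFile.fileFormat.sampleRate)
            guard framesConsumed > 0 else { break }
            currentSeek += framesConsumed
        }
        return results
    }

Each TranscriptionResult covers roughly one 30s window, so the caller can merge segments as they arrive.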

As for a new option that would make this easier and built in: there might be a protocol method we'd want to add that simply requests audio from the input file at predefined intervals (like 20s -> 50s, 50s -> 80s) and loads it from disk rather than storing it all in memory. That way, when we reach the end of the current 30s and update the seek point, we could request the next window from whatever is available on disk, or otherwise end the loop.
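
Purely as an illustration of that idea (the protocol name and method below are hypothetical, not something in WhisperKit today), it could be as simple as:

    public protocol AudioWindowProvider {
        /// Returns 16 kHz mono samples covering the requested time range in seconds,
        /// loading them from disk on demand, or nil when no more audio is available.
        func audioSamples(from startTime: Double, to endTime: Double) throws -> [Float]?
    }

The transcription loop would then ask the provider for the next window whenever the seek point moves, and stop when it returns nil.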

We have also been thinking about a way to use the "streaming" logic for static audio files from disk (bulk transcription is an upcoming focus for us), so this might be a good way to go to keep the codebase simple, but I'm curious to hear what you think.

finnvoor commented 7 months ago

Thanks for the info! We can definitely split the audio and transcribe in chunks ourselves, but what I like so much about WhisperKit is how it handles all the annoying bits for you, so I think it would be nice if it split large files automatically.

Ideally we could just pass a URL to a file of any length and get back a transcript. For our use case we don't need any streaming, but the protocol method could work (just a bit more effort on the client side).

I do think the easiest and simplest way to fix these bugs is to add a loop in resampleAudio that reads the input file in chunks (the input file can easily exceed memory limits, but the resampled audio would have to be incredibly long to hit them), but I understand if you want a more general solution.
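
For what it's worth, a rough sketch of that loop, using only AVFoundation and not tied to WhisperKit's actual resampleAudio implementation (the function name, chunk size, and error handling are placeholders):

    import AVFoundation

    // Sketch: read the source file ~10s at a time so only one small input buffer
    // is alive, while the 16 kHz mono output array grows incrementally.
    func resampleAudioInChunks(fromFile url: URL,
                               toSampleRate sampleRate: Double = 16000,
                               chunkSeconds: Double = 10) throws -> [Float] {
        let audioFile = try AVAudioFile(forReading: url)
        let inputFormat = audioFile.processingFormat
        let outputFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                         sampleRate: sampleRate,
                                         channels: 1,
                                         interleaved: false)!
        guard let converter = AVAudioConverter(from: inputFormat, to: outputFormat) else {
            throw NSError(domain: "resampleAudioInChunks", code: -1)
        }

        var output: [Float] = []
        let chunkFrames = AVAudioFrameCount(inputFormat.sampleRate * chunkSeconds)

        while audioFile.framePosition < audioFile.length {
            // Read the next chunk from disk; framePosition advances automatically
            guard let inputBuffer = AVAudioPCMBuffer(pcmFormat: inputFormat, frameCapacity: chunkFrames) else { break }
            try audioFile.read(into: inputBuffer, frameCount: chunkFrames)
            if inputBuffer.frameLength == 0 { break }

            // Size the output buffer for the converted chunk (with a little headroom)
            let outputCapacity = AVAudioFrameCount(Double(inputBuffer.frameLength) * sampleRate / inputFormat.sampleRate) + 1024
            guard let outputBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat, frameCapacity: outputCapacity) else { break }

            // Feed exactly one input chunk per convert call
            var error: NSError?
            var consumed = false
            let status = converter.convert(to: outputBuffer, error: &error) { _, inputStatus in
                if consumed {
                    inputStatus.pointee = .noDataNow
                    return nil
                }
                consumed = true
                inputStatus.pointee = .haveData
                return inputBuffer
            }
            if status == .error {
                throw error ?? NSError(domain: "resampleAudioInChunks", code: -2)
            }

            // Append the converted 16 kHz mono samples to the running array
            if let channelData = outputBuffer.floatChannelData {
                output.append(contentsOf: UnsafeBufferPointer(start: channelData[0],
                                                              count: Int(outputBuffer.frameLength)))
            }
        }
        return output
    }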

vade commented 7 months ago

Many moons ago I wrote a pure AVFoundation-based CMSampleBuffer decoder that only keeps 30 seconds of buffers in memory at a time, so you never go above that:

I'm not sure if it's helpful, but you can find the code here: https://github.com/vade/OpenAI-Whisper-CoreML/blob/feature/RosaKit/Whisper/Whisper/Whisper/Whisper.swift#L361

I lost steam on my Whisper CoreML port, but I'd be happy to contribute if anything I can add is helpful!

ZachNagengast commented 7 months ago

@vade This looks nice, thanks for sharing!