After #40, all calls to MLModel.prediction were automatically switched from the async variant to the sync variant, since the two share the same name and the async variant is only available on 14+. The switch to the sync variant resulted in memory leaks: resources allocated in prediction were never released, so apps ran out of memory after transcribing just a few minutes of audio.
I added MLModel.asyncPrediction, which uses the async variant when available; the async variant has no memory-leak issues. On iOS 16 and macOS 13, the prediction is wrapped in a Task, which acts as an autoreleasepool. This should fix the issue on both 13 and 14+.
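A minimal sketch of the shape of the fix (the method name asyncPrediction comes from this PR; the exact availability checks, parameters, and fallback details here are illustrative, not the actual implementation):

```swift
import CoreML

extension MLModel {
    // Sketch only: prefer the async prediction API where it exists;
    // on older OS versions, run the sync variant inside a Task so
    // the resources it allocates are released after each call.
    func asyncPrediction(
        from input: MLFeatureProvider,
        options: MLPredictionOptions = MLPredictionOptions()
    ) async throws -> MLFeatureProvider {
        if #available(macOS 14.0, iOS 17.0, watchOS 10.0, tvOS 17.0, *) {
            // In an async context this resolves to the async overload,
            // which has no memory-leak issues.
            return try await prediction(from: input, options: options)
        } else {
            // Fallback: wrap the sync call in a Task, which acts as an
            // autoreleasepool for the allocations prediction makes.
            return try await Task {
                try self.prediction(from: input, options: options)
            }.value
        }
    }
}
```

Because the sync and async overloads share the same name, which one the compiler picks depends on the calling context; routing every call site through a single wrapper like this keeps that choice explicit.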
Tested on: