leohuang2013 opened 1 year ago
For "small" anything with less memory than an iPhone 12 will likely crash. iPhone 12 is marginal and it helps if you close all other apps.
Thanks @bjnortier for quick reply. I previously used the code from commit: 09e90680072d8ecdf02eaf21c393218385d2c616
It works perfectly on the same iPhone. Does this mean memory usage has increased significantly since that commit? Is it possible to use the same level of memory for CoreML?
When you load a CoreML model, it is optimised on the device, hence the "first run on a device may take a while ..." output. Afaik this is an internal operation and cannot be pre-computed (e.g. it cannot be optimised on another iPhone and then copied over).
This process requires a lot of memory. So if you compile with CoreML, the first time the model loads it will consume a lot of memory and might crash on an iPhone where it previously wouldn't.
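To make this concrete, here is a minimal sketch (not code from whisper.cpp itself) of loading a compiled Core ML model directly with the Core ML API. The resource name is a placeholder. The first `MLModel` load on a given device is where the on-device specialisation, and therefore the memory spike, happens; restricting `computeUnits` is one knob that may lower that peak, usually at the cost of inference speed.

```swift
import CoreML

// Hedged sketch: load a compiled Core ML encoder (.mlmodelc) bundled with the app.
// "ggml-base.en-encoder" is an example name; use whatever model you bundled.
func loadEncoder() throws -> MLModel {
    guard let url = Bundle.main.url(forResource: "ggml-base.en-encoder",
                                    withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }

    let config = MLModelConfiguration()
    // The first load on a device specialises the model for its hardware
    // (e.g. the Neural Engine), which is the memory- and time-intensive step.
    // Limiting compute units to CPU+GPU can reduce that peak, but typically
    // makes inference slower.
    config.computeUnits = .cpuAndGPU

    return try MLModel(contentsOf: url, configuration: config)
}
```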
I don't understand the question "Is it possible to use the same level of memory for CoreML?"
"When you load a CoreML model it is optimised on the device" - is model optimized saved to local storage, or it is in memory? If answer is the latter one, then every time, I restart app, then it will do optimization again.
"Is it possible we use same level of memory for CoreML?" What I mean is, if normal memory usage for whisper-ggml mode loading is 300+MB, then can we do CoreML model loading with 300+MB also.
If it is impossible, what approximate memory usage for CoreML model loading/optimization?
I followed the instructions in the README to convert the CoreML model, then tested it on macOS, where it works perfectly.
Then I copied the model into the SwiftUI example project, added the WHISPER_USE_COREML preprocessor flag and the CoreML source files, then compiled and ran it on the device. It crashes with the error: Failure Reason: Message from debugger: Terminated due to memory issue
Debugger output:
If I run this project on macOS, it works.
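For reference, a minimal sketch of how the model ends up being loaded from the SwiftUI example, assuming whisper.cpp's C API (`whisper_init_from_file`) is exposed through the bridging header. The file name "ggml-base.en" is only an example. When the library is built with WHISPER_USE_COREML, it derives the Core ML encoder path from the ggml model path (e.g. "ggml-base.en.bin" -> "ggml-base.en-encoder.mlmodelc"), so the compiled .mlmodelc bundle needs to be copied into the app next to the ggml model.

```swift
import Foundation

// Hedged sketch of creating the whisper context in the SwiftUI example.
func createWhisperContext() -> OpaquePointer? {
    // Example resource name; replace with the ggml model you bundled.
    guard let modelPath = Bundle.main.path(forResource: "ggml-base.en",
                                           ofType: "bin") else {
        return nil
    }
    // With WHISPER_USE_COREML defined, this call also loads the Core ML
    // encoder. The first run on a device triggers Core ML's on-device
    // specialisation, which is where the memory spike (and the
    // "first run on a device may take a while" message) occurs.
    return whisper_init_from_file(modelPath)
}
```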