Hello @jonnyjohnson1!
It looks like you are creating an app for iOS. iOS apps don't have full filesystem access: they can only read files within the app's sandboxed environment. However, it looks as if you're trying to load your model from a path on your desktop, which won't be available on the device.
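As a quick illustration (the file name here is hypothetical), any URL the app reads from has to resolve inside its own container, for example the Documents directory:

import Foundation

// Readable locations all live inside the app's sandbox
// (bundle, Documents, Caches, tmp). A Mac path such as
// /Users/you/models/... does not exist on the device.
let documents = FileManager.default.urls(for: .documentDirectory,
                                         in: .userDomainMask)[0]
let modelURL = documents.appendingPathComponent("model.mlpackage") // hypothetical name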
If the model you want to use is static (you have prepared it for your app's use and won't allow the user to select among different models), the easiest way to use it would be to add the .mlpackage model as a Core ML resource in Xcode. This will compile it for you (so you won't need to do it at runtime), add it to your app's bundle, and automatically generate a class that will allow you to load and use the model from Swift code.
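As a rough sketch (the class name is a placeholder; Xcode derives it from the .mlpackage file name), using such a generated class looks like this:

import CoreML

// "Float32LanguageModel" stands in for whatever class name Xcode
// generates from your .mlpackage file name.
let configuration = MLModelConfiguration()
configuration.computeUnits = .cpuAndGPU
let model = try Float32LanguageModel(configuration: configuration)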
If you don't want to include the model inside your app, then the steps would be something like this:
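A minimal sketch of that flow, assuming the .mlpackage has already been downloaded or copied somewhere inside the app's sandbox (downloadedURL is an assumed name):

import CoreML

// Compile the .mlpackage on device (this produces an .mlmodelc),
// then load the compiled model. The async compileModel(at:) variant
// is used here; a synchronous version also exists.
let compiledURL = try await MLModel.compileModel(at: downloadedURL)
let model = try MLModel(contentsOf: compiledURL)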
Please, let us know if that helps!
That clears up the organizational issue I've been running into with this; I kept thinking it was a file management problem on my end. I can load static models in the Simulator, but not on an actual device. I am guessing now that is because the .mlpackage needs to be included in the bundle resources.
I do wish to provide a static model to the user, and I like the ease of the tokenizer and generation classes within swift-transformers. Is there a way to load the model from the included Core ML resource into one of these transformer classes to make generation easy?
Going down the Core ML route, I've been writing a few of my own matrix transformation functions to ensure the dimensions of the input_ids match the dimensions of the model's input tensors, and all of this seems more complicated than necessary. Though it's incomplete, I like the work that has already been done in swift-transformers.
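For reference, the kind of shape plumbing described above can be sketched with Core ML's MLShapedArray (the token values and the [1, sequenceLength] shape are illustrative, not from the library):

import CoreML

// Pack token ids into a rank-2 array of shape [1, sequenceLength]
// so the dimensions match the model's input_ids tensor.
let tokens: [Int32] = [101, 2023, 2003, 102]
let inputIds = MLShapedArray<Int32>(scalars: tokens, shape: [1, tokens.count])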
@pcuenca Let's say the model is loaded and included inside the app, and its generated class is Float32LanguageModel.
Now this step: "and automatically generate a class that will allow you to load and use the model using Swift code."
Is there a direct path to convert the generated model class into a LanguageModel, so the rest of the LanguageModel class can be used as intended?
So instead of: languageModel = try await ModelLoader.load(url: modelURL)
The code would be something like: languageModel = try await ModelLoader.loadFromCoreMLClass(Float32LanguageModel)
Answer:
Loading code:
languageModel = try await ModelLoader.loadFromClassURL(url: Float32LanguageModel.urlOfModelInThisBundle)
And here is the function added to the ModelLoader class in the ModelLoader.swift file:
static func loadFromClassURL(url: URL) async throws -> LanguageModel {
    // urlOfModelInThisBundle on the Xcode-generated class is a non-optional URL,
    // so the parameter doesn't need to be optional or force-unwrapped.
    return try LanguageModel.loadCompiled(url: url, computeUnits: .cpuAndGPU)
}
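Note that loadCompiled expects an already-compiled model (an .mlmodelc), which is exactly what the generated urlOfModelInThisBundle points at: Xcode compiles bundled .mlpackage files at build time, so no on-device compile step is needed.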
I am having an issue with loading a model package into my app.
It works with the provided swift-transformers example app when I select the package from a folder, but when I import the package into my own app, it fails, even though I use the exact same function as the example app.
I have (from ContentView.swift):
The code fails at this step (from ModelLoader.swift):
It fails with this error:
Checks:
I have experimented with many variations of the import and the error handling.
Possible Solution: I am thinking it comes down to a simple setting in the build settings, or something like that. I did not get a whole lot of info from the Core ML documentation.
I have basic build settings.