-
Running iOS 18.0 beta 6 on an M2 iPad Pro; the M1 Max Mac is running macOS 14.6.1 but with Xcode 16.0 beta 5 (swift-transformers preview 45e5fbb, CompactSlider 1.1.6, Path.swift 1.4.1, swift-argument-parser 1…
-
## Update
It was an issue with the tokens file: it was invalid. Maybe we can improve the error message?
---
I tried to run the TTS model on an M1 macOS machine with [examples/tts.rs](https://github.com/…
-
### Describe the issue
When running inference of a specific dynamic-shape image filter model using CoreML EP, output pixels are slightly shifted towards the bottom left of the image. Pixels at the b…
-
### Describe the feature request
The WebNN CoreML backend doesn't support the int64 data type, but some ONNX ops produce int64 output, e.g. ArgMax, ArgMin, etc.; CoreML's ArgMax produces int32 …
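One workaround for a backend without int64 support is to downcast ArgMax/ArgMin indices to int32, which is lossless whenever the reduced axis fits in int32 range. A minimal NumPy sketch of that idea (the function name is hypothetical, not part of any of these libraries):

```python
import numpy as np

# Hypothetical sketch: emulate an ArgMax whose indices are emitted as
# int32, as a backend lacking int64 might do. The downcast is safe
# whenever the reduced axis has fewer than 2**31 elements, so every
# possible index value fits in int32.
def argmax_int32(x: np.ndarray, axis: int) -> np.ndarray:
    if x.shape[axis] >= 2**31:
        raise OverflowError("axis too long for int32 indices")
    idx = np.argmax(x, axis=axis)  # index dtype is int64 on most 64-bit platforms
    return idx.astype(np.int32)    # lossless for realistic tensor shapes

out = argmax_int32(np.array([[1.0, 3.0, 2.0]]), axis=1)
```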
-
How can this model be converted to a CoreML-compatible model so that it can be used in an iOS application?
-
### Describe the documentation issue
It's not clear what the execution provider strings should be in Python.
E.g. I want to enable DirectML or CoreML. I can see them here https://onnxruntime.ai/doc…
-
Core ML models never finish on my M1 Pro; eventually I got an error, and I couldn't find any relevant information about it in the repo:
xcrun: error: unable to find utility "coremlc"
tried restori…
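The `unable to find utility "coremlc"` error usually means `xcrun` is resolving tools from the Command Line Tools package rather than a full Xcode installation. A possible fix, assuming Xcode is installed at the standard `/Applications/Xcode.app` path:

```shell
# Check which developer directory xcrun is currently using
xcode-select -p

# Point it at the full Xcode install (adjust the path if yours differs)
sudo xcode-select -s /Applications/Xcode.app/Contents/Developer

# Verify that coremlc is now resolvable
xcrun --find coremlc
```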
-
I would like to convert the model to CoreML to use it on the iPhone.
But the DSNTNN layer is not supported because of its flip and linspace PyTorch operators.
Would it be possible to implement those …
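Both ops can often be decomposed into primitives that converters already support: `flip` is a gather with reversed indices, and `linspace` is `start + arange(num) * step`. A NumPy sketch of those decompositions (function names are illustrative, not part of coremltools or DSNTNN):

```python
import numpy as np

# Sketch: rewrite flip and linspace in terms of more primitive ops,
# the kind of decomposition a converter pass (or a patched model)
# could use when the originals are unsupported.

def flip_via_gather(x: np.ndarray, axis: int) -> np.ndarray:
    # flip == gather along the axis with indices reversed
    idx = np.arange(x.shape[axis] - 1, -1, -1)
    return np.take(x, idx, axis=axis)

def linspace_via_arange(start: float, stop: float, num: int) -> np.ndarray:
    # linspace == start + arange(num) * step, step = (stop - start) / (num - 1)
    step = (stop - start) / (num - 1)
    return start + np.arange(num) * step

flipped = flip_via_gather(np.array([1, 2, 3]), axis=0)
ramp = linspace_via_arange(-1.0, 1.0, 5)
```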
-
### 🚀 The feature, motivation and pitch
I was trying to export some standard torchvision detection models, but export failed due to
```
torch._dynamo.exc.Unsupported: call_function BuiltinVariabl…
-
Hi!
I'm converting Microsoft's Phi-2 model to use with `swift-transformers`.
The conversion process is actually very seamless:
```
from transformers import AutoTokenizer, AutoModelForCau…