-
M2 Pro (12 cores: 8 performance + 4 efficiency; 32 GB RAM; 19 GPU cores) performance on macOS Sonoma (14.5). Will try Sequoia next.
-
It would be wonderful if DeepSpeech models could be converted to CoreML, for offline use in apps. Here is documentation to do just that. https://developer.apple.com/documentation/coreml/converting_tra…
-
| Field | Value |
| --: | :-- |
| **Name of layer type** | `upsample_bicubic2d` |
| **PyTorch or TensorFlow** | PyTorch |
| **coremltools Version** | 6.1 |
| **PyTorch Version** | 1.12.1 |
| **Impact** | On…
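Since `upsample_bicubic2d` had no Core ML equivalent in that coremltools release, one common workaround is to swap bicubic upsampling for the supported bilinear mode before tracing, at some cost in output quality. A sketch, under the assumption that the bicubic resize enters the graph through `nn.Upsample` modules (a functional `F.interpolate(..., mode="bicubic")` call would need a source-level change instead):

```python
import torch.nn as nn

def bicubic_to_bilinear(model: nn.Module) -> nn.Module:
    # Assumption: bicubic resizing appears via nn.Upsample modules.
    # Bilinear is supported by the converter and is often visually close.
    for m in model.modules():
        if isinstance(m, nn.Upsample) and m.mode == "bicubic":
            m.mode = "bilinear"
    return model

# Toy model that would otherwise hit the missing-op error on conversion.
net = nn.Sequential(
    nn.Conv2d(3, 3, kernel_size=1),
    nn.Upsample(scale_factor=2, mode="bicubic"),
)
net = bicubic_to_bilinear(net)
```

The alternative, when bicubic fidelity matters, is to register a composite op via coremltools' `register_torch_op` mechanism, which is more work but keeps the original behavior.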
-
**Is your feature request related to a problem? Please describe.**
I would like to be able to export computer-vision models to CoreML so that they run on Apple devices.
By using CoreML, we can run models…
-
I'm trying to do some basic integration in an iOS app and am wondering why CoreML support is omitted from Package.swift.
-
I have a version of these nodes working via MPS for those with MacBooks. On my M1 Pro (32 GB) it took 60 seconds for 32 frames and 650 seconds for 600 frames, so about 1 second per animation frame.
Re…
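Worked out, those two measurements give noticeably different per-frame times, which suggests (an assumption, since the runs weren't profiled) a fixed startup cost that the longer run amortizes:

```python
# Per-frame render time from the two measurements quoted above.
runs = {32: 60.0, 600: 650.0}  # frames -> total seconds
for frames, seconds in runs.items():
    print(f"{frames} frames: {seconds / frames:.2f} s/frame")
```

This prints about 1.88 s/frame for the short run and 1.08 s/frame for the long one, so the "about 1 second per frame" figure holds once per-run overhead is amortized.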
-
Hi there, I am following the instructions to get CoreML working on an Apple Silicon M1.
After I get everything going and try to transcribe the jfk sample, I only get a wrong transcription:
```
[00:0…
-
I have trouble exporting the simple model I created for classifying images of me and my cats. The code runs fine through saving the model, but every time I try to export the model by "model.export_cor…
-
My model's input shape is (1×3×5568×5568). When the model starts to initialize, I get this error message. What does it mean?
E5RT encountered an STL exception. msg = MILCompilerForANE error: failed to compile ANE model usin…
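One plausible reading of that failure (an assumption; the truncated message doesn't confirm the cause) is that the input is simply too large for the ANE compiler. The arithmetic on a single activation tensor at that shape:

```python
# Memory footprint of one 1x3x5568x5568 activation tensor.
elements = 1 * 3 * 5568 * 5568
fp32_bytes = elements * 4
fp16_bytes = elements * 2
print(f"{elements:,} elements")
print(f"fp32: {fp32_bytes / 2**20:.0f} MiB, fp16: {fp16_bytes / 2**20:.0f} MiB")
```

That is roughly 93 million elements, about 355 MiB in fp32 (177 MiB in fp16) for the input alone, before any intermediate activations at that resolution. Downscaling or tiling the input is a common workaround when the ANE rejects a model of this size.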
-
My hunch on how to do this (credit to [Matthijs Hollemans](http://www.matthijshollemans.com/) for inspiring this idea in this [Stack Overflow answer](https://stackoverflow.com/questions/61712399/how-t…