-
Huge fan of machinethink.net. Sorry to ask you this here, but I wasn't sure of the best way to do so. You'd recently mentioned that you noticed Core ML inference falls back to GPU if there's a custom la…
-
**Description**
Please consider adding Core ML model package format support to utilize the Apple Silicon Neural Engine + GPU.
**Success Criteria**
Utilize both ANE & GPU, not just GPU on Apple Sili…
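On the Python side this request maps onto coremltools' compute-unit selection. A minimal sketch, assuming coremltools >= 6 is installed and an `.mlpackage` exists on disk (the path is a placeholder):

```python
# Hedged sketch: asking Core ML for the Neural Engine via coremltools.
# "Model.mlpackage" is a hypothetical path, not a file from this repo.

def load_for_ane(path: str):
    import coremltools as ct  # lazy import; Apple-platform-oriented dependency
    # CPU_AND_NE requests CPU + Neural Engine; ComputeUnit.ALL adds the GPU.
    # Core ML still places layers itself: a custom layer forces the
    # unsupported ops to fall back to GPU/CPU regardless of this setting.
    return ct.models.MLModel(path, compute_units=ct.ComputeUnit.CPU_AND_NE)

# model = load_for_ane("Model.mlpackage")
```

Note that the compute-unit setting is a request, not a guarantee; Core ML decides per-layer placement at load time.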
-
### Duplicates
- [X] I have searched the existing issues
### Summary 💡
**Problem**
Please consider adding Core ML model package format support to utilize the Apple Silicon Neural Engine + GPU.
##…
-
Hi, is it possible to add an Engine option? I would like to use Neural instead of Standard.
Also, could you update the AWS SDK to the latest version?
-
I noticed Apple supports ANE Transformers.
According to their own words:
>M1 or newer chip to achieve up to **10 times faster and 14 times lower peak memory**
Does that mean running 30B or 65…
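The quoted claim can be put into rough numbers. A back-of-envelope sketch, taking Apple's "14 times lower peak memory" figure at face value and assuming fp16 weights (both are assumptions; the 30B/65B sizes are the ones asked about above):

```python
# Rough peak-memory estimate for large models under the quoted
# ANE Transformers figures. Illustrative arithmetic, not measurements.

BYTES_PER_PARAM_FP16 = 2
PEAK_MEMORY_REDUCTION = 14  # Apple's quoted factor

def peak_memory_gb(params_billions: float) -> tuple[float, float]:
    """Return (baseline_gb, reduced_gb) peak-memory estimates."""
    baseline = params_billions * 1e9 * BYTES_PER_PARAM_FP16 / 1e9
    return baseline, baseline / PEAK_MEMORY_REDUCTION

for n in (30, 65):
    base, reduced = peak_memory_gb(n)
    print(f"{n}B params: ~{base:.0f} GB fp16 baseline, ~{reduced:.1f} GB at 14x reduction")
```

Even if the factor held for weights and not just activations, whether a 30B or 65B model fits still depends on the machine's unified-memory size.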
-
**Description**
Please consider adding Core ML model package format support to utilize the Apple Silicon Neural Engine + GPU.
**Additional Context**
List of Core ML package format models
https…
-
## 🚀 Feature
Support 16-core Neural Engine in PyTorch
## Motivation
PyTorch should be able to use the Apple 16-core Neural Engine as a backend.
## Pitch
Since the ARM macs have u…
fire updated 11 months ago
-
We are working on converting a YOLOv5 model to a Core ML model, but when using the Neural Engine we get results back along with an error.
The error is as follows:
doUnloadModel:options:qos:error:: model=_ANEModel: { modelURL=file:///var/containers/Bundle/Application/F0D65F11-3592-461B-96C7-2E3…
-
### System Info
```shell
Transformers fails with the following error when trying to use AWQ with TGI, the neural compression engine, or Optimum Habana
ValueError: AWQ is only available on GPU
```
#…
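The `ValueError` above is raised by design: AWQ's fused kernels are GPU-only. A minimal sketch of the guard an application can apply before choosing a quantization scheme; `cuda_available` is passed in explicitly here to keep the sketch self-contained (in practice it would come from `torch.cuda.is_available()`):

```python
# Hedged sketch: pick a quantization scheme based on device support,
# instead of hitting "ValueError: AWQ is only available on GPU" at load time.

def pick_quantization(cuda_available: bool):
    """Return "awq" when a CUDA GPU is present, else None (no quantization)."""
    if cuda_available:
        return "awq"   # GPU-only fused kernels
    return None        # CPU / HPU backends must skip AWQ

print(pick_quantization(True))   # -> awq
print(pick_quantization(False))  # -> None
```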
-
**What is the bug?**
Searching with a neural query brings down OpenSearch 2.16.0.
This happens with the 2.16.0 image `FROM opensearchproject/opensearch:2.16.0` but not with 2.15.0 or lower.
**How to repr…
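For context, the kind of request involved is a `neural` query clause from the OpenSearch neural search plugin. A sketch of its shape (index, field name, and model id are placeholders, not values from this report):

```
POST /my-index/_search
{
  "query": {
    "neural": {
      "passage_embedding": {
        "query_text": "example search text",
        "model_id": "<deployed-model-id>",
        "k": 10
      }
    }
  }
}
```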