-
### Is there an existing issue for this problem?
- [X] I have searched the existing issues
### Operating system
Linux
### GPU vendor
None (CPU)
### GPU model
AMD Radeon Graphics 5…
-
# No recognition results on an arm64 Windows machine
Script:
```bat
@echo off
setlocal
:: Specify the model path
set MODEL_PATH=C:\\Users\\Admin123\\.cache\\modelscope\\hub\\lovemefan\\SenseVoiceGGUF\\gguf-fp16-sense-voice-small.bin
::…
-
### OpenVINO Version
2024.2.0
### Operating System
Windows System
### Device used for inference
GPU
### Framework
PyTorch
### Model used
Padim
### Issue description
inferencer = OpenVINOInf…
-
**Describe the bug**
When I run the code in this repository using DirectML, I get the following warnings and error:
2024-08-28 12:12:55.3413627 [W:onnxruntime:onnxruntime-genai, inference_session.…
-
Hiya,
I successfully built clang/LLVM and the runtime (with CMake) per the instructions. When I compile the test program, I get the error:
./llvm/new-git-cilk-Feb-2-3/build/cilkrt/libcilkrts.so: undefi…
-
If the parameter `tree_learner` in my model.txt is `serial`, can each tree in this model be predicted using multiple threads?
When I tested it, I found only one thread at 100% CPU usage; all the other threa…
-
Hi there, I am using an 8Gen3 (Xiaomi 14 Pro, 68 GB/s bandwidth) and following the Android Cross Compilation guide, Option 1: Use Prebuilt Kernels, to test llama-2-7b-4bit token generation performance.
it…
-
### Your current environment
The output of `python collect_env.py`
```text
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch…
-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
…
-
**Description**
Our Electra-based model takes about 540 ms per inference on CPU with ONNX Runtime (via the mcr.microsoft.com/azureml/onnxruntime:v1.4.0 container). The same model run through Triton r…
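When chasing a CPU latency gap like this, one thing worth ruling out is the backend's threading and instance configuration. As a hedged sketch (the model name and all values below are illustrative placeholders, not taken from the report), the Triton ONNX Runtime backend exposes its intra-op thread pool and instance count via `config.pbtxt`, and its defaults can differ from a standalone onnxruntime session:

```protobuf
# config.pbtxt — illustrative values only
name: "electra_model"
backend: "onnxruntime"
max_batch_size: 8

# Number of model instances served on CPU.
instance_group [
  { count: 1, kind: KIND_CPU }
]

# Intra-op thread pool for ONNX Runtime; under-provisioned threads
# can inflate per-inference CPU latency relative to bare onnxruntime.
parameters {
  key: "intra_op_thread_count"
  value: { string_value: "4" }
}
```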