-
/kind feature
The NCNN and MNN inference frameworks are widely used on ARM, and the OpenVINO inference framework is widely used on x86 platforms. To speed up inference, we will not use the original inference part of ten…
-
As in this codebase, direct `.mnn` weight files are provided, which are parsed using the custom C++ module and then inferred using the MNN inference engine, but in one of your comments you mentioned first co…
-
# depth-wise conv
x = torch.einsum('nctv,cvw->nctw', (x, dw_gcn_weight))
# point-wise conv
x = torch.einsum('nctw,cd->ndtw', (x, self.pw_gcn_weight))
Can these be replaced by convolutions with `groups`?
…
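They can. The depth-wise einsum is a grouped convolution with a `(1, V)` kernel that collapses the vertex axis, and the point-wise einsum is a plain 1x1 convolution. A minimal sketch verifying the equivalence numerically (the shape names `N, C, T, V, W, D` and the random weights are assumptions based on the snippet, not the original model's values):

```python
import torch
import torch.nn.functional as F

N, C, T, V, W, D = 2, 3, 4, 5, 6, 7
x = torch.randn(N, C, T, V)
dw_gcn_weight = torch.randn(C, V, W)   # depth-wise weight, as in the snippet
pw_gcn_weight = torch.randn(C, D)      # point-wise weight, as in the snippet

# reference: the two einsums from the issue
ref = torch.einsum('nctv,cvw->nctw', x, dw_gcn_weight)
ref = torch.einsum('nctw,cd->ndtw', ref, pw_gcn_weight)

# depth-wise step as a grouped conv: kernel (1, V) consumes the vertex dim,
# out_channels = C*W with groups=C, then W is moved back to a spatial axis
k_dw = dw_gcn_weight.permute(0, 2, 1).reshape(C * W, 1, 1, V)
y = F.conv2d(x, k_dw, groups=C)                 # (N, C*W, T, 1)
y = y.reshape(N, C, W, T).permute(0, 1, 3, 2)   # (N, C, T, W)

# point-wise step as a 1x1 conv: weight (D, C, 1, 1) is the transposed matrix
k_pw = pw_gcn_weight.t().reshape(D, C, 1, 1)
y = F.conv2d(y, k_pw)                           # (N, D, T, W)

print(torch.allclose(ref, y, atol=1e-5))
```

Note that the grouped-conv form of the depth-wise step needs the reshape/permute round-trip because convolution places the `W` outputs on the channel axis, whereas the einsum keeps them spatial.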
-
// Configure the license plate recognition parameters
configuration.models_path = model_path; // path to the model folder
How should this model_path be configured in an iOS framework?
-
`libtorch` is easy to use. However, its shared libraries are large, as shown below.
![80a](https://user-images.githubusercontent.com/5284924/177512166-780505e6-21b5-4ece-96cb-6e2553ca4ae7.png)
…
-
Mobile Neural Network (MNN) is a universal and efficient inference engine tailored to mobile applications. In this paper, the contributions of MNN include:
1. presenting a mechanism called pre-infer…
-
# Platform (include target platform as well if cross-compiling): macOS with M2
# Github Version: 2.9.0
It happened when using `release/mnn_2.9.0_macos_x64_arm82_cpu_opencl_metal.zip` and linking it w…
-
Hello everyone, here is code that converts darknet weights to an ONNX model. It currently supports converting **yolov4**, **yolov4-tiny**, **yolov3**, **yolov3-spp**, and **yolov3-tiny** d…
-
## error log | 日志或报错信息 | ログ
I encountered a native exception, "A/libc: Fatal signal 6 (SIGABRT)", on an Android mobile device when using [TengineKit](https://github.com/OAID/TengineKit).
## con…
-
### 🐛 Describe the bug
I tried building PyTorch manually with the Vulkan backend enabled on an x86_64 Ubuntu 20.04 system.
However, when creating a tensor object on a Vulkan device, PyTorch always s…