-
### Describe the issue
We recently altered the ONNX model we run in production in our mobile app so that it includes the preprocessing steps, which were previously performed separately before inference. Because it…
-
Hi folks,
First off, I want to say amazing work—I'm really impressed by this project!
I've been trying to perform inference on the Mobile Aloha robot in MuJoCo, but I'm encountering an issue: th…
-
After generating the PP-OCRv4 ONNX model with the three commands below, inference with the CUDA provider is very slow, while the CPU provider is actually faster. What could be causing this?
paddle2onnx command:
paddle2onnx --model_dir ch_PP-OCRv4_det_infer --model_filename inference.pdmodel --params_file…
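For comparison, a minimal latency check between execution providers can be sketched like this (the model path, input shape, and input array are placeholders, not the reporter's actual setup; `onnxruntime-gpu` must be installed for the CUDA provider):

```python
import time

def bench(run_fn, n_warmup=3, n_iters=20):
    """Average wall-clock time of run_fn over n_iters after a short warmup."""
    for _ in range(n_warmup):
        run_fn()
    t0 = time.perf_counter()
    for _ in range(n_iters):
        run_fn()
    return (time.perf_counter() - t0) / n_iters

def compare_providers(model_path, x):
    """Sketch: time the same model under the CUDA and CPU execution providers.
    model_path and the input array x are placeholders for the real detector."""
    import onnxruntime as ort  # deferred import so bench() works without it
    for providers in (["CUDAExecutionProvider"], ["CPUExecutionProvider"]):
        sess = ort.InferenceSession(model_path, providers=providers)
        input_name = sess.get_inputs()[0].name
        print(providers[0], bench(lambda: sess.run(None, {input_name: x})))
```

One common cause worth ruling out with warmup iterations: the first CUDA runs pay one-time initialization costs, and dynamic input shapes (typical for OCR detection crops) can force the GPU path to re-optimize per shape, which may leave the CPU provider faster end to end.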
-
### Describe the issue
When running a YOLOv8 FP32 ONNX model with QNN, it works on Snapdragon 8 Gen 2 (SM8550 phone: Redmi K70), but it fails on Snapdragon 888 (SM8350 phone: realme gt…
-
Dear @xiongzhu666
I wanted to contact you directly via email but could not find your address.
I was wondering whether you have made any progress on this subject. We're looking to run this on a mobile device.
Don't hes…
-
### Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
No
### OS Platform and Distribution
Linux Ubuntu 16.04
### Mobile device if the issue happ…
-
### 1. System information
- OS Platform and Distribution: Windows 11
- TensorFlow installation: pip
- TensorFlow library: 2.18.0
### 2. Code
import tensorflow as tf
input_shape = (224,…
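The snippet above is cut off, but a script of this shape typically ends by handing the Keras model to the TFLite converter; a minimal sketch of that step (the model argument and output path are assumptions, not the reporter's actual code):

```python
def convert_to_tflite(keras_model, out_path="model.tflite"):
    """Sketch: serialize a Keras model to a .tflite flatbuffer.
    Assumes TensorFlow 2.x; keras_model and out_path are placeholders."""
    import tensorflow as tf  # deferred import: TF is only needed here
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    tflite_bytes = converter.convert()
    with open(out_path, "wb") as f:
        f.write(tflite_bytes)
    return out_path
```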
-
### 🔎 Search before asking
- [X] I have searched the PaddleOCR [Docs](https://paddlepaddle.github.io/PaddleOCR/) and found no similar bug report.
- [X] I have searched the PaddleOCR [Issues](https://…
-
### **Adaptation for macOS and Mobile Devices**
Given the model's relatively small parameter size and efficient performance, I was wondering if there are any plans to adapt it for macOS devices with …
-
Hi,
I have trained "ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8" on a custom dataset using Colab. The TFLite model works fine in Colab, but loading it in Flutter gives much lower accura…