-
In the virtual environment, paddle-gpu and paddlex are both version 3.0.0-b1.
![image](https://github.com/user-attachments/assets/0f017141-1e37-4c97-a213-121423081106)
Running seal recognition from the paddlex command line raises an error:
![image](https://github.com/user-attachments/assets…
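For reference, here is a minimal sketch of the same pipeline driven through the PaddleX Python API; the pipeline name `seal_recognition` and the input image path are assumptions, since the report's actual CLI command and image are not shown.

```python
# Hedged sketch: pipeline name "seal_recognition" and the input path are
# placeholders, not taken from the report above.
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="seal_recognition")
for res in pipeline.predict("seal_sample.png"):
    print(res)  # inspect the result (or the same error, if it reproduces here)
```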
-
1. Is there a large performance difference, when using TRT acceleration (based on the PaddleDetection 2.3 code), between an inference model exported in a paddle 2.6 environment and one exported in a paddle 2.3 environment?
2. For the same paddle 2.3 inference model (trained with paddle 2.6, exported with paddle 2.3), on a 1050 Ti (CUDA 10.2 / cuDNN 7.6.5 / TRT 7.0.0…
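For context on what "TRT acceleration" means here, below is a minimal sketch of enabling the TensorRT subgraph engine through the Paddle Inference Python API; the model file names and TRT settings are placeholders, not the PaddleDetection 2.3 deployment code itself.

```python
# Hedged sketch of turning on TensorRT in Paddle Inference; file names and
# settings below are placeholders.
from paddle.inference import Config, PrecisionType, create_predictor

config = Config("model.pdmodel", "model.pdiparams")  # exported inference model
config.enable_use_gpu(200, 0)                        # 200 MB initial pool, GPU 0
config.enable_tensorrt_engine(
    workspace_size=1 << 30,
    max_batch_size=1,
    min_subgraph_size=3,
    precision_mode=PrecisionType.Float32,  # FP16 gives little benefit on a Pascal 1050 Ti
    use_static=False,
    use_calib_mode=False,
)
predictor = create_predictor(config)
```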
-
Since users currently run into many version-management issues when using PaddleSpeech, some suggestions for working with different versions are provided here.
PaddleSpeech == develop --> PaddlePa…
-
- [x] Deployment documentation
- [ ] ONNX deployment
- [x] MaskRCNN
- [ ] JSON output of results
- [ ] Jetson build and deployment
- [ ] TensorRT
- [ ] Performance testing
- [ ] Remove the USE_STATIC_LIB option on Windows
- [ ] PaddleServing deployment
- [ ] Comparison of native Triton vs. the ONNX-based path
- [x] Some of the code's…
-
### Context
The current PaddlePaddle quantization implementation differs from ONNX's.
#### Same
- PaddlePaddle translates `quantize_linear` and `dequantize_linear` in the paddle frontend.…
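For concreteness, here is a minimal numpy sketch of the linear quantize/dequantize math that the ONNX `QuantizeLinear`/`DequantizeLinear` operators define and that Paddle's `quantize_linear`/`dequantize_linear` ops express; the int8 range and the sample values are illustrative only.

```python
import numpy as np

# q     = saturate(round(x / scale) + zero_point)
# x_hat = (q - zero_point) * scale
def quantize_linear(x, scale, zero_point, qmin=-128, qmax=127):
    q = np.round(x / scale) + zero_point
    return np.clip(q, qmin, qmax).astype(np.int8)

def dequantize_linear(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([0.05, -0.4, 1.2], dtype=np.float32)
scale, zero_point = 0.01, 0
q = quantize_linear(x, scale, zero_point)
print(q, dequantize_linear(q, scale, zero_point))
```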
-
After exporting the trained PointRend model to ONNX with the official export_onnx.py, np.testing.assert_allclose(onnx_out, paddle_out, rtol=0, atol=1e-03) fails; the code is below. Exporting the model with paddle2onnx was also tried, and in practice the accuracy is likewise inconsistent.
def export_onnx(args):
args.config = '/workspa…
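Since the export code above is truncated, here is a hedged sketch of the kind of Paddle-vs-ONNX output comparison the report describes; the model paths and input shape are placeholders, not the report's actual files.

```python
# Hedged sketch: "output/pointrend_inference", "output/pointrend.onnx" and the
# input shape are placeholders for the report's exported artifacts.
import numpy as np
import onnxruntime as ort
import paddle

paddle.set_device("cpu")
model = paddle.jit.load("output/pointrend_inference")
x = np.random.rand(1, 3, 512, 512).astype("float32")

out = model(paddle.to_tensor(x))
paddle_out = out[0].numpy() if isinstance(out, (list, tuple)) else out.numpy()

sess = ort.InferenceSession("output/pointrend.onnx")
onnx_out = sess.run(None, {sess.get_inputs()[0].name: x})[0]

# Same tolerance as the failing check in the report
np.testing.assert_allclose(onnx_out, paddle_out, rtol=0, atol=1e-03)
```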
-
## Problem description
SwinTransformer model conversion fails, using this repository's conversion code and model.
Runtime environment: aarch64, CPU build of torch; both 2.5.1 and 1.10.2 fail with the same error.
PaddlePaddle = 2.6.1:
X2Paddle = 1.5.0
- Error message
- warnings.warn(
Fail to generate inference model!…
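For reference, a hedged sketch of the trace-based X2Paddle conversion path such a report typically goes through; the torchvision `swin_t` constructor and the input shape stand in for the repository's own SwinTransformer model and conversion script, and are assumptions.

```python
# Hedged sketch: the model below is a torchvision stand-in, not the
# repository's SwinTransformer; input shape is a placeholder.
import torch
from torchvision.models import swin_t
from x2paddle.convert import pytorch2paddle

model = swin_t(weights=None)
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)

pytorch2paddle(
    module=model,
    save_dir="pd_model_swin",
    jit_type="trace",              # trace-based conversion
    input_examples=[dummy_input],
)
```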
-
Running jieba.enable_paddle() raises an error:
AssertionError: In PaddlePaddle 2.x, we turn on dynamic graph mode by default, and 'data()' is only supported in static graph mode. So if you want to use this api, pleas…
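The assertion indicates jieba's paddle mode is calling a static-graph-only API, so one workaround that is sometimes suggested is to switch Paddle to static graph mode before enabling paddle mode; whether jieba's paddle mode fully works on this Paddle version is not confirmed here, and downgrading to a Paddle version jieba supports is the other common route.

```python
# Hedged workaround sketch: nothing here is confirmed against this exact setup.
import paddle
import jieba

paddle.enable_static()   # Paddle 2.x defaults to dynamic graph mode
jieba.enable_paddle()    # the call that raised the AssertionError

print("/".join(jieba.cut("我来到北京清华大学", use_paddle=True)))
```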
-
registry.baidubce.com/paddlepaddle/paddle 2.6.1 2f3e2fc3a97c 7 weeks ago 4.66GB
registry.baidubce.com/paddlepaddle/paddle 2.4.0-cpu …
-
# Steps to reproduce
```
git clone https://github.com/PaddlePaddle/PaddleOCR
git checkout e621d034b584fae03c22cd51c26f6a52d62417b6
cd PaddleOCR
mkdir -p inference
wget https://paddleocr.bj.b…