PaddlePaddle / PaddleNLP

👑 Easy-to-use and powerful NLP and LLM library with 🤗 Awesome model zoo, supporting wide-range of NLP tasks from research to industrial applications, including 🗂Text Classification, 🔍 Neural Search, ❓ Question Answering, ℹ️ Information Extraction, 📄 Document Intelligence, 💌 Sentiment Analysis etc.
https://paddlenlp.readthedocs.io
Apache License 2.0

[Question]: UTC model Python deployment error #5972

Open gggdroa opened 1 year ago

gggdroa commented 1 year ago

Please describe your question

When doing the Python deployment of the model and running ./deploy/python/infer.py, it fails with: Segmentation fault (core dumped). What causes this error?

[screenshot of the error output]

LemonNoel commented 1 year ago

Could you provide more information? The paddle and paddlenlp versions, and the script you ran before the error.
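For collecting that information, here is a minimal sketch (not an official diagnostic script) that prints the installed versions and runs Paddle's built-in GPU self-check; it assumes paddle, paddlenlp, and optionally fastdeploy are importable in the deployment environment:

```python
# Minimal environment report (a sketch): print the versions asked for above and
# sanity-check the GPU build of Paddle before looking at the deploy script.
import paddle
import paddlenlp

print("paddle:", paddle.__version__)
print("paddle CUDA:", paddle.version.cuda())
print("paddle cuDNN:", paddle.version.cudnn())
print("paddlenlp:", paddlenlp.__version__)

try:
    import fastdeploy
    print("fastdeploy:", fastdeploy.__version__)
except ImportError:
    print("fastdeploy: not installed")

# Verifies that PaddlePaddle can allocate GPU memory and run a simple program.
paddle.utils.run_check()
```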

gggdroa commented 1 year ago

Could you provide more information? The paddle and paddlenlp versions, and the script you ran before the error.

paddlenlp 2.5.2 paddlepaddle-gpu 2.4.2.post117 python3.7

I was following the model's Python deployment instructions; running /python/infer.py under the deploy directory reports the error.

zbyzfw commented 1 year ago

I have the same error. It runs fine on Windows but not on Linux, and switching versions does not help. The combinations that still fail: fastdeploy 1.0.7, paddlenlp 2.5.2/2.4.0, paddlepaddle-gpu 2.4.2.post117/2.4.0/2.3.2, python 3.7/3.8.

luoruijie commented 7 months ago

I ran into this as well, but after upgrading PaddlePaddle to a newer version the error no longer occurred.

goldwater668 commented 1 month ago

I hit the same issue. My environment:

paddlenlp                      2.7.2
paddlepaddle-gpu               2.6.0
fast-tokenizer-python          1.0.2
fastapi                        0.110.0
fastdeploy-gpu-python          0.0.0
fastdeploy-tools               0.0.5

The error output:

I0524 15:02:55.838413 1151823 allocator_facade.cc:435] Set default stream to 0x143f19e0 for StreamSafeCUDAAllocator(0xdd50af0) in Place(gpu:0)
I0524 15:02:55.838426 1151823 allocator_facade.cc:373] Get Allocator by passing in a default stream
I0524 15:02:55.838486 1151823 gpu_info.cc:224] [cudaMalloc] size=0.00244141 MB, result=0
I0524 15:02:55.838553 1151823 gpu_info.cc:224] [cudaMalloc] size=0.000244141 MB, result=0
I0524 15:02:55.838563 1151823 gpu_info.cc:224] [cudaMalloc] size=0.000244141 MB, result=0
I0524 15:02:55.838572 1151823 gpu_info.cc:224] [cudaMalloc] size=0.000244141 MB, result=0
I0524 15:02:55.838580 1151823 gpu_info.cc:224] [cudaMalloc] size=0.000244141 MB, result=0
I0524 15:02:55.838587 1151823 gpu_info.cc:224] [cudaMalloc] size=0.000244141 MB, result=0
I0524 15:02:55.838647 1151823 gpu_info.cc:224] [cudaMalloc] size=0.000244141 MB, result=0
I0524 15:02:55.838654 1151823 gpu_info.cc:224] [cudaMalloc] size=0.000244141 MB, result=0
I0524 15:02:55.838665 1151823 gpu_info.cc:224] [cudaMalloc] size=0.0732422 MB, result=0
I0524 15:02:55.839088 1151823 gpu_info.cc:224] [cudaMalloc] size=0.0288086 MB, result=0
I0524 15:02:55.839371 1151823 gpu_info.cc:224] [cudaMalloc] size=0.0732422 MB, result=0
I0524 15:02:55.839381 1151823 gpu_info.cc:224] [cudaMalloc] size=0.219727 MB, result=0
I0524 15:02:55.857205 1151823 gpu_info.cc:224] [cudaMalloc] size=0.248535 MB, result=0
I0524 15:02:55.859779 1151823 gpu_info.cc:224] [cudaMalloc] size=0.292969 MB, result=0
I0524 15:02:55.860016 1151823 gpu_info.cc:224] [cudaMalloc] size=0.292969 MB, result=0
I0524 15:02:55.860302 1151823 gpu_info.cc:224] [cudaMalloc] size=0.248535 MB, result=0
I0524 15:02:55.861150 1151823 stats.h:79] HostMemoryStatReserved0: Update current_value with 12, after update, current value = 12
I0524 15:02:55.861167 1151823 stats.h:79] HostMemoryStatAllocated0: Update current_value with 12, after update, current value = 12
I0524 15:02:55.861202 1151823 stats.h:79] HostMemoryStatReserved0: Update current_value with 4, after update, current value = 16
I0524 15:02:55.861207 1151823 stats.h:79] HostMemoryStatAllocated0: Update current_value with 4, after update, current value = 16
I0524 15:02:55.861232 1151823 stats.h:79] HostMemoryStatReserved0: Update current_value with 4, after update, current value = 20
I0524 15:02:55.861235 1151823 stats.h:79] HostMemoryStatAllocated0: Update current_value with 4, after update, current value = 20
Segmentation fault (core dumped)
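One way to narrow this down is to check whether the crash comes from the FastDeploy runtime rather than the exported model itself, for example by loading the UTC checkpoint through PaddleNLP's Taskflow, which uses Paddle Inference directly. This is a rough sketch, not the official deploy path; the schema labels and any checkpoint path are placeholders:

```python
# Isolation test (a sketch): run the UTC model via Taskflow instead of the
# FastDeploy-based deploy/python/infer.py. If this works, the segfault is more
# likely in the fastdeploy-gpu-python runtime or its CUDA/cuDNN bindings than
# in the exported model itself.
from paddlenlp import Taskflow

# "positive review" / "negative review" are placeholder labels; use your own schema.
# For a fine-tuned UTC checkpoint, a task_path pointing at your checkpoint
# directory (path assumed) can be passed as well.
cls = Taskflow("zero_shot_text_classification",
               schema=["positive review", "negative review"])
print(cls("This product works exactly as advertised."))
```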

goldwater668 commented 1 month ago

Sure enough, upgrading paddlepaddle, paddlenlp, and fastdeploy fixed it.

Irisnotiris commented 1 week ago

Sure enough, upgrading paddlepaddle, paddlenlp, and fastdeploy fixed it.

@goldwater668 Which versions did you upgrade to?

goldwater668 commented 1 week ago

Just upgrade to the latest versions.
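For anyone landing here after upgrading, a small sanity check is to confirm that Paddle Inference can load the exported static-graph model on GPU at all before re-running the deploy script. This is a sketch with assumed file paths, not the project's official procedure:

```python
# Post-upgrade sanity check (a sketch): load the exported UTC model with Paddle
# Inference on GPU. The model/params paths are assumptions; point them at your
# own export directory.
from paddle.inference import Config, create_predictor

config = Config("checkpoint/export/model.pdmodel",
                "checkpoint/export/model.pdiparams")
config.enable_use_gpu(100, 0)  # 100 MB initial GPU workspace on device 0

predictor = create_predictor(config)
print("model inputs:", predictor.get_input_names())
```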