Open LeeYongchao opened 4 years ago
After all this time, there is still no fix for this.
My workaround is fairly crude: run the inference in a child process, monitor that process, and kill and restart it whenever it blows up RAM or GPU memory.
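For reference, a minimal sketch of that child-process-plus-watchdog approach (the helper names, the 60 s timeout, and the restart policy are illustrative assumptions, not taken from this thread):

import queue
import multiprocessing as mp

def _worker(tasks, results):
    # Import inside the child so the CUDA context belongs to the child process
    # and is fully released when the child exits or is killed.
    import paddlehub as hub
    ocr = hub.Module(name="chinese_ocr_db_crnn_server")
    while True:
        path = tasks.get()
        results.put(ocr.recognize_text(paths=[path], use_gpu=True))

def recognize_with_watchdog(paths, timeout=60):
    ctx = mp.get_context("spawn")   # spawn: each worker gets a fresh CUDA context
    tasks, results = ctx.Queue(), ctx.Queue()
    proc = ctx.Process(target=_worker, args=(tasks, results), daemon=True)
    proc.start()
    outputs = []
    for p in paths:
        tasks.put(p)
        try:
            outputs.append(results.get(timeout=timeout))
        except queue.Empty:
            # Worker hung or was OOM-killed: restart it with fresh queues so the
            # stuck task cannot leak into later results.
            proc.terminate()
            proc.join()
            tasks, results = ctx.Queue(), ctx.Queue()
            proc = ctx.Process(target=_worker, args=(tasks, results), daemon=True)
            proc.start()
            outputs.append(None)
    proc.terminate()
    return outputs

Calling recognize_with_watchdog(list_of_image_paths) then behaves like a normal loop, except an image that kills the worker just yields None instead of taking the whole service down.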
Same here. My GPU memory is small, and a slightly larger image just hangs the process. I asked about this before but can't find my post anymore, so I'm piggybacking on this thread. Please fix it; otherwise putting this on a server is out of the question and it's only usable for playing around on my own.
Yes, everyone has the same problem!!! Start hub serving and the model loads onto the GPU, taking about 500 MB. After a single request, GPU memory usage jumps to 4 GB, the result is returned, and the usage never drops back down.
Hi, has this been resolved? I'm running into it too.
Run the inference in a child process; when GPU memory blows up, kill it and restart.
At the time I just switched to a GPU with more memory.
A problem that has dragged on for years without a fix, and they still want to grow an ecosystem? Laughable.
The memory leak issue never seems to have been fixed. How is this supposed to become a tool the industry can actually use?
Version / environment info:
1) hub.__version__ '1.7.1'
2) paddle.__version__ '1.7.2'
3) ubuntu16.04
4) cuda10
5) paddle.fluid.install_check.run_check() output:
Running Verify Paddle Program ...
W0610 10:19:19.803442 8827 device_context.cc:237] Please NOTE: device: 0, CUDA Capability: 75, Driver API Version: 10.1, Runtime API Version: 9.0
W0610 10:19:19.805452 8827 device_context.cc:245] device: 0, cuDNN Version: 7.6.
Your Paddle works well on SINGLE GPU or CPU.
I0610 10:19:20.703788 8827 parallel_executor.cc:440] The Program will be executed on CUDA using ParallelExecutor, 1 cards are used, so 1 programs are executed in parallel.
I0610 10:19:20.704022 8827 build_strategy.cc:365] SeqOnlyAllReduceOps:0, num_trainers:1
I0610 10:19:20.704203 8827 parallel_executor.cc:307] Inplace strategy is enabled, when build_strategy.enable_inplace = True
I0610 10:19:20.704339 8827 parallel_executor.cc:375] Garbage collection strategy is enabled, when FLAGS_eager_delete_tensor_gb = 0
Your Paddle works well on MUTIPLE GPU or CPU.
Your Paddle is installed successfully! Let's start deep Learning with Paddle now
The code is very simple. First: export CUDA_VISIBLE_DEVICES=0

import os
import paddlehub as hub

ocr = hub.Module(name="chinese_ocr_db_crnn_server")
img_l = os.listdir('./static/images/')  # this directory holds 20 images containing text; if you have more GPU memory, add more
for img in img_l:
    _ = ocr.recognize_text(paths=['./static/images/' + img], use_gpu=True)
The error reported is:
Error Message Summary:
ResourceExhaustedError:
Out of memory error on GPU 0. Cannot allocate 29.883057MB memory on GPU 0, available memory is only 22.750000MB.
Please check whether there is any other process using GPU 0.
If no, please decrease the batch size of your model.
at (/paddle/paddle/fluid/memory/allocation/cuda_allocator.cc:69)
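For anyone who lands on the same ResourceExhaustedError, a rough way to bound the loop above is to isolate each call in a short-lived process, so all GPU memory is returned to the driver when each child exits. It is slower (the model reloads every time), and the 120 s timeout and queue handling below are my own assumptions, not something from this issue:

import os
import queue
import multiprocessing as mp

def _recognize_one(path, results):
    import paddlehub as hub  # import in the child so its CUDA context dies with it
    ocr = hub.Module(name="chinese_ocr_db_crnn_server")
    results.put(ocr.recognize_text(paths=[path], use_gpu=True))

if __name__ == "__main__":
    ctx = mp.get_context("spawn")
    img_dir = './static/images/'
    for img in os.listdir(img_dir):
        results = ctx.Queue()
        proc = ctx.Process(target=_recognize_one, args=(img_dir + img, results))
        proc.start()
        try:
            out = results.get(timeout=120)  # read before join to avoid a queue deadlock
        except queue.Empty:
            out = None                      # child crashed, e.g. OOM-killed
        proc.join()
        print(img, out)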