PaddlePaddle / Paddle

PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (PaddlePaddle core framework: high-performance single-machine and distributed training and cross-platform deployment for deep learning & machine learning)
http://www.paddlepaddle.org/
Apache License 2.0

About from_dlpack #47039

Closed: zsy0828 closed 1 week ago

zsy0828 commented 2 years ago

Please ask your question

When combining cupy with paddle, in every batch I convert the cupy data with from_dlpack(a.toDlpack()), where a is a cupy.ndarray. Doing this makes GPU memory usage explode: after a batch finishes, the no-longer-used portion is not automatically freed (as far as I can tell). How can I avoid this? Thanks. Note: neither pytorch nor tensorflow shows this behavior.
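For reference, a minimal sketch of the per-batch pattern described above, with explicit cleanup added as a possible mitigation. The `del` plus `paddle.device.cuda.empty_cache()` / `free_all_blocks()` calls are an assumed workaround, not a confirmed fix for the growth reported here:

```python
import cupy as cp
import paddle
from paddle.utils.dlpack import from_dlpack

# Sketch of the per-batch conversion described in the question.
for step in range(10):
    a = cp.random.random((1000, 64), dtype=cp.float32)  # cupy.ndarray on the GPU
    t = from_dlpack(a.toDlpack())                       # hand the buffer to a paddle.Tensor
    # ... use `t` for this batch ...
    del t, a
    # Hypothetical mitigation: ask both allocators to release cached blocks.
    paddle.device.cuda.empty_cache()
    cp.get_default_memory_pool().free_all_blocks()
```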

paddle-bot[bot] commented 2 years ago

Hi! We've received your issue and will arrange for technicians to answer your question as soon as possible; please be patient. Please double-check that you have provided a clear problem description, reproduction code, environment & version, and error messages. You may also look for an answer in the official API docs, FAQ, historical Github Issues, and the AI community. Have a nice day!

WorgenZhang commented 2 years ago

Hi, we are checking with the relevant developers about this interface; please wait a moment.

zsy0828 commented 2 years ago

Test code:

```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

import cupy as cp
import paddle
from paddle.utils.dlpack import from_dlpack

def test1():
    # paddle behaves the same as cupy: its CUDAPlace index follows CUDA_VISIBLE_DEVICES
    for i in range(1000):
        a = paddle.rand((1000000, 602), dtype=cp.float32)
        print(a.place)

def test2():
    with cp.cuda.Device(0):
        # why?! GPU memory keeps increasing through the dlpack path!
        for i in range(100):
            a = cp.random.random((1000000, 602), dtype=cp.float32)
            # a = from_dlpack(a.toDlpack())
            b = paddle.Tensor(a)
            print(b.place)
            del b

test2()
```
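For what it's worth, a small diagnostic sketch that makes the growth measurable from both allocators' sides; `paddle.device.cuda.memory_allocated` and the CuPy memory-pool counters are assumed to be available in the installed versions:

```python
import cupy as cp
import paddle
from paddle.utils.dlpack import from_dlpack

pool = cp.get_default_memory_pool()
for i in range(10):
    a = cp.random.random((1000000, 602), dtype=cp.float32)
    b = from_dlpack(a.toDlpack())
    del a, b
    # If Paddle's counter climbs while CuPy's stays flat, the ownership
    # transfer through the dlpack capsule is the likely suspect.
    print(i,
          "cupy used:", pool.used_bytes(),
          "paddle allocated:", paddle.device.cuda.memory_allocated())
```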