PaddlePaddle / Paddle

PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (PaddlePaddle core framework: high-performance single-machine and distributed training and cross-platform deployment for deep learning and machine learning)
http://www.paddlepaddle.org/
Apache License 2.0

DataLoader does not seem to be closed properly in some cases, and calling it many times causes "Resource temporarily unavailable" #42693

Open holyseven opened 2 years ago

holyseven commented 2 years ago

Describe the Bug

DataLoader does not shut down its thread properly; if DataLoader is used many times, the process eventually runs out of resources.

An instantiated DataLoader requests resources (it appears to start a thread) but never releases them properly, so eventually nothing more can be allocated and the program fails with RuntimeError: Resource temporarily unavailable.

I am not sure whether this is actually a bug, since this is not typical usage, but is there a way to avoid it?

from paddle.io import Dataset, DataLoader

# train_set is an existing paddle.io.Dataset instance.
train_loader = DataLoader(train_set, batch_size=2, shuffle=True)
# Every pass over train_loader builds a fresh iterator (and with it a worker
# thread); breaking out early means that iterator never reaches its cleanup.
for _ in range(10000):
    for i, e in enumerate(train_loader):
        print(_, i)
        break
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
/tmp/ipykernel_19543/3388343434.py in <module>
      1 for _ in range(1200):
      2     train_loader = DataLoader(train_set, batch_size=2, shuffle=True)
----> 3     for i, e in enumerate(train_loader):
      4         print(_, i)
      5         break

~/miniconda3/envs/pp2/lib/python3.7/site-packages/paddle/fluid/reader.py in __iter__(self)
    424     def __iter__(self):
    425         if self.num_workers == 0:
--> 426             return _DataLoaderIterSingleProcess(self)
    427         elif self._persistent_workers:
    428             if self._iterator is None:

~/miniconda3/envs/pp2/lib/python3.7/site-packages/paddle/fluid/dataloader/dataloader_iter.py in __init__(self, loader)
    168         self._blocking_queue_capacity = 1 * len(self._places)
    169 
--> 170         self._init_thread()
    171         self._shutdown = False
    172 

~/miniconda3/envs/pp2/lib/python3.7/site-packages/paddle/fluid/dataloader/dataloader_iter.py in _init_thread(self)
    188             self._blocking_queue, self._var_names, self._shapes, self._dtypes,
    189             self._need_check_feed, self._places, self._use_buffer_reader, True,
--> 190             self._pin_memory)
    191 
    192         self._thread = threading.Thread(

RuntimeError: Resource temporarily unavailable

Additional Supplementary Information

Similar to #38371.
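
If it helps triage: the traceback goes through _DataLoaderIterSingleProcess, which starts a threading.Thread, so it is presumably threads rather than child processes that accumulate. A minimal check, reusing train_set from the snippet above; if the threads are indeed leaked, the printed count keeps growing:

import threading

from paddle.io import DataLoader

train_loader = DataLoader(train_set, batch_size=2, shuffle=True)
for step in range(100):
    for i, e in enumerate(train_loader):
        break
    # Count of live threads in this process; it stays flat if iterator
    # threads are released, and grows steadily if they are leaked.
    print(step, threading.active_count())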

paddle-bot-old[bot] commented 2 years ago

Hi! We have received your issue and will arrange for engineers to respond as soon as possible; please be patient. Please double-check that you have provided a clear problem description, reproduction code, environment and version information, and the error message. You can also look for an answer in the official API documentation, the FAQ, past GitHub issues, and the AI community. Have a nice day!

lyuwenyu commented 2 years ago

Can you see the related processes with ps aux? Try killing them directly.

holyseven commented 2 years ago

Um... that's not really a solution.

When the Resource temporarily unavailable problem occurs, the terminal becomes unusable as well; bash keeps printing bash: fork: retry: Resource temporarily unavailable.

I am currently running this in a Jupyter notebook, and closing or restarting Jupyter clears the problem, but then the program cannot continue running.
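
For reference, bash: fork: retry failures like this usually mean the per-user process/thread limit has been hit. A small sketch (Linux only, standard library) of how to see which limit the session is running under:

import resource

# On Linux, RLIMIT_NPROC caps how many processes/threads this user may have;
# once leaked DataLoader threads approach it, fork() starts failing with
# "Resource temporarily unavailable".
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("RLIMIT_NPROC soft/hard:", soft, hard)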

heavengate commented 2 years ago

The DataLoader starts its thread and child processes when an epoch begins and releases them when the epoch ends. If your code breaks out of the loop midway, it never reaches the end-of-epoch logic that releases those processes and threads. What scenario requires you to repeatedly break out of the DataLoader for-loop?
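
In other words, letting each epoch run to completion does reach that release logic. A minimal sketch of that pattern, reusing train_loader from the reproduction above:

# No early break: every batch is consumed, so the DataLoader reaches its
# end-of-epoch cleanup and releases the worker thread on each pass.
for _ in range(10000):
    for i, e in enumerate(train_loader):
        pass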

holyseven commented 2 years ago

Scenario: porting the PyTorch influence function algorithm to Paddle, https://github.com/nimarb/pytorch_influence_functions/blob/master/pytorch_influence_functions/influence_function.py#L35-L52

For now I have worked around it like this:

# Create the iterator once and reuse it across outer steps, so only one
# worker thread is started instead of one per pass over the DataLoader.
iters = iter(train_loader)
for _ in range(1200):
    for i, e in enumerate(iters):
        print(_, i)
        break
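
One caveat with this workaround: after roughly len(train_loader) outer steps the single iterator is exhausted, and the inner loop then yields nothing and silently does no work. A sketch of a variant that restarts the iterator when it runs dry (same train_loader assumed; per the comment above, an iterator that finishes its epoch releases its thread, so the occasional restart stays cheap):

iters = iter(train_loader)
for step in range(1200):
    try:
        e = next(iters)
    except StopIteration:
        # The iterator finished a full epoch; start a fresh one and continue.
        iters = iter(train_loader)
        e = next(iters)
    print(step)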