Jeryi-Sun / SPACES-Pytorch

A PyTorch reimplementation of Su Jianlin's (苏神) SPACES
MIT License

Error in the parallel_apply function in extract_convert.py #9

Closed xdnjust closed 2 years ago

xdnjust commented 2 years ago

Hi, when I run your code, extract_convert.py fails at the line data = convert(data). It looks like a multiprocessing problem? How should I fix it?

Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\D00477~1\AppData\Local\Temp\jieba.cache
Loading model cost 0.541 seconds.
Prefix dict has been built successfully.
转换数据:   0%|          | 0/4047 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "D:/2. 2021AI项目/5_RPA项目/8_自动摘要生成/5_CAIL2020/1_SPACES_pytorch/extract_convert.py", line 83, in <module>
    data = convert(data)
  File "D:/2. 2021AI项目/5_RPA项目/8_自动摘要生成/5_CAIL2020/1_SPACES_pytorch/extract_convert.py", line 69, in convert
    max_queue_size=200
  File "D:\2. 2021AI项目\5_RPA项目\8_自动摘要生成\5_CAIL2020\1_SPACES_pytorch\snippets.py", line 430, in parallel_apply
    return [d for i, d in generator]
  File "D:\2. 2021AI项目\5_RPA项目\8_自动摘要生成\5_CAIL2020\1_SPACES_pytorch\snippets.py", line 430, in <listcomp>
    return [d for i, d in generator]
  File "D:\2. 2021AI项目\5_RPA项目\8_自动摘要生成\5_CAIL2020\1_SPACES_pytorch\snippets.py", line 503, in parallel_apply_generator
    pool = Pool(workers, worker_step, (in_queue, out_queue))
  File "D:\software\anaconda\anaconda3\envs\pytorch\lib\multiprocessing\context.py", line 119, in Pool
    context=self.get_context())
  File "D:\software\anaconda\anaconda3\envs\pytorch\lib\multiprocessing\pool.py", line 174, in __init__
    self._repopulate_pool()
  File "D:\software\anaconda\anaconda3\envs\pytorch\lib\multiprocessing\pool.py", line 239, in _repopulate_pool
    w.start()
  File "D:\software\anaconda\anaconda3\envs\pytorch\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "D:\software\anaconda\anaconda3\envs\pytorch\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "D:\software\anaconda\anaconda3\envs\pytorch\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "D:\software\anaconda\anaconda3\envs\pytorch\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'parallel_apply_generator.<locals>.worker_step'
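The traceback shows why this happens: on Windows, multiprocessing uses the "spawn" start method, so each worker process is created by pickling its initializer, and worker_step is a function defined inside parallel_apply_generator, which cannot be pickled. A minimal sketch (not code from this repo) that reproduces the failure mode and shows why a thread pool avoids it:

```python
# Minimal sketch, not from the repo: a local (nested) function used as a Pool
# initializer cannot be pickled, which is exactly what Windows' spawn start
# method needs to do when starting worker processes. A thread pool
# (multiprocessing.dummy) shares memory with the parent and pickles nothing.
from multiprocessing import Pool as ProcessPool
from multiprocessing.dummy import Pool as ThreadPool


def run(dummy=False):
    def worker_init():                     # local function -> not picklable under spawn
        pass

    Pool = ThreadPool if dummy else ProcessPool
    with Pool(2, worker_init) as pool:     # on Windows the process pool fails right here
        return pool.map(abs, range(-3, 3))


if __name__ == '__main__':
    print(run(dummy=True))                 # threads: prints [3, 2, 1, 0, 1, 2]
    # run(dummy=False)                     # Windows: AttributeError: Can't pickle local object ...
```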

Jeryi-Sun commented 2 years ago

Sorry, I haven't debugged the parallel code; it is still Su Jianlin's (苏神) original version. You can try debugging it yourself.

xdnjust commented 2 years ago

I asked Su Jianlin (苏神), and his answer is as follows: on Windows, add the argument dummy=True to the parallel_apply call yourself. Multiprocessing is problematic on Windows, so the only option is to switch to multithreading.
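A sketch of what the change might look like, assuming parallel_apply in snippets.py follows the bert4keras-style signature with a dummy keyword that switches it to a thread pool; the function name convert_one and the workers value are illustrative placeholders, not the repo's actual code (only max_queue_size=200 is visible in the traceback):

```python
# Sketch only, under the assumptions stated above.
from tqdm import tqdm
from snippets import parallel_apply   # the repo's helper module


def convert_one(text):
    """Hypothetical per-sample conversion; stands in for the real worker."""
    return text


def convert(data):
    return parallel_apply(
        func=convert_one,
        iterable=tqdm(data, desc=u'转换数据'),
        workers=10,                    # illustrative value
        max_queue_size=200,            # value visible in the traceback
        dummy=True,                    # the suggested fix: threads instead of processes
    )
```

With dummy=True nothing is pickled, so the local worker_step inside parallel_apply_generator no longer matters, at the cost of running the conversion in threads rather than separate processes.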