MIVRC / SeaNet-PyTorch

This repository is a PyTorch implementation of "Soft-edge Assisted Network for Single Image Super-Resolution" (IEEE TIP 2020).

Train Problem #1

Closed: yinyiyu closed this issue 3 years ago

yinyiyu commented 4 years ago

Thanks for your great work. When I run the code I hit the problem below; can you give me some solutions or suggestions? I also found that this code only runs on PyTorch versions earlier than 1.0.0, and the GPU_ids = 4 setting in the code is also a problem.

Making model...
Preparing loss function:
    1.000 L1
[Epoch 1]   Learning rate: 1.00e-4
rm: cannot remove 'experiment/SEAN_X2/log.txt': Device or resource busy
Making model...
Preparing loss function:
    1.000 L1
[Epoch 1]   Learning rate: 1.00e-4
Traceback (most recent call last):
  File "main.py", line 19, in <module>
    t.train()
  File "E:\rqc\github\SeaNet-PyTorch\Train\trainer.py", line 45, in train
    for batch, (lr, edge, hr, _, idx_scale) in enumerate(self.loader_train):
  File "E:\rqc\github\SeaNet-PyTorch\Train\dataloader.py", line 144, in __iter__
    return _MSDataLoaderIter(self)
  File "E:\rqc\github\SeaNet-PyTorch\Train\dataloader.py", line 117, in __init__
    w.start()
  File "C:\Users\lenovo\Anaconda3\envs\py4\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Users\lenovo\Anaconda3\envs\py4\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\lenovo\Anaconda3\envs\py4\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\lenovo\Anaconda3\envs\py4\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\lenovo\Anaconda3\envs\py4\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\lenovo\Anaconda3\envs\py4\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\lenovo\Anaconda3\envs\py4\lib\multiprocessing\spawn.py", line 114, in _main
    prepare(preparation_data)
  File "C:\Users\lenovo\Anaconda3\envs\py4\lib\multiprocessing\spawn.py", line 225, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\Users\lenovo\Anaconda3\envs\py4\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
    run_name="__mp_main__")
  File "C:\Users\lenovo\Anaconda3\envs\py4\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "C:\Users\lenovo\Anaconda3\envs\py4\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "C:\Users\lenovo\Anaconda3\envs\py4\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "E:\rqc\github\SeaNet-PyTorch\Train\main.py", line 19, in <module>
    t.train()
  File "E:\rqc\github\SeaNet-PyTorch\Train\trainer.py", line 45, in train
    for batch, (lr, edge, hr, _, idx_scale) in enumerate(self.loader_train):
  File "E:\rqc\github\SeaNet-PyTorch\Train\dataloader.py", line 144, in __iter__
    return _MSDataLoaderIter(self)
  File "E:\rqc\github\SeaNet-PyTorch\Train\dataloader.py", line 117, in __init__
    w.start()
  File "C:\Users\lenovo\Anaconda3\envs\py4\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Users\lenovo\Anaconda3\envs\py4\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\lenovo\Anaconda3\envs\py4\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\lenovo\Anaconda3\envs\py4\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "C:\Users\lenovo\Anaconda3\envs\py4\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
    _check_not_importing_main()
  File "C:\Users\lenovo\Anaconda3\envs\py4\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
    is not going to be frozen to produce an executable.''')
RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.
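
This RuntimeError is the standard Windows multiprocessing failure: DataLoader workers are started with the spawn method, so the training script is re-imported in each worker, and any code at module level (including the call to t.train() on line 19 of main.py) runs again before the worker finishes bootstrapping. Guarding the entry point as the message suggests should resolve it without downgrading PyTorch. The sketch below is a hypothetical rearrangement of Train/main.py, assuming an EDSR-style layout (option, data, model, loss, trainer, utility modules); the exact module names and constructor calls are assumptions, not the repository's verbatim code.

    # Hypothetical sketch of Train/main.py with the entry point guarded for
    # Windows' spawn start method. Module names and constructor signatures
    # follow the EDSR-style layout this repo appears to use and are
    # assumptions, not the file's verbatim contents.
    import torch

    import utility
    import data
    import model
    import loss
    from option import args
    from trainer import Trainer


    def main():
        torch.manual_seed(args.seed)
        checkpoint = utility.checkpoint(args)

        if checkpoint.ok:
            loader = data.Data(args)   # builds the training/testing DataLoaders
            net = model.Model(args, checkpoint)
            loss_fn = loss.Loss(args, checkpoint) if not args.test_only else None
            t = Trainer(args, loader, net, loss_fn, checkpoint)
            while not t.terminate():
                t.train()
                t.test()
            checkpoint.done()


    if __name__ == '__main__':
        # Required on Windows: each DataLoader worker re-imports this module,
        # and the guard stops it from recursively launching training.
        main()

Alternatively, configuring the data loader to use zero worker processes avoids spawning subprocesses on Windows entirely, at the cost of slower data loading.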