patrickloeber / pytorch-chatbot

Simple chatbot implementation with PyTorch.
MIT License

An attempt has been made to start a new process before the current process has finished its bootstrapping phase. #2

Closed. nikhilkharade closed this issue 4 years ago.

nikhilkharade commented 4 years ago

Python: 3.7, PyTorch: latest

Running train.py raises the following error in each of the two spawned DataLoader worker processes:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python37\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Python37\lib\multiprocessing\spawn.py", line 114, in _main
    prepare(preparation_data)
  File "C:\Python37\lib\multiprocessing\spawn.py", line 225, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\Python37\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
    run_name="__mp_main__")
  File "C:\Python37\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "C:\Python37\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "C:\Python37\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\Acer\Documents\GitHub\PyTorch-Chatbot-Basic\train.py", line 97, in <module>
    for (words, labels) in train_loader:
  File "C:\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__
    w.start()
  File "C:\Python37\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Python37\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Python37\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Python37\lib\multiprocessing\popen_spawn_win32.py", line 46, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "C:\Python37\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
    _check_not_importing_main()
  File "C:\Python37\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
    is not going to be frozen to produce an executable.''')
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.

Traceback (most recent call last):
  File "C:\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 761, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
  File "C:\Python37\lib\multiprocessing\queues.py", line 105, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 97, in <module>
    for (words, labels) in train_loader:
  File "C:\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__
  File "C:\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 841, in _next_data
    idx, data = self._get_data()
  File "C:\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 808, in _get_data
    success, data = self._try_get_data()
  File "C:\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 774, in _try_get_data
    raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
RuntimeError: DataLoader worker (pid(s) 13208, 10108) exited unexpectedly
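
For reference, the guard that the RuntimeError above describes would look roughly like this in train.py. This is only a sketch: the actual dataset and training loop from the repo are not shown, and the TensorDataset here is just a placeholder for the real chatbot dataset.

```python
# Minimal sketch of the "__main__" guard the RuntimeError suggests, assuming
# train.py currently iterates over the DataLoader at module level.
# The TensorDataset below is only a stand-in for the real chatbot dataset.
import torch
from torch.utils.data import DataLoader, TensorDataset

batch_size = 8
dataset = TensorDataset(torch.randn(100, 10), torch.randint(0, 5, (100,)))


def main():
    # Worker processes are only started from inside the guarded entry point,
    # so spawning them on Windows no longer re-runs the training code.
    train_loader = DataLoader(dataset=dataset, batch_size=batch_size,
                              shuffle=True, num_workers=2)
    for (words, labels) in train_loader:
        pass  # training step would go here


if __name__ == '__main__':
    main()
```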

patrickloeber commented 4 years ago

Try setting num_workers to 0:

    train_loader = DataLoader(dataset=dataset, batch_size=batch_size, shuffle=True, num_workers=0)
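
For context, a minimal runnable sketch of that fix (with a dummy TensorDataset standing in for the chatbot dataset and an assumed batch size) would be:

```python
# Sketch of the suggested fix: num_workers=0 keeps data loading in the main
# process, so Windows never spawns worker processes at all.
# TensorDataset here is only a placeholder for the real dataset in train.py.
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(100, 10), torch.randint(0, 5, (100,)))
train_loader = DataLoader(dataset=dataset, batch_size=8,
                          shuffle=True, num_workers=0)

for (words, labels) in train_loader:
    pass  # training step would go here
```

With num_workers=0 the multiprocessing spawn path is avoided entirely, which is usually fine for a dataset this small; alternatively, keeping workers requires wrapping the training loop in an `if __name__ == '__main__':` guard as shown above.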