karpathy / minGPT

A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
MIT License
20.17k stars · 2.51k forks

Broken Pipe running "play_char" notebook #12

Open loganriggs opened 4 years ago

loganriggs commented 4 years ago

When running input cell 12 in the play_char Jupyter notebook, I got this error:

```
BrokenPipeError                           Traceback (most recent call last)
      6     num_workers=4)
      7 trainer = Trainer(model, train_dataset, None, tconf)
----> 8 trainer.train()

~\Documents\GitHub\minGPT\mingpt\trainer.py in train(self)
    123         for epoch in range(config.max_epochs):
    124
--> 125             run_epoch('train')
    126             if self.test_dataset is not None:
    127                 run_epoch('test')

~\Documents\GitHub\minGPT\mingpt\trainer.py in run_epoch(split)
     77
     78             losses = []
---> 79             pbar = tqdm(enumerate(loader), total=len(loader)) if is_train else enumerate(loader)
     80             for it, (x, y) in pbar:
     81

C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py in __iter__(self)
    277             return _SingleProcessDataLoaderIter(self)
    278         else:
--> 279             return _MultiProcessingDataLoaderIter(self)
    280
    281     @property

C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py in __init__(self, loader)
    717             #     before it starts, and __del__ tries to join but will get:
    718             #     AssertionError: can only join a started process.
--> 719             w.start()
    720             self._index_queues.append(index_queue)
    721             self._workers.append(w)

C:\ProgramData\Anaconda3\lib\multiprocessing\process.py in start(self)
    110                'daemonic processes are not allowed to have children'
    111         _cleanup()
--> 112         self._popen = self._Popen(self)
    113         self._sentinel = self._popen.sentinel
    114         # Avoid a refcycle if the target function holds an indirect

C:\ProgramData\Anaconda3\lib\multiprocessing\context.py in _Popen(process_obj)
    221     @staticmethod
    222     def _Popen(process_obj):
--> 223         return _default_context.get_context().Process._Popen(process_obj)
    224
    225 class DefaultContext(BaseContext):

C:\ProgramData\Anaconda3\lib\multiprocessing\context.py in _Popen(process_obj)
    320         def _Popen(process_obj):
    321             from .popen_spawn_win32 import Popen
--> 322             return Popen(process_obj)
    323
    324     class SpawnContext(BaseContext):

C:\ProgramData\Anaconda3\lib\multiprocessing\popen_spawn_win32.py in __init__(self, process_obj)
     87         try:
     88             reduction.dump(prep_data, to_child)
---> 89             reduction.dump(process_obj, to_child)
     90         finally:
     91             set_spawning_popen(None)

C:\ProgramData\Anaconda3\lib\multiprocessing\reduction.py in dump(obj, file, protocol)
     58 def dump(obj, file, protocol=None):
     59     '''Replacement for pickle.dump() using ForkingPickler.'''
---> 60     ForkingPickler(file, protocol).dump(obj)
     61
     62 #

BrokenPipeError: [Errno 32] Broken pipe
```
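Editor's note on what the traceback shows: on Windows there is no `fork()`, so `multiprocessing` uses the `spawn` start method (the `popen_spawn_win32` frames above). To start each DataLoader worker, the parent pickles the worker's state with `reduction.dump()` and writes it through a pipe to the child; if anything in that state cannot be pickled, the child dies mid-handshake and the parent sees `BrokenPipeError`. A minimal stdlib sketch of the pickling constraint (no PyTorch required; `double` and `can_pickle` are illustrative names, not minGPT code):

```python
import pickle

def double(x):
    # A plain module-level function: picklable, so 'spawn' can ship it to a child.
    return x * 2

def can_pickle(obj):
    """Return True if obj survives the pickling step that 'spawn' performs."""
    try:
        pickle.dumps(obj)
        return True
    except Exception:
        return False

# A function defined at module level pickles fine...
print(can_pickle(double))        # True
# ...but a lambda (or anything similar defined in a notebook cell's local
# scope and captured by the dataset) does not, which is one common way the
# reduction.dump() call at the bottom of the traceback can fail on Windows.
print(can_pickle(lambda x: x))   # False
```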
loganriggs commented 4 years ago

After applying William Falcon's pull request, I still get the same broken pipe as above.

I was able to get it working without apex, which I believe means not using the GPU (CUDA).

yupaul commented 3 years ago

On Windows, change num_workers=4 to num_workers=0 in the notebook.
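Concretely, that means editing the notebook cell that builds the trainer so that batches are loaded in the main process and no worker processes are spawned. A sketch of the changed cell (argument names follow the minGPT `TrainerConfig` seen in the traceback; the other keyword values and the `model`/`train_dataset` objects come from earlier cells in the notebook, so this fragment is not standalone):

```python
from mingpt.trainer import Trainer, TrainerConfig

# num_workers=0 makes the DataLoader load batches in the main process,
# so Windows never has to spawn (and pickle state for) worker processes.
tconf = TrainerConfig(max_epochs=2, batch_size=512, learning_rate=6e-4,
                      num_workers=0)
trainer = Trainer(model, train_dataset, None, tconf)
trainer.train()
```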

nikitozeg commented 2 years ago

Hi guys, you are awesome!