qubvel-org / segmentation_models.pytorch

Semantic segmentation models with 500+ pretrained convolutional and transformer-based backbones.
https://smp.readthedocs.io/
MIT License

run example without cuda #227

Closed ghost closed 4 years ago

ghost commented 4 years ago

Hi, I don't have CUDA, so I ran the example https://github.com/qubvel/segmentation_models.pytorch/blob/master/examples/cars%20segmentation%20(camvid).ipynb on the CPU by changing `DEVICE = 'cuda'` to `DEVICE = 'cpu'`. Unfortunately, I got the following error:

```
Epoch: 0
train:   0%|          | 0/46 [00:00<?, ?it/s]

---------------------------------------------------------------------------
BrokenPipeError                           Traceback (most recent call last)
<ipython-input> in <module>
      6
      7     print('\nEpoch: {}'.format(i))
----> 8     train_logs = train_epoch.run(train_loader)
      9     valid_logs = valid_epoch.run(valid_loader)
     10

~\Anaconda3\lib\site-packages\segmentation_models_pytorch\utils\train.py in run(self, dataloader)
     43
     44         with tqdm(dataloader, desc=self.stage_name, file=sys.stdout, disable=not (self.verbose)) as iterator:
---> 45             for x, y in iterator:
     46                 x, y = x.to(self.device), y.to(self.device)
     47                 loss, y_pred = self.batch_update(x, y)

~\Anaconda3\lib\site-packages\tqdm\std.py in __iter__(self)
   1079                 """), fp_write=getattr(self.fp, 'write', sys.stderr.write))
   1080
-> 1081         for obj in iterable:
   1082             yield obj
   1083             # Update and possibly print the progressbar.

~\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py in __iter__(self)
    277             return _SingleProcessDataLoaderIter(self)
    278         else:
--> 279             return _MultiProcessingDataLoaderIter(self)
    280
    281     @property

~\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py in __init__(self, loader)
    717             #     before it starts, and __del__ tries to join but will get:
    718             #     AssertionError: can only join a started process.
--> 719             w.start()
    720             self._index_queues.append(index_queue)
    721             self._workers.append(w)

~\Anaconda3\lib\multiprocessing\process.py in start(self)
    110                'daemonic processes are not allowed to have children'
    111         _cleanup()
--> 112         self._popen = self._Popen(self)
    113         self._sentinel = self._popen.sentinel
    114         # Avoid a refcycle if the target function holds an indirect

~\Anaconda3\lib\multiprocessing\context.py in _Popen(process_obj)
    221     @staticmethod
    222     def _Popen(process_obj):
--> 223         return _default_context.get_context().Process._Popen(process_obj)
    224
    225 class DefaultContext(BaseContext):

~\Anaconda3\lib\multiprocessing\context.py in _Popen(process_obj)
    320         def _Popen(process_obj):
    321             from .popen_spawn_win32 import Popen
--> 322             return Popen(process_obj)
    323
    324     class SpawnContext(BaseContext):

~\Anaconda3\lib\multiprocessing\popen_spawn_win32.py in __init__(self, process_obj)
     87             try:
     88                 reduction.dump(prep_data, to_child)
---> 89                 reduction.dump(process_obj, to_child)
     90             finally:
     91                 set_spawning_popen(None)

~\Anaconda3\lib\multiprocessing\reduction.py in dump(obj, file, protocol)
     58 def dump(obj, file, protocol=None):
     59     '''Replacement for pickle.dump() using ForkingPickler.'''
---> 60     ForkingPickler(file, protocol).dump(obj)
     61
     62 #

BrokenPipeError: [Errno 32] Broken pipe
```

Any idea how to manage this issue? Thanks!
JinyuanShao commented 3 years ago

Hi, did you solve this problem? I am facing the same one, and I am using a GPU. How did you solve it?

JinyuanShao commented 3 years ago

Oh, I set the `num_workers` parameter to 0 and it works, because I run the code in a Windows environment.
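The workaround above can be sketched as follows. This is a minimal, self-contained example: the tiny `TensorDataset` is a hypothetical stand-in for the notebook's CamVid dataset, and the shapes are arbitrary. On Windows, `DataLoader` workers are started with the `spawn` method rather than `fork`, so the dataset (and everything it references) must be picklable; when it is not, worker startup can die with `BrokenPipeError`. Setting `num_workers=0` loads batches in the main process and sidesteps multiprocessing entirely:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in for the notebook's CamVid dataset:
# 8 RGB "images" and 8 single-channel "masks".
images = torch.randn(8, 3, 32, 32)
masks = torch.zeros(8, 1, 32, 32)
dataset = TensorDataset(images, masks)

# num_workers=0 -> data is loaded in the main process, so no worker
# processes are spawned and nothing needs to be pickled. This is the
# fix described in the comment above for Windows environments.
train_loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=0)

for x, y in train_loader:
    print(x.shape, y.shape)
```

If you want to keep `num_workers > 0` on Windows, the usual alternative is to put the training loop under an `if __name__ == '__main__':` guard (required for spawned processes) and make sure the dataset and its transforms are picklable; `num_workers=0` is simply the quickest way to get the example running.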