harveyslash / Facial-Similarity-with-Siamese-Networks-in-Pytorch

Implementing Siamese networks with a contrastive loss for similarity learning
https://hackernoon.com/one-shot-learning-with-siamese-networks-in-pytorch-8ddaab10340e
MIT License

BrokenPipe Error ...Urgent #22

Closed riyaj8888 closed 4 years ago

riyaj8888 commented 5 years ago

When I run the following segment of code from your Siamese network code, I get this error. Environment: Anaconda, Python 3.6, Windows 10:

    vis_dataloader = DataLoader(siamese_dataset,
                                shuffle=True,
                                num_workers=8,
                                batch_size=8)
    dataiter = iter(vis_dataloader)

    example_batch = next(dataiter)
    concatenated = torch.cat((example_batch[0], example_batch[1]), 0)
    imshow(torchvision.utils.make_grid(concatenated))
    print(example_batch[2].numpy())


    BrokenPipeError                           Traceback (most recent call last)
    <ipython-input> in <module>()
          3                             num_workers=8,
          4                             batch_size=8)
    ----> 5 dataiter = iter(vis_dataloader)
          6
          7

    C:\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py in __iter__(self)
        817
        818     def __iter__(self):
    --> 819         return _DataLoaderIter(self)
        820
        821     def __len__(self):

    C:\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py in __init__(self, loader)
        558             #     before it starts, and __del__ tries to join but will get:
        559             #     AssertionError: can only join a started process.
    --> 560             w.start()
        561             self.index_queues.append(index_queue)
        562             self.workers.append(w)

    C:\Anaconda3\lib\multiprocessing\process.py in start(self)
        103                'daemonic processes are not allowed to have children'
        104         _cleanup()
    --> 105         self._popen = self._Popen(self)
        106         self._sentinel = self._popen.sentinel
        107         _children.add(self)

    C:\Anaconda3\lib\multiprocessing\context.py in _Popen(process_obj)
        221     @staticmethod
        222     def _Popen(process_obj):
    --> 223         return _default_context.get_context().Process._Popen(process_obj)
        224
        225 class DefaultContext(BaseContext):

    C:\Anaconda3\lib\multiprocessing\context.py in _Popen(process_obj)
        320         def _Popen(process_obj):
        321             from .popen_spawn_win32 import Popen
    --> 322             return Popen(process_obj)
        323
        324 class SpawnContext(BaseContext):

    C:\Anaconda3\lib\multiprocessing\popen_spawn_win32.py in __init__(self, process_obj)
         63             try:
         64                 reduction.dump(prep_data, to_child)
    ---> 65                 reduction.dump(process_obj, to_child)
         66             finally:
         67                 set_spawning_popen(None)

    C:\Anaconda3\lib\multiprocessing\reduction.py in dump(obj, file, protocol)
         58 def dump(obj, file, protocol=None):
         59     '''Replacement for pickle.dump() using ForkingPickler.'''
    ---> 60     ForkingPickler(file, protocol).dump(obj)
         61
         62 #

    BrokenPipeError: [Errno 32] Broken pipe
parthgoe1 commented 5 years ago

Just leave out the num_workers argument and you'll be fine. Try this code:

    vis_dataloader = DataLoader(siamese_dataset,
                                shuffle=True,
                                batch_size=8)
    dataiter = iter(vis_dataloader)

    example_batch = next(dataiter)
    concatenated = torch.cat((example_batch[0], example_batch[1]), 0)
    imshow(torchvision.utils.make_grid(concatenated))
    print(example_batch[2].numpy())

BubuNunu commented 5 years ago

I added the following `if __name__ == "__main__":` guard to solve the BrokenPipeError:

if __name__ == "__main__":
    vis_dataloader = DataLoader(siamese_dataset,
                                shuffle=True,
                                num_workers=8,
                                batch_size=8)
    dataiter = iter(vis_dataloader)

    example_batch = next(dataiter)
    concatenated = torch.cat((example_batch[0], example_batch[1]), 0)
    imshow(torchvision.utils.make_grid(concatenated))
    print(example_batch[2].numpy())
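For context on why the guard helps: on Windows, multiprocessing uses the "spawn" start method, so each DataLoader worker re-imports the main script; without the guard, every worker re-runs the DataLoader setup and tries to spawn workers of its own, and the pickling handshake dies with `BrokenPipeError`. A minimal, self-contained sketch of the pattern (the `PairDataset` and `load_one_batch` names are illustrative, not from the repo):

```python
import torch
from torch.utils.data import Dataset, DataLoader


class PairDataset(Dataset):
    """Toy stand-in for siamese_dataset: yields (img0, img1, label) triples."""

    def __len__(self):
        return 32

    def __getitem__(self, idx):
        img0 = torch.randn(1, 8, 8)             # fake image pair
        img1 = torch.randn(1, 8, 8)
        label = torch.tensor([float(idx % 2)])  # 0 = same class, 1 = different
        return img0, img1, label


def load_one_batch(num_workers=2):
    loader = DataLoader(PairDataset(),
                        shuffle=True,
                        num_workers=num_workers,
                        batch_size=8)
    return next(iter(loader))


if __name__ == "__main__":
    # The guard keeps this call out of the module body, so spawned
    # workers that re-import this file do not recursively start
    # their own DataLoader workers.
    img0, img1, label = load_one_batch()
    print(img0.shape, img1.shape, label.shape)
```

The same guard applies to the training loop: on Windows, any code that creates a DataLoader with `num_workers > 0` must run under it.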
Guoyuer commented 5 years ago

None of these solutions worked for me. I finally changed every num_workers parameter to 0, and it works well.

harveyslash commented 4 years ago

I am updating the repo to the latest version of PyTorch and changing the dataset. The development can be viewed in the ms-celeb branch.