KaiyangZhou / deep-person-reid

Torchreid: Deep learning person re-identification in PyTorch.
https://kaiyangzhou.github.io/deep-person-reid/
MIT License

AttributeError: Can't get attribute 'NewDataset' on <module '__main__' (built-in)> #341

Open kshatadit opened 4 years ago

kshatadit commented 4 years ago

Dear Kaiyang, thank you so much for sharing your repo, I appreciate all the efforts you've taken

I have come across an issue that I am struggling to understand.

I created a small dataset meant only for testing purposes: 74 training images across 5 camera IDs, 10 gallery images across 4 camera IDs, and 2 query images from 2 camera IDs, with just 2 person IDs in total.

The process I followed was:

  1. Created the NewDataset class and required lists (train, query, gallery), following your instructions:
    
    from __future__ import absolute_import
    from __future__ import print_function
    from __future__ import division

    import sys
    import os
    import os.path as osp

    from torchreid.data import ImageDataset

    class NewDataset(ImageDataset):
        dataset_dir = 'newimages'

        def __init__(self, root='', **kwargs):
            self.root = osp.abspath(osp.expanduser(root))
            self.dataset_dir = osp.join(self.root, self.dataset_dir)
            # paths to the train dataset:

    train = [('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-3.jpg','0','0'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-4.jpg','1','0'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-5.jpg','1','0'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-6.jpg','1','1'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-7.jpg','1','1'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-8.jpg','1','1'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-9.jpg','1','1'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-10.jpg','1','1'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-11.jpg','1','1'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-12.jpg','1','1'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-13.jpg','0','1'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-14.jpg','0','1'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-15.jpg','0','1'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-16.jpg','0','1'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-17.jpg','0','1'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-18.jpg','0','1'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-19.jpg','0','1'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-20.jpg','0','1'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-21.jpg','0','1'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-22.jpg','0','1'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-23.jpg','1','2'),('C:/Users/Aditya 
Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-25.jpg','1','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-26.jpg','1','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-27.jpg','1','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-28.jpg','1','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-29.jpg','1','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-30.jpg','1','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-31.jpg','1','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-32.jpg','1','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-33.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-34.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-35.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-36.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-37.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-38.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-39.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-40.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-42.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-43.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-44.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-45.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-46.jpg','0','2'),('C:/Users/Aditya 
Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-47.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-48.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-49.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-50.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-52.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-53.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-55.jpg','1','3'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-56.jpg','1','3'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-57.jpg','1','3'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-59.jpg','1','3'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-60.jpg','1','3'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-61.jpg','1','3'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-62.jpg','0','3'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-63.jpg','0','3'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-65.jpg','0','3'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-66.jpg','0','3'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-67.jpg','0','3'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-68.jpg','0','3'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-69.jpg','0','3'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-70.jpg','0','3'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-71.jpg','0','3'),('C:/Users/Aditya 
Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-72.jpg','1','4'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-73.jpg','1','4'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-74.jpg','1','4'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-76.jpg','1','4'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-78.jpg','0','4'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-79.jpg','0','4'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-80.jpg','0','4'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-81.jpg','0','4'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-82.jpg','0','4'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-83.jpg','0','4'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/-85.jpg','0','4')]
    query = [('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/query/-111.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/query/-333.jpg','1','0')]
    gallery = [('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/gallery/-1.jpg','1','0'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/gallery/-2.jpg','0','0'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/gallery/-24.jpg','1','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/gallery/-41.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/gallery/-51.jpg','0','2'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/gallery/-54.jpg','1','3'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/gallery/-58.jpg','1','3'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/gallery/-64.jpg','0','3'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/gallery/-75.jpg','1','4'),('C:/Users/Aditya Kshatriya/Capstone/pytorchreid/deep-person-reid/newimages/gallery/-84.jpg','0','4')]
    super(NewDataset, self).__init__(train, query, gallery, **kwargs)
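As an aside, hand-writing 74 tuples is error-prone; a list like `train` can also be built programmatically. A minimal sketch, assuming a hypothetical filename scheme `<pid>_<camid>_<index>.jpg` (not the naming used above), with pid and camid as integers:

```python
import os.path as osp

def build_split(image_dir, filenames):
    """Build (img_path, pid, camid) tuples from names like '0_3_17.jpg'.

    pid and camid are returned as int; the filename scheme here is
    purely illustrative.
    """
    data = []
    for name in filenames:
        stem = osp.splitext(name)[0]
        pid, camid, _ = stem.split('_')
        data.append((osp.join(image_dir, name), int(pid), int(camid)))
    return data

train = build_split('newimages', ['0_0_1.jpg', '1_0_2.jpg', '1_1_3.jpg'])
print(train)
```

Generating the lists this way also makes it easy to keep train, query, and gallery consistent when images are added or removed.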

2. Registered the dataset

torchreid.data.register_image_dataset('new_dataset', NewDataset)


3. Created the data manager

datamanager = torchreid.data.ImageDataManager(
    root='deep-person-reid',
    sources='new_dataset'
)

It gave the appropriate output:

=> Loaded NewDataset

  subset   | # ids | # images | # cameras
  ---------------------------------------
  train    |     2 |       74 |         5
  query    |     2 |        2 |         2
  gallery  |     2 |       10 |         4

=> Loading test (target) dataset
=> Loaded NewDataset

  subset   | # ids | # images | # cameras
  ---------------------------------------
  train    |     2 |       74 |         5
  query    |     2 |        2 |         2
  gallery  |     2 |       10 |         4

**** Summary ****
  source          : ['new_dataset1']
  source datasets : 1
  source ids      : 2
  source images   : 74
  source cameras  : 5
  target          : ['new_dataset1']



4. Next, I created the model, optimizer, and scheduler

model = torchreid.models.build_model(
    name='resnet50',
    num_classes=datamanager.num_train_pids,
    loss='softmax',
    pretrained=True
)

model = model.cuda()

optimizer = torchreid.optim.build_optimizer(
    model,
    optim='adam',
    lr=0.0003
)

scheduler = torchreid.optim.build_lr_scheduler(
    optimizer,
    lr_scheduler='single_step',
    stepsize=20
)


5. Next, building the engine

engine = torchreid.engine.ImageSoftmaxEngine(
    datamanager,
    model,
    optimizer=optimizer,
    scheduler=scheduler,
    label_smooth=True
)

6. Running the engine

engine.run(
    save_dir='log/resnet50',
    max_epoch=60,
    eval_freq=10,
    print_freq=10,
    test_only=False
)


After running the engine code, I get this error:

Start training
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\Aditya Kshatriya\AppData\Local\conda\conda\envs\torchreid\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\Aditya Kshatriya\AppData\Local\conda\conda\envs\torchreid\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
AttributeError: Can't get attribute 'NewDataset' on <module '__main__' (built-in)>

(the same traceback is printed once for each of the four worker processes)

Traceback (most recent call last):
  File "C:\Users\Aditya Kshatriya\AppData\Local\conda\conda\envs\torchreid\lib\site-packages\torch\utils\data\dataloader.py", line 511, in _try_get_batch
    data = self.data_queue.get(timeout=timeout)
  File "C:\Users\Aditya Kshatriya\AppData\Local\conda\conda\envs\torchreid\lib\queue.py", line 178, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<ipython-input>", line 6, in <module>
  File "C:\Users\Aditya Kshatriya\Capstone\pytorchreid\deep-person-reid\torchreid\engine\engine.py", line 196, in run
    open_layers=open_layers
  File "C:\Users\Aditya Kshatriya\Capstone\pytorchreid\deep-person-reid\torchreid\engine\engine.py", line 250, in train
    for self.batch_idx, data in enumerate(self.train_loader):
  File "C:\Users\Aditya Kshatriya\AppData\Local\conda\conda\envs\torchreid\lib\site-packages\torch\utils\data\dataloader.py", line 576, in __next__
    idx, batch = self._get_batch()
  File "C:\Users\Aditya Kshatriya\AppData\Local\conda\conda\envs\torchreid\lib\site-packages\torch\utils\data\dataloader.py", line 543, in _get_batch
    success, data = self._try_get_batch()
  File "C:\Users\Aditya Kshatriya\AppData\Local\conda\conda\envs\torchreid\lib\site-packages\torch\utils\data\dataloader.py", line 519, in _try_get_batch
    raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
RuntimeError: DataLoader worker (pid(s) 5920, 2788, 22620, 10548) exited unexpectedly


I also tried using pre-trained weights for testing only; the same error appears even when I run with test_only=True in engine.run().

I am not able to understand what exactly the issue is. I tried to search for it elsewhere but could not find a solid solution. Could you please look into this and share your thoughts?
Thanks a ton!!
KaiyangZhou commented 4 years ago

pid and camid are supposed to contain int rather than str; check this: https://kaiyangzhou.github.io/deep-person-reid/user_guide.html#use-your-own-dataset
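For anyone hitting the same mistake: existing string-labelled tuples can be coerced with a one-line comprehension. A minimal sketch (the tiny `train` list here is illustrative, with paths shortened; nothing Torchreid-specific is involved):

```python
# (img_path, pid, camid) tuples whose labels were written as strings
train = [('newimages/-3.jpg', '0', '0'), ('newimages/-4.jpg', '1', '0')]

# Coerce pid and camid to int, leaving the path untouched
train = [(path, int(pid), int(camid)) for path, pid, camid in train]
print(train)  # [('newimages/-3.jpg', 0, 0), ('newimages/-4.jpg', 1, 0)]
```

The same comprehension applies unchanged to the query and gallery lists.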

kshatadit commented 4 years ago

You are right! I must have missed that somehow while following the documentation. I changed all the strings to integers. However, after running the engine.run() block I still got the following error:

##### Evaluating new_dataset (source) #####
Extracting features from query set ...
---------------------------------------------------------------------------
Empty                                     Traceback (most recent call last)
c:\users\aditya kshatriya\appdata\local\conda\conda\envs\torchreid\lib\site-packages\torch\utils\data\dataloader.py in _try_get_batch(self, timeout)
    510         try:
--> 511             data = self.data_queue.get(timeout=timeout)
    512             return (True, data)

c:\users\aditya kshatriya\appdata\local\conda\conda\envs\torchreid\lib\queue.py in get(self, block, timeout)
    177                     if remaining <= 0.0:
--> 178                         raise Empty
    179                     self.not_empty.wait(remaining)

Empty: 

During handling of the above exception, another exception occurred:

RuntimeError                              Traceback (most recent call last)
<ipython-input-11-c191e60e38d6> in <module>
      4     eval_freq=10,
      5     print_freq=10,
----> 6     test_only=True
      7 )

~\Capstone\pytorchreid\deep-person-reid\torchreid\engine\engine.py in run(self, save_dir, max_epoch, start_epoch, print_freq, fixbase_epoch, open_layers, start_eval, eval_freq, test_only, dist_metric, normalize_feature, visrank, visrank_topk, use_metric_cuhk03, ranks, rerank)
    178                 use_metric_cuhk03=use_metric_cuhk03,
    179                 ranks=ranks,
--> 180                 rerank=rerank
    181             )
    182             return

~\Capstone\pytorchreid\deep-person-reid\torchreid\engine\engine.py in test(self, epoch, dist_metric, normalize_feature, visrank, visrank_topk, save_dir, use_metric_cuhk03, ranks, rerank)
    342                 use_metric_cuhk03=use_metric_cuhk03,
    343                 ranks=ranks,
--> 344                 rerank=rerank
    345             )
    346 

c:\users\aditya kshatriya\appdata\local\conda\conda\envs\torchreid\lib\site-packages\torch\autograd\grad_mode.py in decorate_no_grad(*args, **kwargs)
     41         def decorate_no_grad(*args, **kwargs):
     42             with self:
---> 43                 return func(*args, **kwargs)
     44         return decorate_no_grad
     45 

~\Capstone\pytorchreid\deep-person-reid\torchreid\engine\engine.py in _evaluate(self, epoch, dataset_name, query_loader, gallery_loader, dist_metric, normalize_feature, visrank, visrank_topk, save_dir, use_metric_cuhk03, ranks, rerank)
    384 
    385         print('Extracting features from query set ...')
--> 386         qf, q_pids, q_camids = _feature_extraction(query_loader)
    387         print('Done, obtained {}-by-{} matrix'.format(qf.size(0), qf.size(1)))
    388 

~\Capstone\pytorchreid\deep-person-reid\torchreid\engine\engine.py in _feature_extraction(data_loader)
    367         def _feature_extraction(data_loader):
    368             f_, pids_, camids_ = [], [], []
--> 369             for batch_idx, data in enumerate(data_loader):
    370                 imgs, pids, camids = self.parse_data_for_eval(data)
    371                 if self.use_gpu:

c:\users\aditya kshatriya\appdata\local\conda\conda\envs\torchreid\lib\site-packages\torch\utils\data\dataloader.py in __next__(self)
    574         while True:
    575             assert (not self.shutdown and self.batches_outstanding > 0)
--> 576             idx, batch = self._get_batch()
    577             self.batches_outstanding -= 1
    578             if idx != self.rcvd_idx:

c:\users\aditya kshatriya\appdata\local\conda\conda\envs\torchreid\lib\site-packages\torch\utils\data\dataloader.py in _get_batch(self)
    541         elif self.pin_memory:
    542             while self.pin_memory_thread.is_alive():
--> 543                 success, data = self._try_get_batch()
    544                 if success:
    545                     return data

c:\users\aditya kshatriya\appdata\local\conda\conda\envs\torchreid\lib\site-packages\torch\utils\data\dataloader.py in _try_get_batch(self, timeout)
    517             if not all(w.is_alive() for w in self.workers):
    518                 pids_str = ', '.join(str(w.pid) for w in self.workers if not w.is_alive())
--> 519                 raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
    520             if isinstance(e, queue.Empty):
    521                 return (False, None)

RuntimeError: DataLoader worker (pid(s) 3720, 7960, 14348, 1180) exited unexpectedly
KaiyangZhou commented 4 years ago

This seems to be the data loader problem.

What do you see when you do this

for batch in datamanager.train_loader:
    imgs = batch[0]
    pids = batch[1]
    camids = batch[2]
    print(imgs.shape, pids.shape, camids.shape)
    print(len(batch))
    break
kshatadit commented 4 years ago

When I do

for batch in datamanager.train_loader:
    imgs = batch[0]
    pids = batch[1]
    camids = batch[2]
    print(imgs.shape, pids.shape, camids.shape)
    print(len(batch))
    break

I get this

BrokenPipeError                           Traceback (most recent call last)
<ipython-input-5-35bdfd5f639f> in <module>
----> 1 for batch in datamanager.train_loader:
      2     imgs = batch[0]
      3     pids = batch[1]
      4     camids = batch[2]
      5     print(imgs.shape, pids.shape, camids.shape)

c:\users\aditya kshatriya\appdata\local\conda\conda\envs\torchreid\lib\site-packages\torch\utils\data\dataloader.py in __iter__(self)
    191 
    192     def __iter__(self):
--> 193         return _DataLoaderIter(self)
    194 
    195     def __len__(self):

c:\users\aditya kshatriya\appdata\local\conda\conda\envs\torchreid\lib\site-packages\torch\utils\data\dataloader.py in __init__(self, loader)
    467                 #     before it starts, and __del__ tries to join but will get:
    468                 #     AssertionError: can only join a started process.
--> 469                 w.start()
    470                 self.index_queues.append(index_queue)
    471                 self.workers.append(w)

c:\users\aditya kshatriya\appdata\local\conda\conda\envs\torchreid\lib\multiprocessing\process.py in start(self)
    110                'daemonic processes are not allowed to have children'
    111         _cleanup()
--> 112         self._popen = self._Popen(self)
    113         self._sentinel = self._popen.sentinel
    114         # Avoid a refcycle if the target function holds an indirect

c:\users\aditya kshatriya\appdata\local\conda\conda\envs\torchreid\lib\multiprocessing\context.py in _Popen(process_obj)
    221     @staticmethod
    222     def _Popen(process_obj):
--> 223         return _default_context.get_context().Process._Popen(process_obj)
    224 
    225 class DefaultContext(BaseContext):

c:\users\aditya kshatriya\appdata\local\conda\conda\envs\torchreid\lib\multiprocessing\context.py in _Popen(process_obj)
    320         def _Popen(process_obj):
    321             from .popen_spawn_win32 import Popen
--> 322             return Popen(process_obj)
    323 
    324     class SpawnContext(BaseContext):

c:\users\aditya kshatriya\appdata\local\conda\conda\envs\torchreid\lib\multiprocessing\popen_spawn_win32.py in __init__(self, process_obj)
     87             try:
     88                 reduction.dump(prep_data, to_child)
---> 89                 reduction.dump(process_obj, to_child)
     90             finally:
     91                 set_spawning_popen(None)

c:\users\aditya kshatriya\appdata\local\conda\conda\envs\torchreid\lib\multiprocessing\reduction.py in dump(obj, file, protocol)
     58 def dump(obj, file, protocol=None):
     59     '''Replacement for pickle.dump() using ForkingPickler.'''
---> 60     ForkingPickler(file, protocol).dump(obj)
     61 
     62 #

BrokenPipeError: [Errno 32] Broken pipe

The same code works properly for the market1501 dataset though

KaiyangZhou commented 4 years ago

hmm, then it's clear that there is something wrong with your dataset

it's hard for me to debug in this situation

I'd suggest you check carefully whether you did something wrong somewhere, and have a look at https://github.com/KaiyangZhou/deep-person-reid/blob/master/torchreid/data/datasets/dataset.py#L12
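One likely root cause worth noting for later readers: on Windows, DataLoader workers are started with the "spawn" method, which re-imports `__main__` and unpickles the dataset in each child process. A class defined only in an interactive session cannot be looked up by module path there, which produces exactly the `AttributeError: Can't get attribute 'NewDataset'` above. A minimal stdlib-only sketch of the mechanism (class and function names are illustrative, not from Torchreid); the usual workarounds are defining `NewDataset` in an importable `.py` file, guarding the training code with `if __name__ == '__main__':`, or setting the data manager's worker count to 0 so no subprocesses are spawned:

```python
import pickle

# A top-level class pickles by reference: the receiving process
# re-imports the module and looks the class up by name.
class TopLevel:
    def __init__(self, value):
        self.value = value

clone = pickle.loads(pickle.dumps(TopLevel(42)))
print(clone.value)  # 42

# A class that is not reachable via an importable qualified name
# cannot be pickled -- this is what breaks spawn-based workers when
# the dataset class only exists in the interactive __main__.
def make_hidden_class():
    class Hidden:
        pass
    return Hidden

try:
    pickle.dumps(make_hidden_class()())
except (pickle.PicklingError, AttributeError) as exc:
    print('pickling failed:', exc)
```

The same logic explains why the identical code works inside a normal script guarded by `if __name__ == '__main__':` but fails from a notebook or REPL on Windows.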