Open watty-exclamationMark opened 3 years ago
Hello! I'll have to double-check the dataloader used during testing; it should not be using multiple workers.
For now, if you want to test models, can you try with iNNfer instead? It should be a lot easier to use for producing results with existing models.
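In the meantime, a quick way to rule the workers out is to force single-process loading for the test sets. A minimal sketch with plain torch.utils.data (the class and function names here are illustrative, not the actual traiNNer ones):

```python
from torch.utils.data import DataLoader, Dataset


class FolderDataset(Dataset):
    """Illustrative stand-in for a test dataset such as SingleDataset."""

    def __init__(self, paths, transform=None):
        self.paths = paths
        self.transform = transform  # may be a locally defined closure

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        sample = self.paths[idx]  # a real dataset would load an image here
        return self.transform(sample) if self.transform else sample


def make_test_loader(dataset):
    # num_workers=0 keeps loading in the main process, so the dataset never
    # has to be pickled into worker processes and the "Can't pickle local
    # object" error cannot occur.
    return DataLoader(dataset, batch_size=1, shuffle=False, num_workers=0)
```

If the test dataset blocks in test_sr.yml accept a workers option the way the training blocks do, setting it to 0 there should have the same effect, but I have not checked the exact key name.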
21-08-20 15:26:58.678 - INFO: Dataset [SingleDataset - seta] is created.
21-08-20 15:26:58.678 - INFO: Number of test_1 images in [seta]: 100
21-08-20 15:26:58.678 - INFO: Dataset [SingleDataset - setb] is created.
21-08-20 15:26:58.678 - INFO: Number of test_2 images in [setb]: 100
21-08-20 15:26:58.709 - INFO: AMP library available
21-08-20 15:27:03.014 - INFO: Loading pretrained model for G [C:\Users\User\Desktop\traiNNer-master\traiNNer-master\codes\experiments\pretrained_models\4x_RRDB_ESRGAN.pth]
21-08-20 15:27:03.400 - INFO: Network G structure: DataParallel - RRDBNet, with parameters: 16,697,987
21-08-20 15:27:03.400 - INFO: Model [SRModel] created.
21-08-20 15:27:03.400 - INFO: Testing [seta]...
Traceback (most recent call last):
  File "test.py", line 253, in <module>
    main()
  File "test.py", line 249, in main
    test_loop(model, opt, dataloaders, data_params)
  File "test.py", line 120, in test_loop
    for data in dataloader:
  File "C:\Users\User\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 359, in __iter__
    return self._get_iterator()
  File "C:\Users\User\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 305, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\User\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 918, in __init__
    w.start()
  File "C:\Users\User\AppData\Local\Programs\Python\Python38\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Users\User\AppData\Local\Programs\Python\Python38\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\User\AppData\Local\Programs\Python\Python38\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Users\User\AppData\Local\Programs\Python\Python38\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\User\AppData\Local\Programs\Python\Python38\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'get_totensor.<locals>.<lambda>'

C:\Users\User\Desktop\traiNNer-master\traiNNer-master\codes>Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\User\AppData\Local\Programs\Python\Python38\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Users\User\AppData\Local\Programs\Python\Python38\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
The above error message comes up when I run "python test.py -opt options/sr/test_sr.yml". I modified the yml to specify the image and model paths, but the error still appears. How can I run test.py? Why does this error message appear? I don't know how to run traiNNer...
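For reference, the AttributeError above is not specific to traiNNer: Windows starts DataLoader workers with the "spawn" method, which pickles the whole dataset (including its transform), and a callable defined inside another function cannot be pickled. A self-contained sketch that reproduces the same message (the factory below is a hypothetical stand-in for get_totensor, not the real code):

```python
import pickle


def make_totensor():
    """Hypothetical stand-in for a factory like get_totensor that builds
    and returns a locally defined transform."""
    def to_tensor(img):
        return img  # placeholder for the real conversion
    return to_tensor


transform = make_totensor()

try:
    pickle.dumps(transform)  # what spawn-based DataLoader workers must do
except (AttributeError, pickle.PicklingError) as err:
    # Prints something like:
    #   Can't pickle local object 'make_totensor.<locals>.to_tensor'
    # which matches the error in the traceback above; with num_workers=0
    # this pickling step never happens, so the error goes away.
    print(err)
```

The "EOFError: Ran out of input" in the second traceback is just a knock-on effect: the parent process failed while pickling, so the spawned worker process found nothing on its input pipe.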