opentld closed this issue 4 years ago.
I got the same error.
Try changing num_workers to 0 in run_imitator.py:191:

# num_workers=4
num_workers=0

It is probably caused by a compatibility problem with Python's multiprocessing on Windows.
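For context, here is a minimal stdlib-only reproduction of why num_workers > 0 fails on Windows (the names make_dataset and Config mirror the error message below, but the body is a hypothetical stand-in, not the repo's actual code). On Windows, worker processes are spawned rather than forked, so the dataset object must be pickled to reach the worker:

```python
import pickle

def make_dataset():
    # Hypothetical stand-in: defining a class inside a function makes it a
    # "local object" that pickle cannot reference by a module-level name.
    class Config:
        num_repeats = 5
    return Config()

cfg = make_dataset()
try:
    # This is essentially what a spawned DataLoader worker must do.
    pickle.dumps(cfg)
except AttributeError as err:
    print(err)  # Can't pickle local object 'make_dataset.<locals>.Config'
```

With num_workers=0 no worker process is spawned, so this pickling step never happens.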
@piaozhx Thanks, but now it's giving an out-of-memory error:
RuntimeError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 8.00 GiB total capacity; 6.18 GiB already allocated; 13.16 MiB free; 159.61 MiB cached)
Thanks! @piaozhx
I tried num_workers=0, but the error is the same
Error when running demo_view.py:
Personalization: meta cycle finetune...
load face model from assets/pretrains/sphere20a_20171020.pth
  0%|          | 0/5 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "demo_view.py", line 179, in <module>
    generate_orig_pose_novel_view_result(opt, src_path)
  File "demo_view.py", line 117, in generate_orig_pose_novel_view_result
    adaptive_personalize(opt, viewer, visualizer)
  File "E:\SourceCodes\tensorflow\Gans\impersonator-master\run_imitator.py", line 209, in adaptive_personalize
    imitator.post_personalize(opt.output_dir, loader, visualizer=None, verbose=False)
  File "E:\SourceCodes\tensorflow\Gans\impersonator-master\models\viewer.py", line 395, in post_personalize
    for i, sample in enumerate(data_loader):
  File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\site-packages\torch\utils\data\dataloader.py", line 278, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\site-packages\torch\utils\data\dataloader.py", line 682, in __init__
    w.start()
  File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'make_dataset.<locals>.Config'