It works when multiprocessing is disabled and GPU memory growth is turned on.
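For reference, a minimal sketch of the memory-growth half of that workaround in TF2 (run it before any GPU work; disabling the multiprocessing-based data loading in tf2_gnn/dpu_utils is a separate step and depends on your setup):

import tensorflow as tf

# Allocate GPU memory on demand instead of reserving it all up front.
for gpu in tf.config.experimental.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)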
I am hitting the same issue with Python 3.6.7 and TensorFlow 2.4 when running tf2_gnn_train RGCN PPI --max-epochs 10 data/ppi/.
For awareness, the issue only affected Windows (owing to the multiprocessing implementation, or lack thereof).
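To illustrate the mechanism (a standalone sketch with made-up names, not the actual tf2_gnn/dpu_utils code): on Windows, multiprocessing only has the spawn start method, which pickles the Process object, and a locally defined function or lambda cannot be pickled by reference. The snippet below reproduces the same "Can't pickle local object" error on any OS by forcing spawn:

import multiprocessing as mp

def make_worker():
    # Local (nested) function: pickle cannot refer to it by qualified name.
    def worker():
        print("working")
    return worker

if __name__ == "__main__":
    ctx = mp.get_context("spawn")  # spawn is the only start method on Windows; fork on Linux avoids this
    p = ctx.Process(target=make_worker())
    p.start()  # raises AttributeError: Can't pickle local object 'make_worker.<locals>.worker'
    p.join()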
Similar error for me on Windows 10, Python 3.6.5:
2021-06-10 18:44:04.837559: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: TypeError: cannot pickle 'generator' object
Traceback (most recent call last):
File "C:\Users\yogesh.kulkarni\AppData\Local\Continuum\anaconda3\envs\tf2\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 789, in get_iterator
return self._iterators[iterator_id]
KeyError: 0
Any resolution available?
Hello, I've been trying to run the test script with the following package versions: PYTHON=3.7, TENSORFLOW=2.1.0. I keep getting an error related to pickling (multiprocessing?).
FULL ERROR MESSAGE:
(ggn) C:\Users\rohan\gated-graph-network>tf2_gnn_train RGCN PPI ppi/
2020-10-04 17:41:57.115816: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
Setting random seed 0.
Trying to load task/model-specific default parameters from c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tf2_gnn\cli_utils\default_hypers\PPI_RGCN.json ... File found.
Dataset default parameters: {'max_nodes_per_batch': 10000, 'add_self_loop_edges': True, 'tie_fwd_bkwd_edges': False}
Loading data from ppi/.
Loading PPI train data from ppi/.
Loading PPI valid data from ppi/.
Model default parameters: {'gnn_aggregation_function': 'sum', 'gnn_message_activation_function': 'relu', 'gnn_message_activation_before_aggregation': False, 'gnn_hidden_dim': 16, 'gnn_use_target_state_as_input': False, 'gnn_normalize_by_num_incoming': True, 'gnn_num_edge_MLP_hidden_layers': 0, 'gnn_message_calculation_class': 'RGCN', 'gnn_initial_node_representation_activation': 'tanh', 'gnn_dense_intermediate_layer_activation': 'tanh', 'gnn_num_layers': 4, 'gnn_dense_every_num_layers': 2, 'gnn_residual_every_num_layers': 2, 'gnn_use_inter_layer_layernorm': False, 'gnn_layer_input_dropout_rate': 0.0, 'gnn_global_exchange_mode': 'gru', 'gnn_global_exchange_every_num_layers': 2, 'gnn_global_exchange_weighting_fun': 'softmax', 'gnn_global_exchange_num_heads': 4, 'gnn_global_exchange_dropout_rate': 0.2, 'optimizer': 'Adam', 'learning_rate': 0.001, 'learning_rate_warmup_steps': None, 'learning_rate_decay_steps': None, 'momentum': 0.85, 'rmsprop_rho': 0.98, 'gradient_clip_value': None, 'gradient_clip_norm': None, 'gradient_clip_global_norm': None, 'use_intermediate_gnn_results': False}
Model parameters overridden by task/model defaults: {'gnn_num_layers': 4, 'gnn_hidden_dim': 320, 'gnn_use_target_state_as_input': False, 'gnn_normalize_by_num_incoming': True, 'gnn_num_edge_MLP_hidden_layers': 0, 'gnn_layer_input_dropout_rate': 0.1, 'gnn_dense_every_num_layers': 10000, 'gnn_residual_every_num_layers': 10000, 'gnn_global_exchange_every_num_layers': 10000, 'gnn_use_inter_layer_layernorm': False, 'gnn_initial_node_representation_activation': 'tanh', 'gnn_dense_intermediate_layer_activation': 'tanh', 'gnn_message_activation_function': 'ReLU', 'gnn_aggregation_function': 'sum'}
2020-10-04 17:42:14.657442: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-10-04 17:42:14.683044: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce RTX 2080 Ti computeCapability: 7.5 coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 11.00GiB deviceMemoryBandwidth: 573.69GiB/s
2020-10-04 17:42:14.683431: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-10-04 17:42:14.686151: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-10-04 17:42:14.688197: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-10-04 17:42:14.689052: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-10-04 17:42:14.691229: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-10-04 17:42:14.693451: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-10-04 17:42:14.698189: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-10-04 17:42:14.698417: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-10-04 17:42:14.698869: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2020-10-04 17:42:14.699306: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce RTX 2080 Ti computeCapability: 7.5 coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 11.00GiB deviceMemoryBandwidth: 573.69GiB/s
2020-10-04 17:42:14.699454: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-10-04 17:42:14.699704: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-10-04 17:42:14.700079: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-10-04 17:42:14.700351: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-10-04 17:42:14.700609: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-10-04 17:42:14.700921: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-10-04 17:42:14.701170: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-10-04 17:42:14.701426: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-10-04 17:42:15.235636: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-04 17:42:15.235744: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0
2020-10-04 17:42:15.236221: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N
2020-10-04 17:42:15.236648: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9528 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5)
Dataset parameters: {"max_nodes_per_batch": 8000, "add_self_loop_edges": true, "tie_fwd_bkwd_edges": false}
Model parameters: {"gnn_aggregation_function": "sum", "gnn_message_activation_function": "ReLU", "gnn_message_activation_before_aggregation": false, "gnn_hidden_dim": 320, "gnn_use_target_state_as_input": false, "gnn_normalize_by_num_incoming": true, "gnn_num_edge_MLP_hidden_layers": 0, "gnn_message_calculation_class": "RGCN", "gnn_initial_node_representation_activation": "tanh", "gnn_dense_intermediate_layer_activation": "tanh", "gnn_num_layers": 4, "gnn_dense_every_num_layers": 10000, "gnn_residual_every_num_layers": 10000, "gnn_use_inter_layer_layernorm": false, "gnn_layer_input_dropout_rate": 0.1, "gnn_global_exchange_mode": "gru", "gnn_global_exchange_every_num_layers": 10000, "gnn_global_exchange_weighting_fun": "softmax", "gnn_global_exchange_num_heads": 4, "gnn_global_exchange_dropout_rate": 0.2, "optimizer": "Adam", "learning_rate": 0.001, "learning_rate_warmup_steps": null, "learning_rate_decay_steps": null, "momentum": 0.85, "rmsprop_rho": 0.98, "gradient_clip_value": null, "gradient_clip_norm": null, "gradient_clip_global_norm": null, "use_intermediate_gnn_results": false}
2020-10-04 17:42:15.852147: W tensorflow/core/framework/op_kernel.cc:1643] Unknown: AttributeError: Can't pickle local object 'DoubleBufferedIterator.__init__..'
Traceback (most recent call last):
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 673, in get_iterator
return self._iterators[iterator_id]
KeyError: 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\ops\script_ops.py", line 236, in __call__
ret = func(*args)
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 789, in generator_py_func
values = next(generator_state.get_iterator(iterator_id))
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 675, in get_iterator
iterator = iter(self._generator(*self._args.pop(iterator_id)))
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tf2_gnn\data\graph_dataset.py", line 284, in
self.graph_batch_iterator(data_fold)
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\dpu_utils\utils\iterators.py", line 149, in __init__
self.__worker_process_inner.start()
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
reduction.dump(process_obj, to_child)
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'DoubleBufferedIterator.__init__..'
2020-10-04 17:42:15.852822: W tensorflow/core/framework/op_kernel.cc:1655] OP_REQUIRES failed at iterator_ops.cc:941 : Unknown: AttributeError: Can't pickle local object 'DoubleBufferedIterator.__init__..'
Traceback (most recent call last):
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 673, in get_iterator
return self._iterators[iterator_id]
KeyError: 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\ops\script_ops.py", line 236, in __call__
ret = func(*args)
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 789, in generator_py_func
values = next(generator_state.get_iterator(iterator_id))
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 675, in get_iterator
iterator = iter(self._generator(*self._args.pop(iterator_id)))
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tf2_gnn\data\graph_dataset.py", line 284, in
self.graph_batch_iterator(data_fold)
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\dpu_utils\utils\iterators.py", line 149, in __init__
self.__worker_process_inner.start()
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
reduction.dump(process_obj, to_child)
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'DoubleBufferedIterator.__init__..'
2020-10-04 17:42:15.853081: W tensorflow/core/framework/op_kernel.cc:1643] Unknown: KeyError: 0
Traceback (most recent call last):
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 673, in get_iterator
return self._iterators[iterator_id]
KeyError: 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\ops\script_ops.py", line 236, in __call__
ret = func(*args)
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 789, in generator_py_func
values = next(generator_state.get_iterator(iterator_id))
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 675, in get_iterator
iterator = iter(self._generator(*self._args.pop(iterator_id)))
KeyError: 0
Traceback (most recent call last):
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\eager\context.py", line 1897, in execution_mode
2020-10-04 17:42:15.855495: W tensorflow/core/framework/op_kernel.cc:1643] Unknown: KeyError: 0
Traceback (most recent call last):
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 673, in get_iterator
return self._iterators[iterator_id]
KeyError: 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\ops\script_ops.py", line 236, in __call__
ret = func(*args)
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 789, in generator_py_func
values = next(generator_state.get_iterator(iterator_id))
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 675, in get_iterator
iterator = iter(self._generator(*self._args.pop(iterator_id)))
KeyError: 0
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\iterator_ops.py", line 659, in _next_internal 2020-10-04 17:42:15.856780: W tensorflow/core/framework/op_kernel.cc:1643] Unknown: KeyError: 0 Traceback (most recent call last):
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 673, in get_iterator return self._iterators[iterator_id]
KeyError: 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\ops\script_ops.py", line 236, in call ret = func(*args)
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 789, in generator_py_func values = next(generator_state.get_iterator(iterator_id))
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 675, in get_iterator iterator = iter(self._generator(*self._args.pop(iterator_id)))
KeyError: 0
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\ops\gen_dataset_ops.py", line 2479, in iterator_get_next_sync _ops.raise_from_not_ok_status(e, name) File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\framework\ops.py", line 6606, in raise_from_not_ok_status six.raise_from(core._status_to_exception(e.code, message), None) File "", line 3, in raise_from
tensorflow.python.framework.errors_impl.UnknownError: AttributeError: Can't pickle local object 'DoubleBufferedIterator.init..'
Traceback (most recent call last):
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 673, in get_iterator
return self._iterators[iterator_id]
KeyError: 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\ops\script_ops.py", line 236, in __call__
ret = func(*args)
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 789, in generator_py_func
values = next(generator_state.get_iterator(iterator_id))
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 675, in get_iterator
iterator = iter(self._generator(*self._args.pop(iterator_id)))
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tf2_gnn\data\graph_dataset.py", line 284, in
self.graph_batch_iterator(data_fold)
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\dpu_utils\utils\iterators.py", line 149, in __init__
self.__worker_process_inner.start()
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
reduction.dump(process_obj, to_child)
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'DoubleBufferedIterator.__init__..'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\rohan\anaconda3\envs\ggn\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\rohan\anaconda3\envs\ggn\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\rohan\Anaconda3\envs\ggn\Scripts\tf2_gnn_train.exe\__main__.py", line 7, in <module>
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tf2_gnn\cli\train.py", line 33, in run
lambda: run_train_from_args(args, hyperdrive_hyperparameter_overrides), args.debug
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\dpu_utils\utils\debughelper.py", line 21, in run_and_debug
func()
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tf2_gnn\cli\train.py", line 33, in
lambda: run_train_from_args(args, hyperdrive_hyperparameter_overrides), args.debug
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tf2_gnn\cli_utils\training_utils.py", line 153, in run_train_from_args
aml_run=aml_run,
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tf2_gnn\cli_utils\trainingutils.py", line 50, in train
, _, initial_valid_results = model.run_one_epoch(valid_data, training=False, quiet=quiet)
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tf2_gnn\models\graph_task_model.py", line 366, in run_one_epoch
for step, (batch_features, batch_labels) in enumerate(dataset):
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\iterator_ops.py", line 630, in next__
return self.next()
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\iterator_ops.py", line 674, in next
return self._next_internal()
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\iterator_ops.py", line 665, in _next_internal
return structure.from_compatible_tensor_list(self._element_spec, ret)
File "c:\users\rohan\anaconda3\envs\ggn\lib\contextlib.py", line 130, in exit
self.gen.throw(type, value, traceback)
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\eager\context.py", line 1900, in execution_mode
executor_new.wait()
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\eager\executor.py", line 67, in wait
pywrap_tensorflow.TFE_ExecutorWaitForAllPendingNodes(self._handle)
tensorflow.python.framework.errors_impl.UnknownError: AttributeError: Can't pickle local object 'DoubleBufferedIterator.__init__..'
Traceback (most recent call last):
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 673, in get_iterator
return self._iterators[iterator_id]
KeyError: 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\ops\script_ops.py", line 236, in __call__
ret = func(*args)
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 789, in generator_py_func
values = next(generator_state.get_iterator(iterator_id))
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 675, in get_iterator
iterator = iter(self._generator(*self._args.pop(iterator_id)))
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\tf2_gnn\data\graph_dataset.py", line 284, in
self.graph_batch_iterator(data_fold)
File "c:\users\rohan\anaconda3\envs\ggn\lib\site-packages\dpu_utils\utils\iterators.py", line 149, in __init__
self.__worker_process_inner.start()
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
reduction.dump(process_obj, to_child)
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'DoubleBufferedIterator.__init__..'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "c:\users\rohan\anaconda3\envs\ggn\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
2020-10-04 17:42:15.906046: W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Cancelled: Operation was cancelled