Closed: jdgrk9 closed this issue 5 years ago
Looks like there is not enough memory to operate 16 subprocesses. Fixed by limiting max processes to 6.
Thanks for the prompt response.
I'm guessing I would need to edit that in SubprocessorBase.py in the utils folder. If so, would you be able to point out where in the script I would need to put the limit? I'm looking through it and I'm unable to find the relevant section; I am very new to Python. I thought I found the right area and typed in a ['6'], but all that did was drastically slow down how fast the alignments were loaded and give a whole different error.
I fixed it, just update from the GitHub folder.
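For anyone curious what capping the subprocess count looks like in principle, here is a minimal sketch. The names `MAX_PROCESSES` and `get_worker_count` are hypothetical, for illustration only; the actual fix is whatever landed in the updated repo.

```python
# Hypothetical sketch (not DeepFaceLab's actual code): cap the number of
# converter worker subprocesses so a 16-thread CPU does not spawn 16
# workers and exhaust RAM, as happened in this issue.
import multiprocessing

MAX_PROCESSES = 6  # ceiling matching the fix described above (assumption)

def get_worker_count(limit=MAX_PROCESSES):
    """Use one worker per logical core, but never more than `limit`."""
    return max(1, min(multiprocessing.cpu_count(), limit))
```

Each worker builds its own TensorFlow graph and buffers, so memory use scales roughly linearly with this count; lowering the ceiling trades conversion speed for headroom.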
Got it, the fix worked for me.
Thanks again.
I've tried this with the DF converter and the MIAF128 converter. When I select the color transfer option, the converter stalls and converts only the first handful of images. Then I get a large error printout, which I'll paste below. I am using a 1080 Ti with driver version 417.22. My CPU is a Ryzen 7 1800X. When I do not select the color transfer option, my conversions are successful.
Has anyone else run into this issue? I have found that when using the DF model this feature is very nice for transferring makeup. Hopefully I can get this working as intended.
=========================================
Choose mode: (1) hist match, (2) hist match bw, (3) seamless (default), (4) seamless hist match : 1
Masked hist match? [0..1] (default - model choice) : 1
Choose erode mask modifier [-100..100] (default 0) : 20
Choose blur mask modifier [-100..200] (default 0) : 15
Choose output face scale modifier [-50..50] (default 0) : 20
Transfer color from original DST image? [0..1] (default 0) : 1
Degrade color power of final image [0..100] (default 0) : 15
Export png with alpha channel? [0..1] (default 0) : 0
Running converter.
Loading model...
===== Model summary =====
== Model name: DF
== Current epoch: 266132
== Options:
== |== batch_size : 32
== |== multi_gpu : False
== |== created_vram_gb : 11.0
== Running on:
== |== [0 : GeForce GTX 1080 Ti]
Collecting alignments: 100%|███████████████████████████████████████████████████| 18670/18670 [00:06<00:00, 3071.04it/s]
Running on CPU0. Running on CPU1. Running on CPU2. Running on CPU3. Running on CPU4. Running on CPU5. Running on CPU6. Running on CPU7. Running on CPU8. Running on CPU9. Running on CPU10. Running on CPU11. Running on CPU12. Running on CPU13. Running on CPU14. Running on CPU15.
Converting:   0%|          | 0/25251 [00:00<?, ?it/s]
no faces found for 00114.png, copying without faces
no faces found for 00115.png, copying without faces
no faces found for 00116.png, copying without faces
no faces found for 00117.png, copying without faces
2018-12-08 00:50:45.675791: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.675793: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.675825: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.678253: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.678255: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.678267: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
Exception while process data [undefined]: Traceback (most recent call last):
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1292, in _do_call
    return fn(*args)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1277, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1367, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
	 [[{{node rgb_to_lab/srgb_to_xyz/truediv}} = Mul[T=DT_FLOAT, _grappler:ArithmeticOptimizer:MinimizeBroadcasts=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](ConstantFolding/rgb_to_lab/srgb_to_xyz/truediv_recip, rgb_to_lab/Reshape)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\SubprocessorBase.py", line 233, in subprocess
    result = self.onClientProcessData (data)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\mainscripts\Converter.py", line 170, in onClientProcessData
    image = self.converter.convert_face(image, image_landmarks, self.debug)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\models\ConverterMasked.py", line 207, in convert_face
    img_lab_l, img_lab_a, img_lab_b = np.split ( self.TFLabConverter.bgr2lab (img_bgr), 3, axis=-1 )
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 296, in bgr2lab
    return self.tf_session.run(self.lab_output_tensor, feed_dict={self.bgr_input_tensor: bgr})
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 887, in run
    run_metadata_ptr)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1110, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1286, in _do_run
    run_metadata)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1308, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
	 [[{{node rgb_to_lab/srgb_to_xyz/truediv}} = Mul[T=DT_FLOAT, _grappler:ArithmeticOptimizer:MinimizeBroadcasts=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](ConstantFolding/rgb_to_lab/srgb_to_xyz/truediv_recip, rgb_to_lab/Reshape)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Caused by op 'rgb_to_lab/srgb_to_xyz/truediv', defined at:
  File "<string>", line 1, in <module>
File "multiprocessing\spawn.py", line 105, in spawn_main
File "multiprocessing\spawn.py", line 118, in _main
File "multiprocessing\process.py", line 258, in _bootstrap
File "multiprocessing\process.py", line 93, in run
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\SubprocessorBase.py", line 233, in subprocess
result = self.onClientProcessData (data)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\mainscripts\Converter.py", line 170, in onClientProcessData
image = self.converter.convert_face(image, image_landmarks, self.debug)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\models\ConverterMasked.py", line 205, in convert_face
self.TFLabConverter = image_utils.TFLabConverter()
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 291, in __init__
self.lab_output_tensor = self.rgb_to_lab(self.tf_module, self.bgr_input_tensor)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 308, in rgb_to_lab
rgb_pixels = (srgb_pixels / 12.92 * linear_mask) + (((srgb_pixels + 0.055) / 1.055) ** 2.4) * exponential_mask
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\ops\math_ops.py", line 874, in binary_op_wrapper
return func(x, y, name=name)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\ops\math_ops.py", line 970, in _truediv_python3
return gen_math_ops.real_div(x, y, name=name)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 6370, in real_div
"RealDiv", x=x, y=y, name=name)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\ops.py", line 3272, in create_op
op_def=op_def)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\ops.py", line 1768, in __init__
self._traceback = tf_stack.extract_stack()
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
	 [[{{node rgb_to_lab/srgb_to_xyz/truediv}} = Mul[T=DT_FLOAT, _grappler:ArithmeticOptimizer:MinimizeBroadcasts=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](ConstantFolding/rgb_to_lab/srgb_to_xyz/truediv_recip, rgb_to_lab/Reshape)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Exception while process data [undefined]: Traceback (most recent call last):
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1292, in _do_call
    return fn(*args)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1277, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1367, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
	 [[{{node rgb_to_lab/srgb_to_xyz/Greater}} = Greater[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rgb_to_lab/Reshape, rgb_to_lab/srgb_to_xyz/LessEqual/y)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\SubprocessorBase.py", line 233, in subprocess
    result = self.onClientProcessData (data)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\mainscripts\Converter.py", line 170, in onClientProcessData
    image = self.converter.convert_face(image, image_landmarks, self.debug)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\models\ConverterMasked.py", line 207, in convert_face
    img_lab_l, img_lab_a, img_lab_b = np.split ( self.TFLabConverter.bgr2lab (img_bgr), 3, axis=-1 )
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 296, in bgr2lab
    return self.tf_session.run(self.lab_output_tensor, feed_dict={self.bgr_input_tensor: bgr})
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 887, in run
    run_metadata_ptr)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1110, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1286, in _do_run
    run_metadata)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1308, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
	 [[{{node rgb_to_lab/srgb_to_xyz/Greater}} = Greater[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rgb_to_lab/Reshape, rgb_to_lab/srgb_to_xyz/LessEqual/y)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Caused by op 'rgb_to_lab/srgb_to_xyz/Greater', defined at:
  File "<string>", line 1, in <module>
File "multiprocessing\spawn.py", line 105, in spawn_main
File "multiprocessing\spawn.py", line 118, in _main
File "multiprocessing\process.py", line 258, in _bootstrap
File "multiprocessing\process.py", line 93, in run
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\SubprocessorBase.py", line 233, in subprocess
result = self.onClientProcessData (data)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\mainscripts\Converter.py", line 170, in onClientProcessData
image = self.converter.convert_face(image, image_landmarks, self.debug)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\models\ConverterMasked.py", line 205, in convert_face
self.TFLabConverter = image_utils.TFLabConverter()
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 291, in __init__
self.lab_output_tensor = self.rgb_to_lab(self.tf_module, self.bgr_input_tensor)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 307, in rgb_to_lab
exponential_mask = tf.cast(srgb_pixels > 0.04045, dtype=tf.float32)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 3426, in greater
"Greater", x=x, y=y, name=name)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\ops.py", line 3272, in create_op
op_def=op_def)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\ops.py", line 1768, in __init__
self._traceback = tf_stack.extract_stack()
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
	 [[{{node rgb_to_lab/srgb_to_xyz/Greater}} = Greater[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rgb_to_lab/Reshape, rgb_to_lab/srgb_to_xyz/LessEqual/y)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
2018-12-08 00:50:45.820336: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.820376: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.820360: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.820386: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.845414: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
Exception while process data [undefined]: Traceback (most recent call last):
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1292, in _do_call
    return fn(*args)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1277, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1367, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
	 [[{{node rgb_to_lab/srgb_to_xyz/add}} = Add[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rgb_to_lab/Reshape, rgb_to_lab/srgb_to_xyz/add/y)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\SubprocessorBase.py", line 233, in subprocess
    result = self.onClientProcessData (data)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\mainscripts\Converter.py", line 170, in onClientProcessData
    image = self.converter.convert_face(image, image_landmarks, self.debug)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\models\ConverterMasked.py", line 207, in convert_face
    img_lab_l, img_lab_a, img_lab_b = np.split ( self.TFLabConverter.bgr2lab (img_bgr), 3, axis=-1 )
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 296, in bgr2lab
    return self.tf_session.run(self.lab_output_tensor, feed_dict={self.bgr_input_tensor: bgr})
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 887, in run
    run_metadata_ptr)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1110, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1286, in _do_run
    run_metadata)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1308, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
	 [[{{node rgb_to_lab/srgb_to_xyz/add}} = Add[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rgb_to_lab/Reshape, rgb_to_lab/srgb_to_xyz/add/y)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Caused by op 'rgb_to_lab/srgb_to_xyz/add', defined at:
  File "<string>", line 1, in <module>
File "multiprocessing\spawn.py", line 105, in spawn_main
File "multiprocessing\spawn.py", line 118, in _main
File "multiprocessing\process.py", line 258, in _bootstrap
File "multiprocessing\process.py", line 93, in run
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\SubprocessorBase.py", line 233, in subprocess
result = self.onClientProcessData (data)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\mainscripts\Converter.py", line 170, in onClientProcessData
image = self.converter.convert_face(image, image_landmarks, self.debug)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\models\ConverterMasked.py", line 205, in convert_face
self.TFLabConverter = image_utils.TFLabConverter()
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 291, in __init__
self.lab_output_tensor = self.rgb_to_lab(self.tf_module, self.bgr_input_tensor)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 308, in rgb_to_lab
rgb_pixels = (srgb_pixels / 12.92 * linear_mask) + (((srgb_pixels + 0.055) / 1.055) ** 2.4) * exponential_mask
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\ops\math_ops.py", line 874, in binary_op_wrapper
return func(x, y, name=name)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 311, in add
"Add", x=x, y=y, name=name)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\ops.py", line 3272, in create_op
op_def=op_def)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\ops.py", line 1768, in __init__
self._traceback = tf_stack.extract_stack()
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
	 [[{{node rgb_to_lab/srgb_to_xyz/add}} = Add[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rgb_to_lab/Reshape, rgb_to_lab/srgb_to_xyz/add/y)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
2018-12-08 00:50:45.845425: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.845443: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.845460: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
Exception while process data [undefined]: Traceback (most recent call last):
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1292, in _do_call
    return fn(*args)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1277, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1367, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
	 [[{{node rgb_to_lab/srgb_to_xyz/LessEqual}} = LessEqual[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rgb_to_lab/Reshape, rgb_to_lab/srgb_to_xyz/LessEqual/y)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\SubprocessorBase.py", line 233, in subprocess
    result = self.onClientProcessData (data)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\mainscripts\Converter.py", line 170, in onClientProcessData
    image = self.converter.convert_face(image, image_landmarks, self.debug)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\models\ConverterMasked.py", line 207, in convert_face
    img_lab_l, img_lab_a, img_lab_b = np.split ( self.TFLabConverter.bgr2lab (img_bgr), 3, axis=-1 )
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 296, in bgr2lab
    return self.tf_session.run(self.lab_output_tensor, feed_dict={self.bgr_input_tensor: bgr})
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 887, in run
    run_metadata_ptr)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1110, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1286, in _do_run
    run_metadata)
  File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1308, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
	 [[{{node rgb_to_lab/srgb_to_xyz/LessEqual}} = LessEqual[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rgb_to_lab/Reshape, rgb_to_lab/srgb_to_xyz/LessEqual/y)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Caused by op 'rgb_to_lab/srgb_to_xyz/LessEqual', defined at:
  File "<string>", line 1, in <module>
File "multiprocessing\spawn.py", line 105, in spawn_main
File "multiprocessing\spawn.py", line 118, in _main
File "multiprocessing\process.py", line 258, in _bootstrap
File "multiprocessing\process.py", line 93, in run
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\SubprocessorBase.py", line 233, in subprocess
result = self.onClientProcessData (data)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\mainscripts\Converter.py", line 170, in onClientProcessData
image = self.converter.convert_face(image, image_landmarks, self.debug)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\models\ConverterMasked.py", line 205, in convert_face
self.TFLabConverter = image_utils.TFLabConverter()
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 291, in __init__
self.lab_output_tensor = self.rgb_to_lab(self.tf_module, self.bgr_input_tensor)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 306, in rgb_to_lab
linear_mask = tf.cast(srgb_pixels <= 0.04045, dtype=tf.float32)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 4336, in less_equal
"LessEqual", x=x, y=y, name=name)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\ops.py", line 3272, in create_op
op_def=op_def)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\ops.py", line 1768, in __init__
self._traceback = tf_stack.extract_stack()
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
	 [[{{node rgb_to_lab/srgb_to_xyz/LessEqual}} = LessEqual[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rgb_to_lab/Reshape, rgb_to_lab/srgb_to_xyz/LessEqual/y)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
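For context on where this blows up: every failing node above (`LessEqual`, `Greater`, `add`, and the `truediv`/`Mul` pair) belongs to the piecewise sRGB-to-linear step inside `rgb_to_lab`, and each op materializes a full `[pixels, 3]` temporary in every worker process at once. A rough NumPy rendering of that step is sketched below; this is an illustration of the math from the tracebacks, not DeepFaceLab's actual TF code.

```python
import numpy as np

def srgb_to_linear(srgb_pixels):
    # Piecewise sRGB decoding: the linear segment below 0.04045
    # (the LessEqual / truediv nodes in the log) and the gamma segment
    # above it (the Greater / add nodes). Every comparison and
    # arithmetic op here allocates another [N, 3] temporary, which is
    # why 16 parallel workers can exhaust CPU memory on this step.
    linear_mask = (srgb_pixels <= 0.04045).astype(np.float32)
    exponential_mask = (srgb_pixels > 0.04045).astype(np.float32)
    return (srgb_pixels / 12.92) * linear_mask + \
           (((srgb_pixels + 0.055) / 1.055) ** 2.4) * exponential_mask
```

Because the per-op cost is fixed by the image size, the only lever at the user level is the number of simultaneous workers, which is exactly what the fix caps.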