rknightion closed this issue 5 years ago
**Describe the bug**
When trying to run training with the Original model, I get a "not enough values to unpack" error/crash.
This is on the Windows GUI build
Crash report:

```
09/14/2019 21:16:09 MainProcess _run_1 training_data minibatch DEBUG Loading minibatch generator: (image_count: 644, side: 'b', is_display: False, do_shuffle: True) 09/14/2019 21:16:09 MainProcess training_0 multithreading start DEBUG Started all threads '_run': 2 09/14/2019 21:16:09 MainProcess training_0 _base set_tensorboard DEBUG Enabling TensorBoard Logging 09/14/2019 21:16:09 MainProcess training_0 _base set_tensorboard DEBUG Setting up TensorBoard Logging. Side: a 09/14/2019 21:16:09 MainProcess training_0 _base name DEBUG model name: 'original' 09/14/2019 21:16:09 MainProcess training_0 _base tensorboard_kwargs DEBUG Tensorflow version: [1, 14, 0] 09/14/2019 21:16:09 MainProcess training_0 _base tensorboard_kwargs DEBUG {'histogram_freq': 0, 'batch_size': 64, 'write_graph': True, 'write_grads': True, 'update_freq': 'batch', 'profile_batch': 0} 09/14/2019 21:16:11 MainProcess training_0 _base set_tensorboard DEBUG Setting up TensorBoard Logging. Side: b 09/14/2019 21:16:11 MainProcess training_0 _base name DEBUG model name: 'original' 09/14/2019 21:16:11 MainProcess training_0 _base tensorboard_kwargs DEBUG Tensorflow version: [1, 14, 0] 09/14/2019 21:16:11 MainProcess training_0 _base tensorboard_kwargs DEBUG {'histogram_freq': 0, 'batch_size': 64, 'write_graph': True, 'write_grads': True, 'update_freq': 'batch', 'profile_batch': 0} 09/14/2019 21:16:12 MainProcess training_0 _base set_tensorboard INFO Enabled TensorBoard Logging 09/14/2019 21:16:12 MainProcess training_0 _base use_mask DEBUG False 09/14/2019 21:16:12 MainProcess training_0 _base __init__ DEBUG Initializing Samples: model: '<plugins.train.model.original.Model object at 0x000002B964163C88>', use_mask: False, coverage_ratio: 0.6875) 09/14/2019 21:16:12 MainProcess training_0 _base __init__ DEBUG Initialized Samples 09/14/2019 21:16:12 MainProcess training_0 _base use_mask DEBUG False 09/14/2019 21:16:12 MainProcess training_0 _base __init__ DEBUG Initializing Timelapse: model: 
<plugins.train.model.original.Model object at 0x000002B964163C88>, use_mask: False, coverage_ratio: 0.6875, preview_images: 14, batchers: '{'a': <plugins.train.trainer._base.Batcher object at 0x000002B921FA2AC8>, 'b': <plugins.train.trainer._base.Batcher object at 0x000002B9222574A8>}') 09/14/2019 21:16:12 MainProcess training_0 _base __init__ DEBUG Initializing Samples: model: '<plugins.train.model.original.Model object at 0x000002B964163C88>', use_mask: False, coverage_ratio: 0.6875) 09/14/2019 21:16:12 MainProcess training_0 _base __init__ DEBUG Initialized Samples 09/14/2019 21:16:12 MainProcess training_0 _base __init__ DEBUG Initialized Timelapse 09/14/2019 21:16:12 MainProcess training_0 _base __init__ DEBUG Initialized Trainer 09/14/2019 21:16:12 MainProcess training_0 train load_trainer DEBUG Loaded Trainer 09/14/2019 21:16:12 MainProcess training_0 train run_training_cycle DEBUG Running Training Cycle 09/14/2019 21:16:13 MainProcess training_0 _base generate_preview DEBUG Generating preview 09/14/2019 21:16:13 MainProcess training_0 _base set_preview_feed DEBUG Setting preview feed: (side: 'a') 09/14/2019 21:16:13 MainProcess training_0 _base load_generator DEBUG Loading generator: a 09/14/2019 21:16:13 MainProcess training_0 _base load_generator DEBUG input_size: 64, output_shapes: [(64, 64, 3)] 09/14/2019 21:16:13 MainProcess training_0 training_data __init__ DEBUG Initializing TrainingDataGenerator: (model_input_size: 64, model_output_shapes: [(64, 64, 3)], training_opts: {'alignments': {'a': 'C:\\Users\\Administrator\\Documents\\fs\\fs\\craigex\\craig_alignments.json', 'b': 'C:\\Users\\Administrator\\Documents\\fs\\fs\\robex\\rob_alignments.json'}, 'preview_scaling': 0.5, 'warp_to_landmarks': False, 'augment_color': True, 'no_flip': False, 'pingpong': False, 'snapshot_interval': 25000, 'training_size': 256, 'no_logs': False, 'mask_type': None, 'coverage_ratio': 0.6875}, landmarks: False, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur': 
False, 'icnr_init': False, 'conv_aware_init': False, 'subpixel_upscaling': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4}) 09/14/2019 21:16:13 MainProcess training_0 training_data set_mask_class DEBUG Mask class: None 09/14/2019 21:16:13 MainProcess training_0 training_data __init__ DEBUG Initializing ImageManipulation: (input_size: 64, output_shapes: [(64, 64, 3)], coverage_ratio: 0.6875, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur': False, 'icnr_init': False, 'conv_aware_init': False, 'subpixel_upscaling': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4}) 09/14/2019 21:16:13 MainProcess training_0 training_data __init__ DEBUG Output sizes: [64] 09/14/2019 21:16:13 MainProcess training_0 training_data __init__ DEBUG Initialized ImageManipulation 09/14/2019 21:16:13 MainProcess training_0 training_data __init__ DEBUG Initialized TrainingDataGenerator 09/14/2019 21:16:13 MainProcess training_0 training_data minibatch_ab DEBUG Queue batches: (image_count: 586, batchsize: 14, side: 'a', do_shuffle: True, is_preview, True, is_timelapse: False) 09/14/2019 21:16:13 MainProcess training_0 multithreading __init__ DEBUG Initializing BackgroundGenerator: (target: '_run', thread_count: 2) 09/14/2019 21:16:13 MainProcess training_0 multithreading __init__ DEBUG Initialized BackgroundGenerator: '_run' 09/14/2019 21:16:13 MainProcess training_0 multithreading start DEBUG Starting thread(s): '_run' 09/14/2019 21:16:13 MainProcess training_0 multithreading start DEBUG 
Starting thread 1 of 2: '_run_0' 09/14/2019 21:16:13 MainProcess _run_0 training_data minibatch DEBUG Loading minibatch generator: (image_count: 586, side: 'a', is_display: True, do_shuffle: True) 09/14/2019 21:16:13 MainProcess training_0 multithreading start DEBUG Starting thread 2 of 2: '_run_1' 09/14/2019 21:16:13 MainProcess _run_1 training_data minibatch DEBUG Loading minibatch generator: (image_count: 586, side: 'a', is_display: True, do_shuffle: True) 09/14/2019 21:16:13 MainProcess training_0 multithreading start DEBUG Started all threads '_run': 2 09/14/2019 21:16:13 MainProcess training_0 _base set_preview_feed DEBUG Set preview feed. Batchsize: 14 09/14/2019 21:16:14 MainProcess training_0 _base largest_face_index DEBUG 0 09/14/2019 21:16:24 MainProcess training_0 _base compile_sample DEBUG Compiling samples: (side: 'a', samples: 14) 09/14/2019 21:16:24 MainProcess training_0 _base get_sample DEBUG Getting timelapse samples: 'a' 09/14/2019 21:16:24 MainProcess training_0 _base setup DEBUG Setting up timelapse 09/14/2019 21:16:24 MainProcess training_0 _base setup DEBUG Timelapse output set to 'C:\Users\Administrator\Documents\fs\fs\tl' 09/14/2019 21:16:24 MainProcess training_0 utils get_image_paths DEBUG Scanned Folder contains 0 files 09/14/2019 21:16:24 MainProcess training_0 utils get_image_paths DEBUG Returning 0 images 09/14/2019 21:16:24 MainProcess training_0 utils get_image_paths DEBUG Scanned Folder contains 0 files 09/14/2019 21:16:24 MainProcess training_0 utils get_image_paths DEBUG Returning 0 images 09/14/2019 21:16:24 MainProcess training_0 _base set_timelapse_feed DEBUG Setting timelapse feed: (side: 'a', input_images: '[]', batchsize: 0) 09/14/2019 21:16:24 MainProcess training_0 _base load_generator DEBUG Loading generator: a 09/14/2019 21:16:24 MainProcess training_0 _base load_generator DEBUG input_size: 64, output_shapes: [(64, 64, 3)] 09/14/2019 21:16:24 MainProcess training_0 training_data __init__ DEBUG Initializing 
TrainingDataGenerator: (model_input_size: 64, model_output_shapes: [(64, 64, 3)], training_opts: {'alignments': {'a': 'C:\\Users\\Administrator\\Documents\\fs\\fs\\craigex\\craig_alignments.json', 'b': 'C:\\Users\\Administrator\\Documents\\fs\\fs\\robex\\rob_alignments.json'}, 'preview_scaling': 0.5, 'warp_to_landmarks': False, 'augment_color': True, 'no_flip': False, 'pingpong': False, 'snapshot_interval': 25000, 'training_size': 256, 'no_logs': False, 'mask_type': None, 'coverage_ratio': 0.6875}, landmarks: False, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur': False, 'icnr_init': False, 'conv_aware_init': False, 'subpixel_upscaling': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4}) 09/14/2019 21:16:24 MainProcess training_0 training_data set_mask_class DEBUG Mask class: None 09/14/2019 21:16:24 MainProcess training_0 training_data __init__ DEBUG Initializing ImageManipulation: (input_size: 64, output_shapes: [(64, 64, 3)], coverage_ratio: 0.6875, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur': False, 'icnr_init': False, 'conv_aware_init': False, 'subpixel_upscaling': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4}) 09/14/2019 21:16:24 MainProcess training_0 training_data __init__ DEBUG Output sizes: [64] 09/14/2019 21:16:24 MainProcess training_0 training_data __init__ DEBUG Initialized ImageManipulation 09/14/2019 21:16:24 MainProcess training_0 training_data __init__ DEBUG Initialized TrainingDataGenerator 09/14/2019 21:16:24 MainProcess 
training_0 training_data minibatch_ab DEBUG Queue batches: (image_count: 0, batchsize: 0, side: 'a', do_shuffle: False, is_preview, False, is_timelapse: True) 09/14/2019 21:16:24 MainProcess training_0 multithreading __init__ DEBUG Initializing BackgroundGenerator: (target: '_run', thread_count: 2) 09/14/2019 21:16:24 MainProcess training_0 multithreading __init__ DEBUG Initialized BackgroundGenerator: '_run' 09/14/2019 21:16:24 MainProcess training_0 multithreading start DEBUG Starting thread(s): '_run' 09/14/2019 21:16:24 MainProcess training_0 multithreading start DEBUG Starting thread 1 of 2: '_run_0' 09/14/2019 21:16:24 MainProcess _run_0 training_data minibatch DEBUG Loading minibatch generator: (image_count: 0, side: 'a', is_display: True, do_shuffle: False) 09/14/2019 21:16:24 MainProcess training_0 multithreading start DEBUG Starting thread 2 of 2: '_run_1' 09/14/2019 21:16:24 MainProcess _run_1 training_data minibatch DEBUG Loading minibatch generator: (image_count: 0, side: 'a', is_display: True, do_shuffle: False) 09/14/2019 21:16:24 MainProcess training_0 multithreading start DEBUG Started all threads '_run': 2 09/14/2019 21:16:24 MainProcess training_0 _base set_timelapse_feed DEBUG Set timelapse feed 09/14/2019 21:16:24 MainProcess training_0 _base set_timelapse_feed DEBUG Setting timelapse feed: (side: 'b', input_images: '[]', batchsize: 0) 09/14/2019 21:16:24 MainProcess training_0 _base load_generator DEBUG Loading generator: b 09/14/2019 21:16:24 MainProcess training_0 _base load_generator DEBUG input_size: 64, output_shapes: [(64, 64, 3)] 09/14/2019 21:16:24 MainProcess training_0 training_data __init__ DEBUG Initializing TrainingDataGenerator: (model_input_size: 64, model_output_shapes: [(64, 64, 3)], training_opts: {'alignments': {'a': 'C:\\Users\\Administrator\\Documents\\fs\\fs\\craigex\\craig_alignments.json', 'b': 'C:\\Users\\Administrator\\Documents\\fs\\fs\\robex\\rob_alignments.json'}, 'preview_scaling': 0.5, 'warp_to_landmarks': False, 
'augment_color': True, 'no_flip': False, 'pingpong': False, 'snapshot_interval': 25000, 'training_size': 256, 'no_logs': False, 'mask_type': None, 'coverage_ratio': 0.6875}, landmarks: False, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur': False, 'icnr_init': False, 'conv_aware_init': False, 'subpixel_upscaling': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4}) 09/14/2019 21:16:24 MainProcess training_0 training_data set_mask_class DEBUG Mask class: None 09/14/2019 21:16:24 MainProcess training_0 training_data __init__ DEBUG Initializing ImageManipulation: (input_size: 64, output_shapes: [(64, 64, 3)], coverage_ratio: 0.6875, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur': False, 'icnr_init': False, 'conv_aware_init': False, 'subpixel_upscaling': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4}) 09/14/2019 21:16:24 MainProcess training_0 training_data __init__ DEBUG Output sizes: [64] 09/14/2019 21:16:24 MainProcess training_0 training_data __init__ DEBUG Initialized ImageManipulation 09/14/2019 21:16:24 MainProcess training_0 training_data __init__ DEBUG Initialized TrainingDataGenerator 09/14/2019 21:16:24 MainProcess training_0 training_data minibatch_ab DEBUG Queue batches: (image_count: 0, batchsize: 0, side: 'b', do_shuffle: False, is_preview, False, is_timelapse: True) 09/14/2019 21:16:24 MainProcess training_0 multithreading __init__ DEBUG Initializing BackgroundGenerator: (target: '_run', thread_count: 2) 09/14/2019 21:16:24 MainProcess 
training_0 multithreading __init__ DEBUG Initialized BackgroundGenerator: '_run' 09/14/2019 21:16:24 MainProcess training_0 multithreading start DEBUG Starting thread(s): '_run' 09/14/2019 21:16:24 MainProcess training_0 multithreading start DEBUG Starting thread 1 of 2: '_run_0' 09/14/2019 21:16:24 MainProcess _run_0 training_data minibatch DEBUG Loading minibatch generator: (image_count: 0, side: 'b', is_display: True, do_shuffle: False) 09/14/2019 21:16:24 MainProcess training_0 multithreading start DEBUG Starting thread 2 of 2: '_run_1' 09/14/2019 21:16:24 MainProcess _run_1 training_data minibatch DEBUG Loading minibatch generator: (image_count: 0, side: 'b', is_display: True, do_shuffle: False) 09/14/2019 21:16:24 MainProcess training_0 multithreading start DEBUG Started all threads '_run': 2 09/14/2019 21:16:24 MainProcess training_0 _base set_timelapse_feed DEBUG Set timelapse feed 09/14/2019 21:16:24 MainProcess training_0 _base setup DEBUG Set up timelapse 09/14/2019 21:16:24 MainProcess training_0 multithreading run DEBUG Error in thread (training_0): not enough values to unpack (expected 2, got 0) 09/14/2019 21:16:25 MainProcess MainThread train monitor DEBUG Thread error detected 09/14/2019 21:16:25 MainProcess MainThread train monitor DEBUG Closed Monitor 09/14/2019 21:16:25 MainProcess MainThread train end_thread DEBUG Ending Training thread 09/14/2019 21:16:25 MainProcess MainThread train end_thread CRITICAL Error caught! Exiting... 
09/14/2019 21:16:25 MainProcess MainThread multithreading join DEBUG Joining Threads: 'training' 09/14/2019 21:16:25 MainProcess MainThread multithreading join DEBUG Joining Thread: 'training_0' 09/14/2019 21:16:25 MainProcess MainThread multithreading join ERROR Caught exception in thread: 'training_0' Traceback (most recent call last): File "C:\Users\Administrator\faceswap\lib\cli.py", line 128, in execute_script process.process() File "C:\Users\Administrator\faceswap\scripts\train.py", line 98, in process self.end_thread(thread, err) File "C:\Users\Administrator\faceswap\scripts\train.py", line 124, in end_thread thread.join() File "C:\Users\Administrator\faceswap\lib\multithreading.py", line 216, in join raise thread.err[1].with_traceback(thread.err[2]) File "C:\Users\Administrator\faceswap\lib\multithreading.py", line 147, in run self._target(*self._args, **self._kwargs) File "C:\Users\Administrator\faceswap\scripts\train.py", line 149, in training raise err File "C:\Users\Administrator\faceswap\scripts\train.py", line 139, in training self.run_training_cycle(model, trainer) File "C:\Users\Administrator\faceswap\scripts\train.py", line 221, in run_training_cycle trainer.train_one_step(viewer, timelapse) File "C:\Users\Administrator\faceswap\plugins\train\trainer\_base.py", line 211, in train_one_step raise err File "C:\Users\Administrator\faceswap\plugins\train\trainer\_base.py", line 185, in train_one_step self.timelapse.get_sample(side, timelapse_kwargs) File "C:\Users\Administrator\faceswap\plugins\train\trainer\_base.py", line 614, in get_sample self.samples.images[side] = self.batchers[side].compile_timelapse_sample() File "C:\Users\Administrator\faceswap\plugins\train\trainer\_base.py", line 350, in compile_timelapse_sample samples, feed = batch[:2] ValueError: not enough values to unpack (expected 2, got 0) ============ System Information ============ encoding: cp1252 git_branch: master git_commits: f8e0190 update opencv-python gpu_cuda: 8.0 gpu_cudnn: 
6.0.21 gpu_devices: GPU_0: Tesla V100-SXM2-16GB gpu_devices_active: GPU_0 gpu_driver: 425.25 gpu_vram: GPU_0: 16258MB os_machine: AMD64 os_platform: Windows-10-10.0.14393-SP0 os_release: 10 py_command: C:\Users\Administrator\faceswap\faceswap.py train -A C:/Users/Administrator/Documents/fs/fs/craigex -ala C:/Users/Administrator/Documents/fs/fs/craigex/craig_alignments.json -B C:/Users/Administrator/Documents/fs/fs/robex -alb C:/Users/Administrator/Documents/fs/fs/robex/rob_alignments.json -m C:/Users/Administrator/Documents/fs/fs/model -t original -bs 96 -it 1000000 -g 1 -s 100 -ss 25000 -tia C:/Users/Administrator/Documents/fs/fs/tl-a -tib C:/Users/Administrator/Documents/fs/fs/tl-b -to C:/Users/Administrator/Documents/fs/fs/tl -ps 50 -L INFO -gui py_conda_version: conda 4.7.11 py_implementation: CPython py_version: 3.6.9 py_virtual_env: True sys_cores: 8 sys_processor: Intel64 Family 6 Model 79 Stepping 1, GenuineIntel sys_ram: Total: 62463MB, Available: 54533MB, Used: 7929MB, Free: 54533MB =============== Pip Packages =============== absl-py==0.7.1 astor==0.8.0 certifi==2019.6.16 cloudpickle==1.2.2 cycler==0.10.0 cytoolz==0.10.0 dask==2.3.0 decorator==4.4.0 fastcluster==1.1.25 ffmpy==0.2.2 gast==0.2.2 grpcio==1.16.1 h5py==2.9.0 imageio==2.5.0 imageio-ffmpeg==0.3.0 joblib==0.13.2 Keras==2.2.4 Keras-Applications==1.0.8 Keras-Preprocessing==1.1.0 kiwisolver==1.1.0 Markdown==3.1.1 matplotlib==2.2.2 mkl-fft==1.0.14 mkl-random==1.0.2 mkl-service==2.3.0 networkx==2.3 numpy==1.16.2 nvidia-ml-py3==7.352.1 olefile==0.46 opencv-python==4.1.1.26 pathlib==1.0.1 Pillow==6.1.0 protobuf==3.8.0 psutil==5.6.3 pyparsing==2.4.2 pyreadline==2.1 python-dateutil==2.8.0 pytz==2019.2 PyWavelets==1.0.3 pywin32==223 PyYAML==5.1.2 scikit-image==0.15.0 scikit-learn==0.21.2 scipy==1.3.1 six==1.12.0 tensorboard==1.14.0 tensorflow==1.14.0 tensorflow-estimator==1.14.0 termcolor==1.1.0 toolz==0.10.0 toposort==1.5 tornado==6.0.3 tqdm==4.32.1 Werkzeug==0.15.5 wincertstore==0.2 wrapt==1.11.2 
============== Conda Packages ============== # packages in environment at C:\ProgramData\Miniconda3\envs\faceswap: # # Name Version Build Channel _tflow_select 2.1.0 gpu absl-py 0.7.1 py36_0 astor 0.8.0 py36_0 blas 1.0 mkl ca-certificates 2019.5.15 1 certifi 2019.6.16 py36_1 cloudpickle 1.2.2 py_0 cudatoolkit 10.0.130 0 cudnn 7.6.0 cuda10.0_0 cycler 0.10.0 py36h009560c_0 cytoolz 0.10.0 py36he774522_0 dask-core 2.3.0 py_0 decorator 4.4.0 py36_1 fastcluster 1.1.25 py36h830ac7b_1000 conda-forge ffmpeg 4.2 h6538335_0 conda-forge ffmpy 0.2.2 pypi_0 pypi freetype 2.9.1 ha9979f8_1 gast 0.2.2 py36_0 grpcio 1.16.1 py36h351948d_1 h5py 2.9.0 py36h5e291fa_0 hdf5 1.10.4 h7ebc959_0 icc_rt 2019.0.0 h0cc432a_1 icu 58.2 ha66f8fd_1 imageio 2.5.0 py36_0 imageio-ffmpeg 0.3.0 py_0 conda-forge intel-openmp 2019.4 245 joblib 0.13.2 py36_0 jpeg 9b hb83a4c4_2 keras 2.2.4 0 keras-applications 1.0.8 py_0 keras-base 2.2.4 py36_0 keras-preprocessing 1.1.0 py_1 kiwisolver 1.1.0 py36ha925a31_0 libmklml 2019.0.5 0 libpng 1.6.37 h2a8f88b_0 libprotobuf 3.8.0 h7bd577a_0 libtiff 4.0.10 hb898794_2 markdown 3.1.1 py36_0 matplotlib 2.2.2 py36had4c4a9_2 mkl 2019.4 245 mkl-service 2.3.0 py36hb782905_0 mkl_fft 1.0.14 py36h14836fe_0 mkl_random 1.0.2 py36h343c172_0 networkx 2.3 py_0 numpy 1.16.2 py36h19fb1c0_0 numpy-base 1.16.2 py36hc3f5095_0 nvidia-ml-py3 7.352.1 pypi_0 pypi olefile 0.46 py36_0 opencv-python 4.1.1.26 pypi_0 pypi openssl 1.1.1d he774522_0 pathlib 1.0.1 py36_1 pillow 6.1.0 py36hdc69c19_0 pip 19.2.2 py36_0 protobuf 3.8.0 py36h33f27b4_0 psutil 5.6.3 py36he774522_0 pyparsing 2.4.2 py_0 pyqt 5.9.2 py36h6538335_2 pyreadline 2.1 py36_1 python 3.6.9 h5500b2f_0 python-dateutil 2.8.0 py36_0 pytz 2019.2 py_0 pywavelets 1.0.3 py36h8c2d366_1 pywin32 223 py36hfa6e2cd_1 pyyaml 5.1.2 py36he774522_0 qt 5.9.7 vc14h73c81de_0 scikit-image 0.15.0 py36ha925a31_0 scikit-learn 0.21.2 py36h6288b17_0 scipy 1.3.1 py36h29ff71c_0 setuptools 41.0.1 py36_0 sip 4.19.8 py36h6538335_0 six 1.12.0 py36_0 sqlite 3.29.0 
he774522_0 tensorboard 1.14.0 py36he3c9ec2_0 tensorflow 1.14.0 gpu_py36h305fd99_0 tensorflow-base 1.14.0 gpu_py36h55fc52a_0 tensorflow-estimator 1.14.0 py_0 tensorflow-gpu 1.14.0 h0d30ee6_0 termcolor 1.1.0 py36_1 tk 8.6.8 hfa6e2cd_0 toolz 0.10.0 py_0 toposort 1.5 py_3 conda-forge tornado 6.0.3 py36he774522_0 tqdm 4.32.1 py_0 vc 14.1 h0510ff6_4 vs2015_runtime 14.16.27012 hf0eaf9b_0 werkzeug 0.15.5 py_0 wheel 0.33.4 py36_0 wincertstore 0.2 py36h7fe50ca_0 wrapt 1.11.2 py36he774522_0 xz 5.2.4 h2fa13f4_4 yaml 0.1.7 hc54c509_2 zlib 1.2.11 h62dcd97_3 zstd 1.3.7 h508b16e_0 ================= Configs ================== --------- .faceswap --------- backend: nvidia --------- convert.ini --------- [color.color_transfer] clip: True preserve_paper: True [color.manual_balance] colorspace: HSV balance_1: 0.0 balance_2: 0.0 balance_3: 0.0 contrast: 0.0 brightness: 0.0 [color.match_hist] threshold: 99.0 [mask.box_blend] type: gaussian distance: 11.0 radius: 5.0 passes: 1 [mask.mask_blend] type: normalized radius: 3.0 passes: 4 erosion: 0.0 [scaling.sharpen] method: unsharp_mask amount: 150 radius: 0.3 threshold: 5.0 [writer.ffmpeg] container: mp4 codec: libx264 crf: 23 preset: medium tune: none profile: auto level: auto [writer.gif] fps: 25 loop: 0 palettesize: 256 subrectangles: False [writer.opencv] format: png draw_transparent: False jpg_quality: 75 png_compress_level: 3 [writer.pillow] format: png draw_transparent: False optimize: False gif_interlace: True jpg_quality: 75 png_compress_level: 3 tif_compression: tiff_deflate --------- extract.ini --------- [detect.cv2_dnn] confidence: 50 [detect.mtcnn] minsize: 20 threshold_1: 0.6 threshold_2: 0.7 threshold_3: 0.7 scalefactor: 0.709 [detect.s3fd_amd] confidence: 50 batch-size: 8 [detect.s3fd] confidence: 50 --------- gui.ini --------- [global] fullscreen: False tab: extract options_panel_width: 30 console_panel_height: 20 font: default font_size: 9 --------- train.ini --------- [global] coverage: 68.75 mask_type: none mask_blur: 
False icnr_init: False conv_aware_init: False subpixel_upscaling: False reflect_padding: False penalized_mask_loss: True loss_function: mae learning_rate: 5e-05 [model.dfl_h128] lowmem: False [model.dfl_sae] input_size: 128 clipnorm: True architecture: df autoencoder_dims: 0 encoder_dims: 42 decoder_dims: 21 multiscale_decoder: False [model.original] lowmem: False [model.realface] input_size: 64 output_size: 128 dense_nodes: 1536 complexity_encoder: 128 complexity_decoder: 512 [model.unbalanced] input_size: 128 lowmem: False clipnorm: True nodes: 1024 complexity_encoder: 128 complexity_decoder_a: 384 complexity_decoder_b: 512 [model.villain] lowmem: False [trainer.original] preview_images: 14 zoom_amount: 5 rotation_range: 10 shift_range: 5 flip_chance: 50 color_lightness: 30 color_ab: 8 color_clahe_chance: 50 color_clahe_max_size: 4
```
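The traceback bottoms out at `samples, feed = batch[:2]` in `compile_timelapse_sample`, where the batch is empty. A minimal standalone sketch (independent of faceswap) of why an empty batch produces exactly this message:

```python
# Standalone sketch: unpacking two names from an empty batch raises the
# same ValueError seen in the crash report above.
batch = []  # what the timelapse feed yields when its input folders are empty

try:
    samples, feed = batch[:2]  # slicing an empty list gives [], so unpacking fails
except ValueError as err:
    print(err)  # "not enough values to unpack (expected 2, got 0)"
```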
You've set input folders for the timelapse, but nothing exists in those folders:
`Setting timelapse feed: (side: 'b', input_images: '[]', batchsize: 0)`
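In other words, `-tia`/`-tib` point at folders containing no images, so the timelapse batcher is built with batchsize 0 and yields nothing. A hypothetical pre-flight check (not faceswap's actual code; the helper name and signature are illustrative) that would fail early with a clearer message:

```python
import os

def check_timelapse_inputs(folder_a, folder_b):
    # Fail early if either timelapse input folder is missing or empty,
    # instead of crashing later with "not enough values to unpack".
    for side, folder in (("a", folder_a), ("b", folder_b)):
        if not os.path.isdir(folder):
            raise ValueError(
                f"Timelapse input folder for side '{side}' does not exist: {folder}")
        if not os.listdir(folder):
            raise ValueError(
                f"Timelapse input folder for side '{side}' is empty: {folder}")
```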