Closed: Yuren-Zhong closed this issue 6 years ago.
Can anyone help me with that?
This is the output I get while training on overlapped speakers:
```
Loading dataset list from cache...
Found 950 videos for training.
Found 50 videos for validation.
_________________________________________________________________
Layer (type)                 Output Shape               Param #
=================================================================
the_input (InputLayer)       (None, 75, 360, 288, 3)    0
zero1 (ZeroPadding3D)        (None, 77, 364, 292, 3)    0
conv1 (Conv3D)               (None, 75, 180, 144, 32)   7232
batc1 (BatchNormalization)   (None, 75, 180, 144, 32)   128
actv1 (Activation)           (None, 75, 180, 144, 32)   0
spatial_dropout3d_1 (Spatial (None, 75, 180, 144, 32)   0
max1 (MaxPooling3D)          (None, 75, 90, 72, 32)     0
zero2 (ZeroPadding3D)        (None, 77, 94, 76, 32)     0
conv2 (Conv3D)               (None, 75, 90, 72, 64)     153664
batc2 (BatchNormalization)   (None, 75, 90, 72, 64)     256
actv2 (Activation)           (None, 75, 90, 72, 64)     0
spatial_dropout3d_2 (Spatial (None, 75, 90, 72, 64)     0
max2 (MaxPooling3D)          (None, 75, 45, 36, 64)     0
zero3 (ZeroPadding3D)        (None, 77, 47, 38, 64)     0
conv3 (Conv3D)               (None, 75, 45, 36, 96)     165984
batc3 (BatchNormalization)   (None, 75, 45, 36, 96)     384
actv3 (Activation)           (None, 75, 45, 36, 96)     0
spatial_dropout3d_3 (Spatial (None, 75, 45, 36, 96)     0
max3 (MaxPooling3D)          (None, 75, 22, 18, 96)     0
time_distributed_1 (TimeDist (None, 75, 38016)          0
bidirectional_1 (Bidirection (None, 75, 512)            58787328
bidirectional_2 (Bidirection (None, 75, 512)            1181184
dense1 (Dense)               (None, 75, 28)             14364
softmax (Activation)         (None, 75, 28)             0
=================================================================
Total params: 60,310,524.0
Trainable params: 60,310,140.0
Non-trainable params: 384.0
_________________________________________________________________

Traceback (most recent call last):
  File "/home/yurzho/anaconda3/envs/lipnet/lib/python2.7/multiprocessing/queues.py", line 268, in _feed
    send(obj)
SystemError: NULL result without error in PyObject_Call
[the same traceback is printed a second time by the other worker process]

Process Process-1:
Traceback (most recent call last):
  File "/home/yurzho/anaconda3/envs/lipnet/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/yurzho/anaconda3/envs/lipnet/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "/home/yurzho/anaconda3/envs/lipnet/lib/python2.7/site-packages/keras/engine/training.py", line 607, in data_generator_task
    self.queue.put(generator_output)
  File "/home/yurzho/anaconda3/envs/lipnet/lib/python2.7/multiprocessing/queues.py", line 101, in put
    if not self._sem.acquire(block, timeout):
KeyboardInterrupt
[Process-2 prints the same traceback]

Epoch 0: Curriculum(train: True, sentence_length: -1, flip_probability: 0.5, jitter_probability: 0.05)
Epoch 1/5000
Traceback (most recent call last):
  File "training/overlapped_speakers/train.py", line 79, in <module>
    train(run_name, speaker, 0, 5000, 3, 360, 288, 75, 32, 50)
  File "training/overlapped_speakers/train.py", line 74, in train
    pickle_safe=True)
  File "/home/yurzho/anaconda3/envs/lipnet/lib/python2.7/site-packages/keras/legacy/interfaces.py", line 88, in wrapper
    return func(*args, **kwargs)
  File "/home/yurzho/anaconda3/envs/lipnet/lib/python2.7/site-packages/keras/engine/training.py", line 1845, in fit_generator
    time.sleep(wait_time)
KeyboardInterrupt
```
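One possible cause: `SystemError: NULL result without error in PyObject_Call` from `multiprocessing/queues.py` is a known symptom of Python 2's `multiprocessing` failing to send an object larger than 2 GiB through the generator queue (which `pickle_safe=True` uses). A back-of-envelope estimate, assuming float32 frames and taking the input shape and batch size from the output above, suggests a single batch already exceeds that limit:

```python
# Rough payload estimate for one training batch, assuming float32 frames
# and the shapes from the model summary above: input (75, 360, 288, 3),
# batch size 32 from the train(..., 32, 50) call in the traceback.
frames, height, width, channels = 75, 360, 288, 3
batch_size = 32
bytes_per_float = 4  # float32

batch_bytes = frames * height * width * channels * bytes_per_float * batch_size
limit = 2 ** 31  # ~2 GiB, the Python 2 multiprocessing send() ceiling

print(batch_bytes)          # 2985984000, about 2.8 GiB
print(batch_bytes > limit)  # True: the batch cannot be pickled through the queue
```

If this is indeed the cause, reducing the batch size (or the frame resolution) so a batch fits under 2 GiB, or passing `pickle_safe=False` so Keras uses threads instead of sending batches across process boundaries, may avoid the error.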
All of these problems can be solved by using Anaconda2.
Can you help me make the code read my videos? I am getting: Found 0 videos for training. Found 0 videos for validation.
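"Found 0 videos" usually means the dataset loader's glob pattern does not match your directory layout. As a quick sanity check (the paths below are hypothetical placeholders; substitute the dataset directory and layout your `train.py` actually reads from), print what the pattern matches:

```python
import glob
import os

# Hypothetical location -- point this at the datasets directory that your
# training/overlapped_speakers/train.py reads from.
DATASET_DIR = os.path.join('training', 'overlapped_speakers', 'datasets')

for split in ('train', 'val'):
    # Assumed layout: DATASET_DIR/<split>/<speaker>/<video>/ -- adjust the
    # pattern until the match count equals your number of video directories.
    pattern = os.path.join(DATASET_DIR, split, '*', '*')
    matches = glob.glob(pattern)
    print('{}: {} matches for {}'.format(split, len(matches), pattern))
```

If both splits report 0 matches, the directories either do not exist at that path or are nested one level deeper or shallower than the pattern expects. Also note the "Loading dataset list from cache..." line above: if the loader caches its file list, a stale cache file can keep reporting 0 videos after you fix the layout, so delete the cache and re-run.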