tonysy / Deep-Feature-Flow-Segmentation

Deep Feature Flow for Video Semantic Segmentation
MIT License

ValueError: zero-size array #11

Open wufengbin123 opened 4 years ago

wufengbin123 commented 4 years ago

Can someone help me? I have a problem: I changed the data set from label to labelTrainId and started training. After training began, the following error appeared:

```
Epoch[0] Batch [290]  Speed: 1.49 samples/sec  Train-FCNLogLoss=1.237416,
Epoch[0] Batch [300]  Speed: 1.51 samples/sec  Train-FCNLogLoss=1.209671,
Epoch[0] Batch [310]  Speed: 1.51 samples/sec  Train-FCNLogLoss=1.188998,
Epoch[0] Batch [320]  Speed: 1.51 samples/sec  Train-FCNLogLoss=1.172847,
Epoch[0] Batch [330]  Speed: 1.51 samples/sec  Train-FCNLogLoss=1.152629,
Epoch[0] Batch [340]  Speed: 1.52 samples/sec  Train-FCNLogLoss=1.139912,
Epoch[0] Batch [350]  Speed: 1.51 samples/sec  Train-FCNLogLoss=1.121668,
libpng error: Read Error
Exception in thread Thread-9:
Traceback (most recent call last):
  File "D:\Software\Anaconda3\envs\FGF\lib\threading.py", line 801, in __bootstrap_inner
    self.run()
  File "D:\Software\Anaconda3\envs\FGF\lib\threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "./experiments/deeplab....\deeplab..\lib\utils\PrefetchingIter.py", line 60, in prefetch_func
    self.next_batch[i] = self.iters[i].next()
  File "./experiments/deeplab....\deeplab\core\loader.py", line 188, in next
    self.get_batch_parallel()
  File "./experiments/deeplab....\deeplab\core\loader.py", line 237, in get_batch_parallel
    rst = [multiprocess_result.get() for multiprocess_result in multiprocess_results]
  File "D:\Software\Anaconda3\envs\FGF\lib\multiprocessing\pool.py", line 572, in get
    raise self._value
ValueError: zero-size array to reduction operation minimum which has no identity
```
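The `libpng error: Read Error` just before the crash suggests the prefetch thread hit a corrupted or truncated PNG label file, which then loads as an empty array and breaks the `minimum` reduction. As a minimal sketch (not part of this repo; `find_bad_pngs` and its directory argument are hypothetical names), you could scan the label directory with only the standard library and flag files that are empty, truncated, or missing the PNG signature:

```python
import glob
import os

# 8-byte signature that every valid PNG file starts with
PNG_SIG = b"\x89PNG\r\n\x1a\n"


def find_bad_pngs(label_dir, pattern="*.png"):
    """Return paths of PNG files that look empty, truncated, or corrupt.

    A structurally sound PNG begins with the 8-byte signature and ends
    with an IEND chunk; files failing either check are the usual cause
    of "libpng error: Read Error" during loading.
    """
    bad = []
    for path in glob.glob(os.path.join(label_dir, pattern)):
        with open(path, "rb") as f:
            data = f.read()
        # IEND is the last chunk, so it should sit in the final bytes
        if (len(data) < len(PNG_SIG)
                or not data.startswith(PNG_SIG)
                or b"IEND" not in data[-16:]):
            bad.append(path)
    return bad
```

Running this over the converted labelTrainId directory and re-exporting (or deleting) any flagged files should let training get past the failing batch; the conversion step itself may have produced a truncated file.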

Can someone help me? Thanks.