StevenBanama / C3AE

C3AE implementation
BSD 2-Clause "Simplified" License

warning #41

Open zpge opened 3 years ago

zpge commented 3 years ago

When I ran the following command in the terminal:

python3 nets/test.py -g -white -v -se -m ./model/c3ae_model_v2_fp16_white_se_132_4.208622-0.973

I got a warning like this:

2021-03-31 04:06:53.668892: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext}}]]
W0331 04:06:53.669905 139952547956480 training_v2.py:152] Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least steps_per_epoch * epochs batches (in this case, 2 batches). You may need to use the repeat() function when building your dataset.
2021-03-31 04:06:54.276263: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext}}]]
W0331 04:06:54.277260 139952547956480 training_v2.py:152] Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least steps_per_epoch * epochs batches (in this case, 2 batches). You may need to use the repeat() function when building your dataset.

And the saved video has no bounding boxes or labels on the frames. Can anyone tell me how to fix this?
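The warning itself names the usual cause: Keras requested more batches than the tf.data iterator could supply (steps_per_epoch * epochs exceeded the data). For a pure inference run it is often benign, but for a training pipeline the fix the log suggests looks like this minimal sketch (the toy dataset and step counts are illustrative, not from this repo):

```python
import tensorflow as tf

# Toy dataset: 10 samples batched into 2 batches of 5.
dataset = tf.data.Dataset.range(10).batch(5)

# Without .repeat() the iterator is exhausted after 2 batches, and Keras
# logs "Your input ran out of data" as soon as it asks for a third.
dataset = dataset.repeat()  # cycle the data indefinitely

# With a repeating dataset, steps_per_epoch bounds each epoch explicitly:
# model.fit(dataset, epochs=2, steps_per_epoch=2)
```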

StevenBanama commented 3 years ago

Can you provide your runtime environment?

zpge commented 3 years ago

> Can you provide your runtime environment?

Python 3.7.0, CUDA 11.0, cuDNN 8.0, TensorFlow 2.1.0, Keras 2.3.0

StevenBanama commented 3 years ago

This may work: pip3 install opencv-python-headless (see https://stackoverflow.com/questions/54297627/qt-could-not-find-the-platform-plugin-cocoa).

zpge commented 3 years ago

> This may work: pip3 install opencv-python-headless (see https://stackoverflow.com/questions/54297627/qt-could-not-find-the-platform-plugin-cocoa).

Thanks for your reply. But is it really a Qt problem?

StevenBanama commented 3 years ago

It is caused by the input to the video capture.
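If the video-capture input is the suspect, one quick check is whether OpenCV actually opens the source before any frames are read. A minimal sketch, assuming a placeholder input path rather than anything from this repo:

```python
import cv2

cap = cv2.VideoCapture("input.mp4")  # placeholder path or device index
if not cap.isOpened():
    raise RuntimeError("VideoCapture could not open the input source")

ok, frame = cap.read()
if not ok:
    raise RuntimeError("VideoCapture opened but returned no frames")
print("first frame shape:", frame.shape)
cap.release()
```

If the capture silently fails, the detector never sees valid frames, which would explain a saved video with no boxes or labels.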

zpge commented 3 years ago

> It is caused by the input to the video capture.

Actually, when I test on a single image, like:

python3 nets/test.py -g -white -se -i assets/person.jpg -m ./model/c3ae_model_v2_fp16_white_se_132_4.208622-0.973

the same warning appears:

2021-03-31 06:29:57.085714: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext}}]]
WARNING: Logging before flag parsing goes to stderr.
W0331 06:29:57.089971 140110113683200 training_v2.py:152] Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least steps_per_epoch * epochs batches (in this case, 2 batches). You may need to use the repeat() function when building your dataset.
2021-03-31 06:29:57.118871: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext}}]]
W0331 06:29:57.119930 140110113683200 training_v2.py:152] Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least steps_per_epoch * epochs batches (in this case, 2 batches). You may need to use the repeat() function when building your dataset.
2021-03-31 06:29:57.150248: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext}}]]
W0331 06:29:57.151330 140110113683200 training_v2.py:152] Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least steps_per_epoch * epochs batches (in this case, 2 batches). You may need to use the repeat() function when building your dataset.
[array([[28.790756]], dtype=float32), array([[0.08825511, 0.0749622 , 0.07520209, 0.10523483, 0.07998684, 0.07542781, 0.10126889, 0.06387439, 0.10822756, 0.06570514, 0.08169784, 0.08015734]], dtype=float32), array([[0.11957219, 0.88042784]], dtype=float32)]
W0331 06:29:57.399416 140110113683200 util.py:144] Unresolved object in checkpoint: (root).optimizer
W0331 06:29:57.399521 140110113683200 util.py:144] Unresolved object in checkpoint: (root).optimizer.base_optimizer
W0331 06:29:57.399566 140110113683200 util.py:144] Unresolved object in checkpoint: (root).optimizer.loss_scale
W0331 06:29:57.399632 140110113683200 util.py:144] Unresolved object in checkpoint: (root).optimizer.base_optimizer.beta_1
W0331 06:29:57.399698 140110113683200 util.py:144] Unresolved object in checkpoint: (root).optimizer.base_optimizer.beta_2
W0331 06:29:57.399737 140110113683200 util.py:144] Unresolved object in checkpoint: (root).optimizer.base_optimizer.decay
W0331 06:29:57.399784 140110113683200 util.py:144] Unresolved object in checkpoint: (root).optimizer.base_optimizer.learning_rate
W0331 06:29:57.399827 140110113683200 util.py:144] Unresolved object in checkpoint: (root).optimizer.base_optimizer.iter
W0331 06:29:57.399862 140110113683200 util.py:144] Unresolved object in checkpoint: (root).optimizer.loss_scale.current_loss_scale
W0331 06:29:57.399907 140110113683200 util.py:144] Unresolved object in checkpoint: (root).optimizer.loss_scale.good_steps
W0331 06:29:57.399953 140110113683200 util.py:144] Unresolved object in checkpoint: (root).optimizer.base_optimizer's state 'm' for (root).layer_with_weights-1.gamma
W0331 06:29:57.399995 140110113683200 util.py:144] Unresolved object in checkpoint: (root).optimizer.base_optimizer's state 'm' for (root).layer_with_weights-1.beta
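Note that the prediction itself succeeded: the printed list contains an age estimate (~28.8), a 12-way distribution, and a two-class probability. The trailing "Unresolved object in checkpoint" warnings usually just mean the checkpoint also stores optimizer state (Adam slots, loss-scale counters) that an inference run never restores. If that noise is unwanted, TensorFlow's restore status can acknowledge the partial load; a minimal sketch, where build_net() is a hypothetical stand-in for however the repo constructs its Keras model:

```python
model = build_net()  # hypothetical constructor, not the repo's actual API

# load_weights on a TF-format checkpoint returns a restore-status object;
# expect_partial() declares that skipping the stored optimizer variables
# is intentional, which suppresses the "Unresolved object" warnings.
status = model.load_weights("./model/c3ae_model_v2_fp16_white_se_132_4.208622-0.973")
status.expect_partial()
```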