NanoNets / number-plate-detection

Automatic License Plate Reader using tensorflow attention OCR

tensorflow training loss increasing #7

Open prasetyoandi opened 4 years ago

prasetyoandi commented 4 years ago

Why does the training loss increase so sharply after only a few seconds of training?

It goes from:

```
INFO:tensorflow:global step 0: loss = 37.9278 (9.969 sec/step)
INFO:tensorflow:global step 1: loss = 38.2249 (0.345 sec/step)
INFO:tensorflow:global step 2: loss = 38.1697 (0.359 sec/step)
INFO:tensorflow:global step 3: loss = 37.3754 (0.355 sec/step)
INFO:tensorflow:global step 4: loss = 36.9365 (0.374 sec/step)
INFO:tensorflow:global step 5: loss = 36.6026 (0.367 sec/step)
INFO:tensorflow:global step 6: loss = 36.3448 (0.374 sec/step)
INFO:tensorflow:global step 7: loss = 35.3725 (0.370 sec/step)
```

to:

```
INFO:tensorflow:global step 772: loss = 2422.3994 (0.362 sec/step)
INFO:tensorflow:global step 773: loss = 2591.4863 (0.362 sec/step)
INFO:tensorflow:global step 774: loss = 2726.2822 (0.361 sec/step)
INFO:tensorflow:global step 775: loss = 2463.8806 (0.358 sec/step)
INFO:tensorflow:global step 776: loss = 1594.3793 (0.358 sec/step)
INFO:tensorflow:global step 777: loss = 1444.8801 (0.359 sec/step)
```
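A loss that shrinks for a few steps and then blows up by two orders of magnitude usually points to diverging optimization (learning rate too high or exploding gradients) rather than a data problem. Two common mitigations are lowering the learning rate and clipping gradients by their global norm. The sketch below is not from this repo; it is a minimal NumPy illustration of the clip-by-global-norm rule (the same rule TensorFlow's `tf.clip_by_global_norm` implements), assuming a list of per-variable gradient arrays:

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    """Scale all gradients down so their combined L2 norm is at most clip_norm."""
    # Global norm: L2 norm over every element of every gradient array.
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    # If global_norm <= clip_norm, scale is 1.0 and gradients pass through unchanged.
    scale = clip_norm / max(global_norm, clip_norm)
    return [g * scale for g in grads], global_norm

# Hypothetical gradients for two variables; global norm = sqrt(9 + 16 + 144) = 13.
grads = [np.array([3.0, 4.0]), np.array([0.0, 12.0])]
clipped, norm = clip_by_global_norm(grads, clip_norm=5.0)
# norm is 13.0; the clipped gradients now have global norm 5.0.
```

If the training script in this repo exposes a gradient-clipping or learning-rate flag, tightening those is worth trying before changing the data or epoch count.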

Sid-boi commented 4 years ago

I think your model is underfitting; please try training with fewer epochs.