MiyazawaKohei opened this issue 4 years ago (status: Open)

The accuracy does not increase in the MNIST example, while the loss decreases. How can we fix this?

The training log:
```
2019-07-27 08:46:04.766 [run-main-0] INFO MNIST Data Loader - Extracting images from file 'datasets/MNIST/train-images-idx3-ubyte.gz'.
2019-07-27 08:46:06.639 [run-main-0] INFO MNIST Data Loader - Extracting labels from file 'datasets/MNIST/train-labels-idx1-ubyte.gz'.
2019-07-27 08:46:06.660 [run-main-0] INFO MNIST Data Loader - Extracting images from file 'datasets/MNIST/t10k-images-idx3-ubyte.gz'.
2019-07-27 08:46:06.830 [run-main-0] INFO MNIST Data Loader - Extracting labels from file 'datasets/MNIST/t10k-labels-idx1-ubyte.gz'.
2019-07-27 08:46:06.837 [run-main-0] INFO MNIST Data Loader - Finished loading the MNIST dataset.
2019-07-27 08:46:07.306 [run-main-0] INFO Examples / MNIST - Building the logistic regression model.
2019-07-27 08:46:07.510 [run-main-0] INFO Examples / MNIST - Training the linear regression model.
2019-07-27 08:46:08.028791: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2
2019-07-27 08:46:16.223 [run-main-0] INFO Learn / Hooks / Checkpoint Saver - Saving checkpoint for step 0.
2019-07-27 08:46:16.234 [run-main-0] INFO Variables / Saver - Saving parameters to '/home/miyazawakohei/tensorflow-scala-0721/temp/mnist-mlp/model.ckpt-0'.
2019-07-27 08:46:17.021 [run-main-0] INFO Variables / Saver - Saved parameters to '/home/miyazawakohei/tensorflow-scala-0721/temp/mnist-mlp/model.ckpt-0'.
2019-07-27 08:46:17.032 [run-main-0] INFO Learn / Hooks / Loss Logger - ( N/A ) Step: 0, Loss: 496006.0625
2019-07-27 08:46:17.249 [run-main-0] INFO Variables / Saver - Restoring parameters from '/home/miyazawakohei/tensorflow-scala-0721/temp/mnist-mlp/model.ckpt-0'.
2019-07-27 08:46:17.471 [run-main-0] INFO Learn / Hooks / Evaluation - Step 0 Evaluator:
2019-07-27 08:46:17.479 [run-main-0] INFO Learn / Hooks / Evaluation - ╔═══════╤════════════╗
2019-07-27 08:46:17.479 [run-main-0] INFO Learn / Hooks / Evaluation - ║       │   Accuracy ║
2019-07-27 08:46:17.480 [run-main-0] INFO Learn / Hooks / Evaluation - ╟───────┼────────────╢
2019-07-27 08:46:21.569 [run-main-0] INFO Learn / Hooks / Evaluation - ║ Train │     0.0848 ║
2019-07-27 08:46:22.327 [run-main-0] INFO Learn / Hooks / Evaluation - ║ Test  │     0.0805 ║
2019-07-27 08:46:22.336 [run-main-0] INFO Learn / Hooks / Evaluation - ╚═══════╧════════════╝
2019-07-27 08:46:25.165 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 8.134 s) Step: 100, Loss: 4524.1426
2019-07-27 08:46:27.622 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 2.456 s) Step: 200, Loss: 1435.9475
2019-07-27 08:46:29.908 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 2.287 s) Step: 300, Loss: 752.1824
2019-07-27 08:46:32.760 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 2.852 s) Step: 400, Loss: 335.2845
2019-07-27 08:46:35.205 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 2.445 s) Step: 500, Loss: 298.3430
2019-07-27 08:46:37.642 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 2.437 s) Step: 600, Loss: 256.3721
2019-07-27 08:46:39.741 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 2.098 s) Step: 700, Loss: 141.6279
2019-07-27 08:46:41.965 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 2.224 s) Step: 800, Loss: 50.8881
2019-07-27 08:46:44.184 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 2.220 s) Step: 900, Loss: 19.6008
2019-07-27 08:46:46.435 [run-main-0] INFO Learn / Hooks / Checkpoint Saver - Saving checkpoint for step 1000.
2019-07-27 08:46:46.436 [run-main-0] INFO Variables / Saver - Saving parameters to '/home/miyazawakohei/tensorflow-scala-0721/temp/mnist-mlp/model.ckpt-1000'.
2019-07-27 08:46:46.870 [run-main-0] INFO Variables / Saver - Saved parameters to '/home/miyazawakohei/tensorflow-scala-0721/temp/mnist-mlp/model.ckpt-1000'.
2019-07-27 08:46:46.880 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 2.696 s) Step: 1000, Loss: 63.2028
2019-07-27 08:46:46.887 [run-main-0] INFO Variables / Saver - Restoring parameters from '/home/miyazawakohei/tensorflow-scala-0721/temp/mnist-mlp/model.ckpt-1000'.
2019-07-27 08:46:47.079 [run-main-0] INFO Learn / Hooks / Evaluation - Step 1000 Evaluator:
2019-07-27 08:46:47.080 [run-main-0] INFO Learn / Hooks / Evaluation - ╔═══════╤════════════╗
2019-07-27 08:46:47.080 [run-main-0] INFO Learn / Hooks / Evaluation - ║       │   Accuracy ║
2019-07-27 08:46:47.080 [run-main-0] INFO Learn / Hooks / Evaluation - ╟───────┼────────────╢
2019-07-27 08:46:50.503 [run-main-0] INFO Learn / Hooks / Evaluation - ║ Train │     0.0848 ║
2019-07-27 08:46:51.176 [run-main-0] INFO Learn / Hooks / Evaluation - ║ Test  │     0.0805 ║
2019-07-27 08:46:51.182 [run-main-0] INFO Learn / Hooks / Evaluation - ╚═══════╧════════════╝
2019-07-27 08:46:53.333 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 6.453 s) Step: 1100, Loss: 89.3396
2019-07-27 08:46:55.671 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 2.338 s) Step: 1200, Loss: 49.0971
2019-07-27 08:46:57.945 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 2.274 s) Step: 1300, Loss: 60.7999
2019-07-27 08:47:00.081 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 2.136 s) Step: 1400, Loss: 23.6350
2019-07-27 08:47:02.289 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 2.208 s) Step: 1500, Loss: 20.9953
2019-07-27 08:47:04.474 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 2.184 s) Step: 1600, Loss: 12.6117
2019-07-27 08:47:06.740 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 2.267 s) Step: 1700, Loss: 12.0621
2019-07-27 08:47:09.053 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 2.313 s) Step: 1800, Loss: 14.9911
2019-07-27 08:47:11.742 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 2.689 s) Step: 1900, Loss: 8.6479
2019-07-27 08:47:13.968 [run-main-0] INFO Learn / Hooks / Checkpoint Saver - Saving checkpoint for step 2000.
2019-07-27 08:47:13.968 [run-main-0] INFO Variables / Saver - Saving parameters to '/home/miyazawakohei/tensorflow-scala-0721/temp/mnist-mlp/model.ckpt-2000'.
2019-07-27 08:47:14.370 [run-main-0] INFO Variables / Saver - Saved parameters to '/home/miyazawakohei/tensorflow-scala-0721/temp/mnist-mlp/model.ckpt-2000'.
2019-07-27 08:47:14.375 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 2.633 s) Step: 2000, Loss: 6.1439
2019-07-27 08:47:14.377 [run-main-0] INFO Variables / Saver - Restoring parameters from '/home/miyazawakohei/tensorflow-scala-0721/temp/mnist-mlp/model.ckpt-2000'.
2019-07-27 08:47:14.600 [run-main-0] INFO Learn / Hooks / Evaluation - Step 2000 Evaluator:
2019-07-27 08:47:14.602 [run-main-0] INFO Learn / Hooks / Evaluation - ╔═══════╤════════════╗
2019-07-27 08:47:14.603 [run-main-0] INFO Learn / Hooks / Evaluation - ║       │   Accuracy ║
2019-07-27 08:47:14.603 [run-main-0] INFO Learn / Hooks / Evaluation - ╟───────┼────────────╢
2019-07-27 08:47:20.007 [run-main-0] INFO Learn / Hooks / Evaluation - ║ Train │     0.0848 ║
2019-07-27 08:47:20.975 [run-main-0] INFO Learn / Hooks / Evaluation - ║ Test  │     0.0805 ║
2019-07-27 08:47:20.988 [run-main-0] INFO Learn / Hooks / Evaluation - ╚═══════╧════════════╝
2019-07-27 08:47:22.912 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 8.537 s) Step: 2100, Loss: 8.0373
2019-07-27 08:47:25.258 [run-main-0] INFO Learn / Hooks / Loss Logger - ( 2.346 s) Step: 2200, Loss: 4.5865
...
```
The Scala code:
```scala
import org.platanios.tensorflow.api._
import org.platanios.tensorflow.api.core.types.UByte
import org.platanios.tensorflow.api.implicits.helpers.{OutputStructure, OutputToDataType, OutputToShape}
import org.platanios.tensorflow.api.learn.ClipGradientsByGlobalNorm
import org.platanios.tensorflow.api.ops.Output
import org.platanios.tensorflow.data.image.MNISTLoader
//import org.platanios.tensorflow.examples
import com.typesafe.scalalogging.Logger
import org.slf4j.LoggerFactory

import java.nio.file.Paths

/**
  * @author Emmanouil Antonios Platanios
  */
object MNIST {
  private val logger = Logger(LoggerFactory.getLogger("Examples / MNIST"))

  // Implicit helpers for Scala 2.11.
  //implicit val evOutputStructureFloatLong : OutputStructure[(Output[Float], Output[Long])] = examples.evOutputStructureFloatLong
  //implicit val evOutputToDataTypeFloatLong: OutputToDataType[(Output[Float], Output[Long])] = examples.evOutputToDataTypeFloatLong
  //implicit val evOutputToShapeFloatLong   : OutputToShape[(Output[Float], Output[Long])] = examples.evOutputToShapeFloatLong

  def main(args: Array[String]): Unit = {
    val dataSet = MNISTLoader.load(Paths.get("datasets/MNIST"))
    val trainImages = tf.data.datasetFromTensorSlices(dataSet.trainImages).map(_.toFloat)
    val trainLabels = tf.data.datasetFromTensorSlices(dataSet.trainLabels).map(_.toLong)
    val testImages = tf.data.datasetFromTensorSlices(dataSet.testImages).map(_.toFloat)
    val testLabels = tf.data.datasetFromTensorSlices(dataSet.testLabels).map(_.toLong)
    val trainData =
      trainImages.zip(trainLabels)
        .repeat()
        .shuffle(10000)
        .batch(256)
        .prefetch(10)
    val evalTrainData = trainImages.zip(trainLabels).batch(1000).prefetch(10)
    val evalTestData = testImages.zip(testLabels).batch(1000).prefetch(10)

    logger.info("Building the logistic regression model.")
    val input = tf.learn.Input(FLOAT32, Shape(-1, dataSet.trainImages.shape(1), dataSet.trainImages.shape(2)))
    val trainInput = tf.learn.Input(INT64, Shape(-1))
    val layer =
      tf.learn.Flatten[Float]("Input/Flatten") >>
        tf.learn.Linear[Float]("Layer_0/Linear", 128) >>
        tf.learn.ReLU[Float]("Layer_0/ReLU", 0.1f) >>
        tf.learn.Linear[Float]("Layer_1/Linear", 64) >>
        tf.learn.ReLU[Float]("Layer_1/ReLU", 0.1f) >>
        tf.learn.Linear[Float]("Layer_2/Linear", 32) >>
        tf.learn.ReLU[Float]("Layer_2/ReLU", 0.1f) >>
        tf.learn.Linear[Float]("OutputLayer/Linear", 10)
    val loss =
      tf.learn.SparseSoftmaxCrossEntropy[Float, Long, Float]("Loss/CrossEntropy") >>
        tf.learn.Mean[Float]("Loss/Mean") >>
        tf.learn.ScalarSummary[Float]("Loss/Summary", "Loss")
    val optimizer = tf.train.YellowFin()
    val model = tf.learn.Model.simpleSupervised(
      input = input,
      trainInput = trainInput,
      layer = layer,
      loss = loss,
      optimizer = optimizer,
      clipGradients = ClipGradientsByGlobalNorm(5.0f))

    logger.info("Training the linear regression model.")
    val summariesDir = Paths.get("temp/mnist-mlp")
    val accMetric = tf.metrics.MapMetric(
      (v: (Output[Float], (Output[Float], Output[Int]))) => {
        (tf.argmax(v._1, -1, INT64).toFloat, v._2._2.toFloat)
      }, tf.metrics.Accuracy("Accuracy"))
    val estimator = tf.learn.InMemoryEstimator(
      model,
      tf.learn.Configuration(Some(summariesDir)),
      tf.learn.StopCriteria(maxSteps = Some(100000)),
      Set(
        tf.learn.LossLogger(trigger = tf.learn.StepHookTrigger(100)),
        tf.learn.Evaluator(
          log = true,
          datasets = Seq(("Train", () => evalTrainData), ("Test", () => evalTestData)),
          metrics = Seq(accMetric),
          trigger = tf.learn.StepHookTrigger(1000),
          name = "Evaluator"),
        tf.learn.StepRateLogger(log = false, summaryDir = summariesDir, trigger = tf.learn.StepHookTrigger(100)),
        tf.learn.SummarySaver(summariesDir, tf.learn.StepHookTrigger(100)),
        tf.learn.CheckpointSaver(summariesDir, tf.learn.StepHookTrigger(1000))),
      tensorBoardConfig = tf.learn.TensorBoardConfig(summariesDir, reloadInterval = 1))
    estimator.train(() => trainData, tf.learn.StopCriteria(maxSteps = Some(10000)))

    def accuracy(images: Tensor[UByte], labels: Tensor[UByte]): Float = {
      val predictions = estimator.infer(() => images.toFloat)
      predictions
        .argmax(1).toUByte
        .equal(labels).toFloat
        .mean().scalar
    }

    logger.info(s"Train accuracy = ${accuracy(dataSet.trainImages, dataSet.trainLabels)}")
    logger.info(s"Test accuracy = ${accuracy(dataSet.testImages, dataSet.testLabels)}")
  }
}
```
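Since the loss clearly decreases while the evaluator's accuracy stays frozen at its step-0 value, one way to narrow the problem down is to recompute accuracy outside the `Evaluator` hook (as the `accuracy` helper at the end of `main` does) and compare the two numbers. For reference, here is a minimal, dependency-free sketch of what that metric is meant to compute: the argmax over each row of logits compared against the integer labels. The names `logits` and `labels` are illustrative, not part of the example above; if a check like this on inferred predictions disagrees with the evaluator's output, the issue likely lies in the metric wiring (e.g. the tuple types fed to `MapMetric`) rather than in training itself.

```scala
// Dependency-free sketch of row-wise argmax accuracy, the quantity the
// evaluator's Accuracy metric is intended to report.
object AccuracyCheck {
  // Index of the largest value in one row of logits.
  def argmax(row: Array[Float]): Int =
    row.indices.maxBy(i => row(i))

  // Fraction of rows whose argmax matches the corresponding label.
  def accuracy(logits: Array[Array[Float]], labels: Array[Int]): Float = {
    val correct = logits.zip(labels).count { case (row, label) =>
      argmax(row) == label
    }
    correct.toFloat / labels.length
  }
}
```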