BVLC / caffe

Caffe: a fast open framework for deep learning.
http://caffe.berkeleyvision.org/

train examples/mnist/mnist_autoencoder.prototxt gets an error #5795

Closed jdingjie closed 6 years ago

jdingjie commented 7 years ago

I trained the examples/mnist/mnist_autoencoder.prototxt net according to the README.md, but I get the error below. I don't know what happened; can anyone help? Thank you very much!

omnisky@omnisky:~/caffe-caffe-0.15$ ./build/tools/caffe train \
    --solver=examples/mnist/mnist_autoencoder_solver.prototxt
I0724 11:16:26.805635 17674 caffe.cpp:197] Using GPUs 0
I0724 11:16:26.806450 17674 caffe.cpp:202] GPU 0: Tesla P40
I0724 11:16:27.334847 17674 solver.cpp:48] Initializing solver from parameters:
test_iter: 500
test_iter: 100
test_interval: 500
base_lr: 0.01
display: 100
max_iter: 65000
lr_policy: "step"
gamma: 0.1
momentum: 0.9
weight_decay: 0.0005
stepsize: 10000
snapshot: 10000
snapshot_prefix: "examples/mnist/mnist_autoencoder"
solver_mode: GPU
device_id: 0
test_compute_loss: true
net: "examples/mnist/mnist_autoencoder.prototxt"
test_state { stage: "test-on-train" }
test_state { stage: "test-on-test" }
I0724 11:16:27.335038 17674 solver.cpp:91] Creating training net from net file: examples/mnist/mnist_autoencoder.prototxt
I0724 11:16:27.335337 17674 net.cpp:323] The NetState phase (0) differed from the phase (1) specified by a rule in layer data
I0724 11:16:27.335350 17674 net.cpp:323] The NetState phase (0) differed from the phase (1) specified by a rule in layer data
I0724 11:16:27.335516 17674 net.cpp:52] Initializing net from parameters:
name: "MNISTAutoencoder"
state { phase: TRAIN }
layer { name: "data" type: "Data" top: "data" include { phase: TRAIN } transform_param { scale: 0.0039215684 } data_param { source: "examples/mnist/mnist_train_lmdb" batch_size: 100 backend: LMDB } }
layer { name: "flatdata" type: "Flatten" bottom: "data" top: "flatdata" }
layer { name: "encode1" type: "InnerProduct" bottom: "data" top: "encode1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 1 decay_mult: 0 } inner_product_param { num_output: 1000 weight_filler { type: "gaussian" std: 1 sparse: 15 } bias_filler { type: "constant" value: 0 } } }
layer { name: "encode1neuron" type: "Sigmoid" bottom: "encode1" top: "encode1neuron" }
layer { name: "encode2" type: "InnerProduct" bottom: "encode1neuron" top: "encode2" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 1 decay_mult: 0 } inner_product_param { num_output: 500 weight_filler { type: "gaussian" std: 1 sparse: 15 } bias_filler { type: "constant" value: 0 } } }
layer { name: "encode2neuron" type: "Sigmoid" bottom: "encode2" top: "encode2neuron" }
layer { name: "encode3" type: "InnerProduct" bottom: "encode2neuron" top: "encode3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 1 decay_mult: 0 } inner_product_param { num_output: 250 weight_filler { type: "gaussian" std: 1 sparse: 15 } bias_filler { type: "constant" value: 0 } } }
layer { name: "encode3neuron" type: "Sigmoid" bottom: "encode3" top: "encode3neuron" }
layer { name: "encode4" type: "InnerProduct" bottom: "encode3neuron" top: "encode4" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 1 decay_mult: 0 } inner_product_param { num_output: 30 weight_filler { type: "gaussian" std: 1 sparse: 15 } bias_filler { type: "constant" value: 0 } } }
layer { name: "decode4" type: "InnerProduct" bottom: "encode4" top: "decode4" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 1 decay_mult: 0 } inner_product_param { num_output: 250 weight_filler { type: "gaussian" std: 1 sparse: 15 } bias_filler { type: "constant" value: 0 } } }
layer { name: "decode4neuron" type: "Sigmoid" bottom: "decode4" top: "decode4neuron" }
layer { name: "decode3" type: "InnerProduct" bottom: "decode4neuron" top: "decode3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 1 decay_mult: 0 } inner_product_param { num_output: 500 weight_filler { type: "gaussian" std: 1 sparse: 15 } bias_filler { type: "constant" value: 0 } } }
layer { name: "decode3neuron" type: "Sigmoid" bottom: "decode3" top: "decode3neuron" }
layer { name: "decode2" type: "InnerProduct" bottom: "decode3neuron" top: "decode2" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 1 decay_mult: 0 } inner_product_param { num_output: 1000 weight_filler { type: "gaussian" std: 1 sparse: 15 } bias_filler { type: "constant" value: 0 } } }
layer { name: "decode2neuron" type: "Sigmoid" bottom: "decode2" top: "decode2neuron" }
layer { name: "decode1" type: "InnerProduct" bottom: "decode2neuron" top: "decode1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 1 decay_mult: 0 } inner_product_param { num_output: 784 weight_filler { type: "gaussian" std: 1 sparse: 15 } bias_filler { type: "constant" value: 0 } } }
layer { name: "loss" type: "SigmoidCrossEntropyLoss" bottom: "decode1" bottom: "flatdata" top: "cross_entropy_loss" loss_weight: 1 }
layer { name: "decode1neuron" type: "Sigmoid" bottom: "decode1" top: "decode1neuron" }
layer { name: "loss" type: "EuclideanLoss" bottom: "decode1neuron" bottom: "flatdata" top: "l2_error" loss_weight: 0 }
I0724 11:16:27.335829 17674 layer_factory.hpp:77] Creating layer data
I0724 11:16:27.337556 17674 net.cpp:94] Creating Layer data
I0724 11:16:27.337611 17674 net.cpp:409] data -> data
I0724 11:16:27.340441 17682 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_train_lmdb
I0724 11:16:27.342648 17674 data_layer.cpp:78] ReshapePrefetch 100, 1, 28, 28
I0724 11:16:27.342761 17674 data_layer.cpp:83] output data size: 100,1,28,28
I0724 11:16:27.347226 17674 net.cpp:144] Setting up data
I0724 11:16:27.347262 17674 net.cpp:151] Top shape: 100 1 28 28 (78400)
I0724 11:16:27.347271 17674 net.cpp:159] Memory required for data: 313600
I0724 11:16:27.347290 17674 layer_factory.hpp:77] Creating layer data_data_0_split
I0724 11:16:27.347311 17674 net.cpp:94] Creating Layer data_data_0_split
I0724 11:16:27.347324 17674 net.cpp:435] data_data_0_split <- data
I0724 11:16:27.347342 17674 net.cpp:409] data_data_0_split -> data_data_0_split_0
I0724 11:16:27.347363 17674 net.cpp:409] data_data_0_split -> data_data_0_split_1
I0724 11:16:27.347609 17674 net.cpp:144] Setting up data_data_0_split
I0724 11:16:27.347647 17674 net.cpp:151] Top shape: 100 1 28 28 (78400)
I0724 11:16:27.347662 17674 net.cpp:151] Top shape: 100 1 28 28 (78400)
I0724 11:16:27.347671 17674 net.cpp:159] Memory required for data: 940800
I0724 11:16:27.347683 17674 layer_factory.hpp:77] Creating layer flatdata
I0724 11:16:27.347709 17674 net.cpp:94] Creating Layer flatdata
I0724 11:16:27.347721 17674 net.cpp:435] flatdata <- data_data_0_split_0
I0724 11:16:27.347745 17674 net.cpp:409] flatdata -> flatdata
I0724 11:16:27.347805 17674 net.cpp:144] Setting up flatdata
I0724 11:16:27.347820 17674 net.cpp:151] Top shape: 100 784 (78400)
I0724 11:16:27.347827 17674 net.cpp:159] Memory required for data: 1254400
I0724 11:16:27.347837 17674 layer_factory.hpp:77] Creating layer flatdata_flatdata_0_split
I0724 11:16:27.347851 17674 net.cpp:94] Creating Layer flatdata_flatdata_0_split
I0724 11:16:27.347867 17674 net.cpp:435] flatdata_flatdata_0_split <- flatdata
I0724 11:16:27.347884 17674 net.cpp:409] flatdata_flatdata_0_split -> flatdata_flatdata_0_split_0
I0724 11:16:27.347906 17674 net.cpp:409] flatdata_flatdata_0_split -> flatdata_flatdata_0_split_1
I0724 11:16:27.347980 17674 net.cpp:144] Setting up flatdata_flatdata_0_split
I0724 11:16:27.347995 17674 net.cpp:151] Top shape: 100 784 (78400)
I0724 11:16:27.348004 17674 net.cpp:151] Top shape: 100 784 (78400)
I0724 11:16:27.348013 17674 net.cpp:159] Memory required for data: 1881600
I0724 11:16:27.348023 17674 layer_factory.hpp:77] Creating layer encode1
I0724 11:16:27.348048 17674 net.cpp:94] Creating Layer encode1
I0724 11:16:27.348059 17674 net.cpp:435] encode1 <- data_data_0_split_1
I0724 11:16:27.348074 17674 net.cpp:409] encode1 -> encode1
F0724 11:16:27.352948 17695 blob.hpp:210] Check failed: data
*** Check failure stack trace: ***
    @ 0x7f704d9175cd google::LogMessage::Fail()
    @ 0x7f704d919433 google::LogMessage::SendToLog()
    @ 0x7f704d91715b google::LogMessage::Flush()
    @ 0x7f704d919e1e google::LogMessageFatal::~LogMessageFatal()
    @ 0x7f704e00c875 caffe::BasePrefetchingDataLayer<>::InternalThreadEntry()
    @ 0x7f704dface65 caffe::InternalThread::entry()
    @ 0x7f7043e415d5 (unknown)
    @ 0x7f703d0e06ba start_thread
    @ 0x7f704c4b282d clone
    @ (nil) (unknown)
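Note that the fatal check (`blob.hpp:210] Check failed: data`) fires inside the data prefetch thread (`caffe::BasePrefetchingDataLayer<>::InternalThreadEntry()` in the stack trace), i.e. while reading a batch from `examples/mnist/mnist_train_lmdb`, not while building the net. One common cause is a missing or incompletely generated training LMDB. A minimal sketch of how to check for that and regenerate it, assuming the standard Caffe source layout (`check_lmdb` is a hypothetical helper; the two regeneration scripts are the ones shipped with Caffe):

```shell
# Hypothetical helper: an LMDB dataset is a directory containing a
# non-empty data.mdb file; report whether it looks usable.
check_lmdb() {
  if [ -s "$1/data.mdb" ]; then echo "found"; else echo "missing"; fi
}

# Run from the Caffe root. If the training database is absent,
# print the standard regeneration steps (both scripts ship with Caffe).
if [ "$(check_lmdb examples/mnist/mnist_train_lmdb)" = "missing" ]; then
  echo "mnist_train_lmdb not found -- regenerate it with:"
  echo "  ./data/mnist/get_mnist.sh        # download the raw MNIST files"
  echo "  ./examples/mnist/create_mnist.sh # convert them to LMDB"
fi
```

After regenerating, rerun the same `caffe train` command as above.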

murrayliu commented 6 years ago

Have you solved the problem yet? I got a similar issue :(

Noiredd commented 6 years ago

Please post usage, installation, or modeling questions, or other requests for help to the caffe-users list instead of Issues. This helps developers maintain a clear, uncluttered, and efficient view of the state of Caffe. Please read the guidelines for contributing before submitting an issue or a pull request.

When posting to the list, please check whether the source database (examples/mnist/mnist_train_lmdb) exists and is accessible. Also make sure you're using the newest release of Caffe, so that old, already-fixed bugs are not an issue here.
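The database check suggested above can be scripted. A sketch assuming the standard example paths under the Caffe root (the `check_mnist_dbs` helper is hypothetical, not part of Caffe):

```shell
# Hypothetical helper: report the state of both MNIST LMDBs under a
# given Caffe root. An LMDB is a directory holding a non-empty data.mdb.
check_mnist_dbs() {
  local root="$1" db
  for db in mnist_train_lmdb mnist_test_lmdb; do
    if [ -s "$root/examples/mnist/$db/data.mdb" ]; then
      echo "$db: ok"
    else
      echo "$db: missing or empty"
    fi
  done
}

# Usage: run from (or point at) the Caffe checkout.
check_mnist_dbs .
```

If either database reports "missing or empty", regenerate both with the `data/mnist/get_mnist.sh` and `examples/mnist/create_mnist.sh` scripts before training.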