Closed BestSonny closed 9 years ago
Is anyone still having this issue?
I also have this problem. Have you solved it?
The solution is to read the error message -- it says that a string crosses a line boundary on line 20. Most likely a closing quote is missing from that line. For example, if you look at the prototxt above, you'll see that the line

source: "levelDB/icdar_test_lmdb

is missing its closing quote.
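To illustrate, the second data layer's data_param block with the quote restored would look like this (paths and values taken from the prototxt above):

```prototxt
data_param {
  source: "levelDB/icdar_test_lmdb"  # closing quote added here
  backend: LMDB
  batch_size: 50
}
```

With the string terminated on the same line, the text-format parser no longer reports "String literals cannot cross line boundaries".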
I also have a similar problem. I use Windows 7 as the platform to compile Caffe, but when I run the command
Build\x64\Release\caffe.exe train --solver=examples/mnist/lenet_solver.prototxt pause
it hits the error shown in the picture below. Can someone give me tips on a solution?
This is my first time working with the Caffe framework; I plan to use it to understand deep learning algorithms.
Do the fix the error message suggests: "caffe.LayerParameter" has no field named "layer", so change "layer" to the format your Caffe build expects.
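For reference, the difference is usually between the old and new prototxt layer syntax -- an older Caffe build only understands the old form. A minimal sketch of the two (the "ip1" layer here is a hypothetical example, not from the net above):

```prototxt
# Old format: repeated "layers" field, enum layer types
layers {
  name: "ip1"
  type: INNER_PRODUCT
  bottom: "data"
  top: "ip1"
}

# New format: repeated "layer" field, string layer types
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "data"
  top: "ip1"
}
```

Use whichever form matches your Caffe version; newer builds can also upgrade old prototxt files automatically.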
Oh, how do I solve it?
When I type the command "../caffe/build/tools/caffe train --solver=solver.prototxt", I just get the errors below.
libprotobuf ERROR google/protobuf/text_format.cc:172] Error parsing text-format caffe.NetParameter: 20:37: String literals cannot cross line boundaries.
F1014 20:49:10.959270 5557 upgrade_proto.cpp:623] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: train_val.prototxt
Am I doing something wrong? My net looks like this:
name: "MyNet"
layers {
  name: "data"
  type: DATA
  top: "data"
  top: "label"
  data_param {
    source: "levelDB/icdar_train_lmdb"
    backend: LMDB
    batch_size: 256
  }
  include: { phase: TRAIN }
}
layers {
  name: "data"
  type: DATA
  top: "data"
  top: "label"
  data_param {
    source: "levelDB/icdar_test_lmdb
    backend: LMDB
    batch_size: 50
  }
  include: { phase: TEST }
}
layers {
  name: "conv1"
  type: CONVOLUTION
  bottom: "data"
  top: "conv1"
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  convolution_param {
    num_output: 48
    kernel_size: 9
    stride: 1
  }
}
layers {
  name: "drop1"
  type: DROPOUT
  bottom: "conv1"
  top: "conv1"
  dropout_param { dropout_ratio: 1 }
}
layers {
  name: "conv2"
  type: CONVOLUTION
  bottom: "conv1"
  top: "conv2"
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  convolution_param {
    num_output: 64
    kernel_size: 9
    stride: 1
  }
}
layers {
  name: "drop2"
  type: DROPOUT
  bottom: "conv2"
  top: "conv2"
  dropout_param { dropout_ratio: 0.5 }
}
layers {
  name: "convCaseInsensive"
  type: CONVOLUTION
  bottom: "conv2"
  top: "convCaseInsensive"
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  convolution_param {
    num_output: 128
    kernel_size: 8
    stride: 1
  }
}
layers {
  name: "drop3"
  type: DROPOUT
  bottom: "convCaseInsensive"
  top: "convCaseInsensive"
  dropout_param { dropout_ratio: 0.5 }
}
layers {
  name: "convCaseInsensiveSecond"
  type: CONVOLUTION
  bottom: "convCaseInsensive"
  top: "convCaseInsensiveSecond"
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  convolution_param {
    num_output: 37
    kernel_size: 1
    stride: 1
  }
}
layers {
  name: "drop4"
  type: DROPOUT
  bottom: "convCaseInsensiveSecond"
  top: "convCaseInsensiveSecond"
  dropout_param { dropout_ratio: 0.5 }
}
layers {
  name: "accuracy"
  type: ACCURACY
  bottom: "convCaseInsensiveSecond"
  bottom: "label"
  top: "accuracy"
  include: { phase: TEST }
}
layers {
  name: "loss"
  type: SOFTMAX_LOSS
  bottom: "convCaseInsensiveSecond"
  bottom: "label"
  top: "loss"
}