maohule closed this issue 7 years ago
If you are not using the default version, you might see this error because `multi_stage_meanfield_param` has been changed to `multi_stage_meanfield_param_str`. Check the relevant post.
@bittnt Thank you for your fast response, but I still cannot solve my problem. I checked my caffe.proto in train_CRF_RNN_NEW/caffe/src/caffe/proto/ and found that `multi_stage_meanfield_param` does exist in `LayerParameter`, as shown below. I have another question about running solve.py: I cannot find where it uses caffe/build/tools/caffe to train the model, but TVG_CRFRNN.sh in caffe/examples/segmentationcrfasrnn/ has `TOOLS=../../../caffe/build/tools`. Is this relevant to my problem? I am new to this field; can you give me some more details? Thanks.

```protobuf
message LayerParameter {
  optional string name = 1; // the layer name
  optional string type = 2; // the layer type
  repeated string bottom = 3; // the name of each bottom blob
  repeated string top = 4; // the name of each top blob

  // The train / test phase for computation.
  optional Phase phase = 10;

  // The amount of weight to assign each top blob in the objective.
  // Each layer assigns a default value, usually of either 0 or 1,
  // to each top blob.
  repeated float loss_weight = 5;

  // Specifies training parameters (multipliers on global learning constants,
  // and the name and other settings used for weight sharing).
  repeated ParamSpec param = 6;

  // The blobs containing the numeric parameters of the layer.
  repeated BlobProto blobs = 7;

  // Specifies on which bottoms the backpropagation should be skipped.
  // The size must be either 0 or equal to the number of bottoms.
  repeated bool propagate_down = 11;

  // Rules controlling whether and when a layer is included in the network,
  // based on the current NetState. You may specify a non-zero number of rules
  // to include OR exclude, but not both. If no include or exclude rules are
  // specified, the layer is always included. If the current NetState meets
  // ANY (i.e., one or more) of the specified rules, the layer is
  // included/excluded.
  repeated NetStateRule include = 8;
  repeated NetStateRule exclude = 9;

  // Parameters for data pre-processing.
  optional TransformationParameter transform_param = 100;

  // Parameters shared by loss layers.
  optional LossParameter loss_param = 101;

  // Layer type-specific parameters.
  //
  // Note: certain layers may have more than one computational engine
  // for their implementation. These layers include an Engine type and
  // engine parameter for selecting the implementation.
  // The default for the engine is set by the ENGINE switch at compile-time.
  optional AccuracyParameter accuracy_param = 102;
  optional ArgMaxParameter argmax_param = 103;
  optional ConcatParameter concat_param = 104;
  optional ContrastiveLossParameter contrastive_loss_param = 105;
  optional ConvolutionParameter convolution_param = 106;
  optional DataParameter data_param = 107;
  optional DropoutParameter dropout_param = 108;
  optional DummyDataParameter dummy_data_param = 109;
  optional EltwiseParameter eltwise_param = 110;
  optional EmbedParameter embed_param = 137;
  optional ExpParameter exp_param = 111;
  optional FlattenParameter flatten_param = 135;
  optional HDF5DataParameter hdf5_data_param = 112;
  optional HDF5OutputParameter hdf5_output_param = 113;
  optional HingeLossParameter hinge_loss_param = 114;
  optional ImageDataParameter image_data_param = 115;
  optional InfogainLossParameter infogain_loss_param = 116;
  optional InnerProductParameter inner_product_param = 117;
  optional LogParameter log_param = 134;
  optional LRNParameter lrn_param = 118;
  optional MemoryDataParameter memory_data_param = 119;
  optional MVNParameter mvn_param = 120;
  optional PoolingParameter pooling_param = 121;
  optional PowerParameter power_param = 122;
  optional PReLUParameter prelu_param = 131;
  optional PythonParameter python_param = 130;
  optional ReductionParameter reduction_param = 136;
  optional ReLUParameter relu_param = 123;
  optional ReshapeParameter reshape_param = 133;
  optional SigmoidParameter sigmoid_param = 124;
  optional SoftmaxParameter softmax_param = 125;
  optional SPPParameter spp_param = 132;
  optional SliceParameter slice_param = 126;
  optional TanHParameter tanh_param = 127;
  optional ThresholdParameter threshold_param = 128;
  optional TileParameter tile_param = 138;
  optional WindowDataParameter window_data_param = 129;
  optional MultiStageMeanfieldParameter multi_stage_meanfield_param = 146;
}
```
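On the `TOOLS` question: a shell script like TVG_CRFRNN.sh drives training through the caffe command-line tool, whereas solve.py drives it through the pycaffe Python interface, so solve.py never needs to reference caffe/build/tools. A hedged sketch of the kind of command such a script typically assembles (the solver filename here is an assumption, and the command is printed rather than run because the paths are placeholders):

```shell
# Illustrative only: how a script like TVG_CRFRNN.sh typically invokes the
# command-line caffe tool. Solver filename and paths are assumptions.
TOOLS=../../../caffe/build/tools
CMD="$TOOLS/caffe train --solver=TVG_CRFRNN_solver.prototxt"
echo "$CMD"   # print rather than execute, since the paths are placeholders
```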
Add these to the prototxt:

```
# This is part of FCN; coarse is the variable coming from FCN.
layer {
  type: 'Crop'
  name: 'crop'
  bottom: 'bigscore'
  bottom: 'data'
  top: 'coarse'
}

# This layer splits the output of FCN into two, as required by CRF-RNN.
layer {
  type: 'Split'
  name: 'splitting'
  bottom: 'coarse'
  top: 'unary'
  top: 'Q0'
}

layer {
  name: "inference1" # With name "inference1", the code loads parameters from the caffemodel; otherwise it creates a new layer with manually set parameters.
  type: "MultiStageMeanfield" # type of this layer
  bottom: "unary" # input from FCN
  bottom: "Q0"    # input from FCN
  bottom: "data"  # input image
  top: "pred"     # output of CRF-RNN
  param {
    lr_mult: 10000 # learning rate for W_G
  }
  param {
    lr_mult: 10000 # learning rate for W_B
  }
  param {
    lr_mult: 1000 # learning rate for the compatibility transform matrix
  }
  multi_stage_meanfield_param {
    num_iterations: 10 # number of mean-field iterations for CRF-RNN
    compatibility_mode: POTTS # initialize the compatibility transform matrix with a matrix whose diagonal is -1
    threshold: 2
    theta_alpha: 160
    theta_beta: 3
    theta_gamma: 3
    spatial_filter_weight: 3
    bilateral_filter_weight: 5
  }
}
```
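As a side note, the POTTS `compatibility_mode` comment can be made concrete: it initializes the label-compatibility transform as a matrix with -1 on the diagonal and 0 elsewhere, so identical labels at nearby pixels are rewarded and differing labels are penalized equally. A minimal Python sketch (the label count of 3 is illustrative):

```python
def potts_compatibility(num_labels):
    """Potts initialization: -1 on the diagonal, 0 off the diagonal."""
    return [[-1.0 if i == j else 0.0 for j in range(num_labels)]
            for i in range(num_labels)]

m = potts_compatibility(3)
# m[0] == [-1.0, 0.0, 0.0]
```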
@bittnt Thank you very much. It works now.
@bittnt When I train the model following martinkersner/train-CRF-RNN, I meet the following problem:

```
xupt@xupt:~/WL_project/train-CRF-RNN$ python solve.py 2>&1 | tee train.log
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0102 10:04:24.963816 21120 solver.cpp:48] Initializing solver from parameters:
test_iter: 261
test_interval: 1333
base_lr: 1e-13
display: 50
max_iter: 100000
lr_policy: "fixed"
momentum: 0.99
weight_decay: 0.0005
snapshot: 1000
snapshot_prefix: "/home/xupt/WL_project/train-CRF-RNN/train"
solver_mode: GPU
net: "TVG_CRFRNN_COCO_VOC_TRAIN_3_CLASSES.prototxt"
test_initialization: false
I0102 10:04:24.963953 21120 solver.cpp:91] Creating training net from net file: TVG_CRFRNN_COCO_VOC_TRAIN_3_CLASSES.prototxt
[libprotobuf ERROR google/protobuf/text_format.cc:274] Error parsing text-format caffe.NetParameter: 743:31: Message type "caffe.LayerParameter" has no field named "multi_stage_meanfield_param".
F0102 10:04:24.964298 21120 upgrade_proto.cpp:88] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: TVG_CRFRNN_COCO_VOC_TRAIN_3_CLASSES.prototxt
Check failure stack trace:
```

What's wrong with this? Thank you very much!
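This parse error usually means the caffe actually doing the parsing (the pycaffe build on your PYTHONPATH) was compiled from a caffe.proto that lacks the field, even if a different checkout's caffe.proto on disk contains it. A hedged, stdlib-only sketch for checking which module Python would really import, shown with `os` as a stand-in (on your machine, check `module_origin("caffe")` and make sure it points at the rebuilt checkout):

```python
import importlib.util

def module_origin(name):
    """Return the file path Python would import `name` from, or None
    if no module by that name is importable."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# Stand-in demonstration with a stdlib module; substitute "caffe" locally.
print(module_origin("os"))
```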