Open xhappy opened 6 years ago
Yes, I have tested this code on the Linux platform, but I am not very familiar with Caffe on Linux. In some specific cases it may give errors. I can try to help you after you show me the error messages. Best.
Hi, thanks for your quick reply.
Well, I downloaded Caffe for Linux from https://github.com/BVLC/caffe, put your code into the examples directory, modified the Makefile (added the header directory to the INCLUDE_DIRS variable), also modified your code (headers in the *.cpp files only), and got the errors below:
CXX examples/pseudo-3d/caffe_add_layers/src/pooling3d_layer.cpp
examples/pseudo-3d/caffe_add_layers/src/bn_layer.cpp: In instantiation of ‘void caffe::BNLayer
Could you please help fix it? Thanks in advance.
https://github.com/facebook/C3D/tree/master/C3D-v1.1 You can use this project instead; it works well.
@jetyingjia Thanks for your reply. But I also ran into compilation errors when using the C3D project you linked. The errors are as below:
CXX examples/pseudo-3d/caffe_add_layers/src/bn_layer.cpp
CXX examples/pseudo-3d/caffe_add_layers/src/pooling3d_layer.cpp
examples/pseudo-3d/caffe_add_layers/src/bn_layer.cpp: In instantiation of ‘void caffe::BNLayer
I found that the caffe.proto file on this project contains following structure:
message BNParameter {
  optional FillerParameter slope_filler = 1;
  optional FillerParameter bias_filler = 2;
  optional float momentum = 3 [default = 0.9];
  optional float eps = 4 [default = 1e-5];
  // If true, will use the moving average mean and std for training and test.
  // Will override the lr_param and freeze all the parameters.
  // Make sure to initialize the layer properly with pretrained parameters.
  optional bool frozen = 5 [default = false];
  enum Engine {
    DEFAULT = 0;
    CAFFE = 1;
    CUDNN = 2;
  }
  optional Engine engine = 6 [default = DEFAULT];
}
But why doesn't it contain a bn_param parameter? How do I add this parameter to caffe.proto?
Now only the following errors remain:
CXX examples/pseudo-3d/caffe_add_layers/src/bn_layer.cpp
examples/pseudo-3d/caffe_add_layers/src/bn_layer.cpp: In instantiation of ‘void caffe::BNLayer
Hi, for the bn_param, you need to add this parameter to the LayerParameter in caffe.proto. For example, add optional BNParameter bn_param = 155; inside "message LayerParameter"; the "155" depends on your layer ID (it must not collide with an existing field ID). Best.
Hi ZhoFan, I have added the following code to caffe.proto, but I still get the compilation errors above:
message LayerParameter { optional BNParameter bn_param = 155; }
Hi, did you add this line to the existing LayerParameter, or create a new LayerParameter? You need to find the LayerParameter that is already in your caffe.proto. It contains the existing layer parameter definitions, and you add the parameters for new layers to that same message. Best.
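To make the placement concrete, here is a minimal sketch (the field ID 155 is just the one used in this thread; it is only valid if no other field in your LayerParameter already uses it — check the "next available ID" note at the top of the message):

```protobuf
// Edit the EXISTING "message LayerParameter" in caffe.proto --
// do NOT create a second LayerParameter message.
message LayerParameter {
  // ... all existing fields (name, type, bottom, top, ...) stay unchanged ...

  // New layer-specific parameter; the ID must not collide with any other field.
  optional BNParameter bn_param = 155;
}
```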
I added the whole structure above to the caffe.proto downloaded from your project here. I can't find any caffe.proto other than the one from your project, and in your caffe.proto there is no message named LayerParameter; that's why I added the whole structure (containing only bn_param) to the file.
// NOTE
// Update the next available ID when you add a new LayerParameter field.
//
// LayerParameter next available layer-specific ID: 147 (last added: recurrent_param)
message LayerParameter {
  optional string name = 1; // the layer name
  optional string type = 2; // the layer type
  repeated string bottom = 3; // the name of each bottom blob
  repeated string top = 4; // the name of each top blob

  // The train / test phase for computation.
  optional Phase phase = 10;

  // The amount of weight to assign each top blob in the objective.
  // Each layer assigns a default value, usually of either 0 or 1,
  // to each top blob.
  repeated float loss_weight = 5;

  // Specifies training parameters (multipliers on global learning constants,
  // and the name and other settings used for weight sharing).
  repeated ParamSpec param = 6;

  // The blobs containing the numeric parameters of the layer.
  repeated BlobProto blobs = 7;

  // Specifies whether to backpropagate to each bottom. If unspecified,
  // Caffe will automatically infer whether each input needs backpropagation
  // to compute parameter gradients. If set to true for some inputs,
  // backpropagation to those inputs is forced; if set false for some inputs,
  // backpropagation to those inputs is skipped.
  //
  // The size must be either 0 or equal to the number of bottoms.
  repeated bool propagate_down = 11;

  // Rules controlling whether and when a layer is included in the network,
  // based on the current NetState. You may specify a non-zero number of rules
  // to include OR exclude, but not both. If no include or exclude rules are
  // specified, the layer is always included. If the current NetState meets
  // ANY (i.e., one or more) of the specified rules, the layer is
  // included/excluded.
  repeated NetStateRule include = 8;
  repeated NetStateRule exclude = 9;

  // Parameters for data pre-processing.
  optional TransformationParameter transform_param = 100;

  // Parameters shared by loss layers.
  optional LossParameter loss_param = 101;

  // Layer type-specific parameters.
  //
  // Note: certain layers may have more than one computational engine
  // for their implementation. These layers include an Engine type and
  // engine parameter for selecting the implementation.
  // The default for the engine is set by the ENGINE switch at compile-time.
  optional AccuracyParameter accuracy_param = 102;
  optional ArgMaxParameter argmax_param = 103;
  optional BatchNormParameter batch_norm_param = 139;
  optional BNParameter bn_param = 155;
  optional BiasParameter bias_param = 141;
  optional ConcatParameter concat_param = 104;
  optional ContrastiveLossParameter contrastive_loss_param = 105;
  optional ConvolutionParameter convolution_param = 106;
  optional CropParameter crop_param = 144;
  optional DataParameter data_param = 107;
  optional DropoutParameter dropout_param = 108;
  optional DummyDataParameter dummy_data_param = 109;
  optional EltwiseParameter eltwise_param = 110;
  optional ELUParameter elu_param = 140;
  optional EmbedParameter embed_param = 137;
  optional ExpParameter exp_param = 111;
  optional FlattenParameter flatten_param = 135;
  optional HDF5DataParameter hdf5_data_param = 112;
  optional HDF5OutputParameter hdf5_output_param = 113;
  optional HingeLossParameter hinge_loss_param = 114;
  optional ImageDataParameter image_data_param = 115;
  optional InfogainLossParameter infogain_loss_param = 116;
  optional InnerProductParameter inner_product_param = 117;
  optional InputParameter input_param = 143;
  optional LogParameter log_param = 134;
  optional LRNParameter lrn_param = 118;
  optional MemoryDataParameter memory_data_param = 119;
  optional MVNParameter mvn_param = 120;
  optional ParameterParameter parameter_param = 145;
  optional PoolingParameter pooling_param = 121;
  optional Pooling3DParameter pooling3d_param = 156;
  optional PowerParameter power_param = 122;
  optional PReLUParameter prelu_param = 131;
  optional PythonParameter python_param = 130;
  optional RecurrentParameter recurrent_param = 146;
  optional ReductionParameter reduction_param = 136;
  optional ReLUParameter relu_param = 123;
  optional ReshapeParameter reshape_param = 133;
  optional ScaleParameter scale_param = 142;
  optional SigmoidParameter sigmoid_param = 124;
  optional SoftmaxParameter softmax_param = 125;
  optional SPPParameter spp_param = 132;
  optional SliceParameter slice_param = 126;
  optional TanHParameter tanh_param = 127;
  optional ThresholdParameter threshold_param = 128;
  optional TileParameter tile_param = 138;
  optional WindowDataParameter window_data_param = 129;
}
message Pooling3DParameter {
enum PoolMethod {
MAX = 0;
AVE = 1;
STOCHASTIC = 2;
}
optional PoolMethod pool = 1 [default = MAX]; // The pooling method
// Pad, kernel size, and stride are all given as a single value for equal
// dimensions in height and width or as Y, X pairs.
optional uint32 pad = 2 [default = 0]; // The padding size (equal in Y, X)
optional uint32 pad_h = 5 [default = 0]; // The padding height
optional uint32 pad_w = 6 [default = 0]; // The padding width
optional uint32 pad_l = 7 [default = 0]; // The padding length
optional uint32 kernel_size = 3; // The kernel size (square)
optional uint32 kernel_h = 8; // The kernel height
optional uint32 kernel_w = 9; // The kernel width
optional uint32 kernel_l = 10; // The kernel length
optional uint32 stride = 4 [default = 1]; // The stride (equal in Y, X)
optional uint32 stride_h = 11; // The stride height
optional uint32 stride_w = 12; // The stride width
optional uint32 stride_l = 13 [default = 1]; // The stride length
enum Engine {
DEFAULT = 0;
CAFFE = 1;
CUDNN = 2;
}
optional Engine engine = 14 [default = DEFAULT];
// If global_pooling then it will pool over the size of the bottom by doing
// kernel_h = bottom->height and kernel_w = bottom->width
optional bool global_pooling = 15 [default = false];
}
message BNParameter {
  optional FillerParameter slope_filler = 1;
  optional FillerParameter bias_filler = 2;
  optional float momentum = 3 [default = 0.9];
  optional float eps = 4 [default = 1e-5];
  // If true, will use the moving average mean and std for training and test.
  // Will override the lr_param and freeze all the parameters.
  // Make sure to initialize the layer properly with pretrained parameters.
  optional bool frozen = 5 [default = false];
  enum Engine {
    DEFAULT = 0;
    CAFFE = 1;
    CUDNN = 2;
  }
  optional Engine engine = 6 [default = DEFAULT];
}
Your own Caffe includes a caffe.proto; you just need to modify the first (existing) LayerParameter by adding optional BNParameter bn_param = 155; and optional Pooling3DParameter pooling3d_param = 156; and then add the two new messages, Pooling3DParameter and BNParameter.
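Putting the edits together, the changes to caffe.proto look roughly like this (a sketch: field IDs 155/156 are the ones used in this thread, but you should pick IDs that are unused in your own file; the message bodies are the ones pasted earlier in the thread):

```protobuf
// 1) Extend the EXISTING LayerParameter with the two new fields:
message LayerParameter {
  // ... all existing fields unchanged ...
  optional BNParameter bn_param = 155;
  optional Pooling3DParameter pooling3d_param = 156;
}

// 2) Append the two new message definitions at the end of caffe.proto
//    (full bodies as posted above in this thread):
message BNParameter {
  optional FillerParameter slope_filler = 1;
  optional FillerParameter bias_filler = 2;
  optional float momentum = 3 [default = 0.9];
  optional float eps = 4 [default = 1e-5];
  optional bool frozen = 5 [default = false];
}

message Pooling3DParameter {
  // ... as posted above ...
}
```

After editing, rebuild Caffe; the build invokes protoc to regenerate caffe.pb.h and caffe.pb.cc from the modified caffe.proto.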
@xhappy P3D can be compiled on Linux platform. Check the step instructions here: https://zhuanlan.zhihu.com/p/35359709
Hello ZhaoFan,
Thanks for your great work.
I would like to ask whether your code can be compiled and run with Caffe on Linux without any modifications other than configuration updates. I suspect the answer is "no", since I got many compilation errors under Linux. Could you please confirm? A simple "yes" or "no" is enough. Thanks in advance.