vlfeat / matconvnet

MatConvNet: CNNs for MATLAB

Message type "caffe.NetParameter" has no field named "layer"? #257

Closed yaronmr closed 8 years ago

yaronmr commented 9 years ago

Hi everyone,

I'm trying to convert AlexNet using import-caffe.py and I'm getting the error below.

This is the default model from the NVIDIA DIGITS/Caffe package. Does anyone know how to fix this?

Thanks

alexnet1/deploy.prototxt alexnet1/snapshot_iter_17580.caffemodel out2.foobar
Caffe varaint set to caffe
caffe_param.name alexnet1/deploy.prototxt
Loading Caffe CNN parameters from alexnet1/deploy.prototxt
Traceback (most recent call last):
  File "import-caffe.py", line 256, in <module>
    google.protobuf.text_format.Merge(args.caffe_param.read(), net_param)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/google/protobuf/text_format.py", line 265, in Merge
    return MergeLines(text.split('\n'), message)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/google/protobuf/text_format.py", line 298, in MergeLines
    _ParseOrMerge(lines, message, True)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/google/protobuf/text_format.py", line 229, in _ParseOrMerge
    _MergeField(tokenizer, message, allow_multiple_scalars)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/google/protobuf/text_format.py", line 356, in _MergeField
    message_descriptor.full_name, name))
google.protobuf.text_format.ParseError: 8:1 : Message type "caffe.NetParameter" has no field named "layer".

deploy.prototxt

name: "CaffeNet"

input: "data"
input_dim: 1
input_dim: 1
input_dim: 49
input_dim: 49
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  convolution_param {
    num_output: 96
    kernel_size: 11
    stride: 4
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0.0
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "norm1"
  type: "LRN"
  bottom: "conv1"
  top: "norm1"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "norm1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  convolution_param {
    num_output: 256
    pad: 2
    kernel_size: 5
    group: 2
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0.1
    }
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "norm2"
  type: "LRN"
  bottom: "conv2"
  top: "norm2"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "norm2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "conv3"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0.0
    }
  }
}
layer {
  name: "relu3"
  type: "ReLU"
  bottom: "conv3"
  top: "conv3"
}
layer {
  name: "conv4"
  type: "Convolution"
  bottom: "conv3"
  top: "conv4"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
    group: 2
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0.1
    }
  }
}
layer {
  name: "relu4"
  type: "ReLU"
  bottom: "conv4"
  top: "conv4"
}
layer {
  name: "conv5"
  type: "Convolution"
  bottom: "conv4"
  top: "conv5"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    group: 2
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0.1
    }
  }
}
layer {
  name: "relu5"
  type: "ReLU"
  bottom: "conv5"
  top: "conv5"
}
layer {
  name: "pool5"
  type: "Pooling"
  bottom: "conv5"
  top: "pool5"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "fc6"
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc6"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  inner_product_param {
    num_output: 4096
    weight_filler {
      type: "gaussian"
      std: 0.005
    }
    bias_filler {
      type: "constant"
      value: 0.1
    }
  }
}
layer {
  name: "relu6"
  type: "ReLU"
  bottom: "fc6"
  top: "fc6"
}
layer {
  name: "drop6"
  type: "Dropout"
  bottom: "fc6"
  top: "fc6"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc7"
  type: "InnerProduct"
  bottom: "fc6"
  top: "fc7"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  inner_product_param {
    num_output: 4096
    weight_filler {
      type: "gaussian"
      std: 0.005
    }
    bias_filler {
      type: "constant"
      value: 0.1
    }
  }
}
layer {
  name: "relu7"
  type: "ReLU"
  bottom: "fc7"
  top: "fc7"
}
layer {
  name: "drop7"
  type: "Dropout"
  bottom: "fc7"
  top: "fc7"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc8"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  inner_product_param {
    num_output: 2
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0.0
    }
  }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "fc8"
  top: "prob"
}
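
For reference: the prototxt above uses the newer top-level "layer" blocks, while the ParseError suggests that the compiled caffe.proto used by import-caffe.py only defines the older "layers" field. A minimal sketch of how to check this, assuming the compiled protobuf module is importable as caffe_pb2 (an assumption; the module name the importer actually uses may differ):

# Hedged sketch: list the top-level fields that this NetParameter definition knows.
import caffe_pb2  # assumption: the compiled caffe proto module loaded by the importer

field_names = [f.name for f in caffe_pb2.NetParameter.DESCRIPTOR.fields]
print('layer' in field_names)   # new-style field, used by the prototxt above
print('layers' in field_names)  # old-style field defined by older caffe.proto versions
# If only 'layers' is present, parsing this prototxt fails with exactly the
# "no field named 'layer'" ParseError shown in the traceback.
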
kilickaya commented 9 years ago

I have the same issue while trying to convert my fine-tuned AlexNet model to MatConvNet format. I tried different values of the --caffe-variant parameter, but none of them fixed the problem.

vedaldi commented 9 years ago

Hi, could you try the new script import-caffe-dag.py and let us know whether that fixes the problem? It will give you a DAG instead of a SimpleNN, but it will help us debug the scripts.

Unfortunately Caffe is a bit of a moving target at the moment and we need to keep patching the import scripts.

naamarbel commented 8 years ago

Hi, I encountered the same issue. I can only find the import-caffe.py file; there is no import-caffe-dag.py.

vedaldi commented 8 years ago

Hi, we recently consolidated the two scripts and merged them into import-caffe.py.

Could you share your tuned model somewhere? Unfortunately Caffe is a bit of a moving target and it is sometimes difficult to guarantee compatibility with all the versions.

tharuniitk commented 7 years ago

Hi Andrea,

I have been facing the same error for a long time with no solution. I am sharing my Caffe model with you: https://drive.google.com/drive/folders/0B2a8WzvR-lOxWEU1UGVDSC1EM0E?usp=sharing

Could you please convert it, or provide a working script, so that I can convert my models myself in the future if needed?

Thanks in advance.

fzhuang3 commented 5 years ago

Try replacing all top-level 'layer' fields with 'layers' in deploy.prototxt.
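
A minimal sketch of that textual edit (assuming, as in the prototxt above, that "layer" appears only as the top-level field name; the filenames are placeholders). Note that the older "layers" syntax typically also uses upper-case type enums such as CONVOLUTION rather than string types like "Convolution", so this substitution alone may not be sufficient for every importer version:

# Hedged sketch of the suggested edit: rewrite top-level "layer {" blocks as
# "layers {" so that an older caffe.proto definition can parse the file.
# The input and output filenames are examples only.
import re

with open('deploy.prototxt') as f:
    text = f.read()

# Only rewrite the field name at the start of a line, leaving nested fields
# such as lrn_param or inner_product_param untouched.
text = re.sub(r'^layer\s*\{', 'layers {', text, flags=re.MULTILINE)

with open('deploy_layers.prototxt', 'w') as f:
    f.write(text)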