torrvision / crfasrnn

This repository contains the source code for the semantic image segmentation method described in the ICCV 2015 paper: Conditional Random Fields as Recurrent Neural Networks. http://crfasrnn.torr.vision/

multi_stage_meanfield.cpp:49] This implementation has not been tested batch size > 1. #136

Open fangxu622 opened 7 years ago

fangxu622 commented 7 years ago

Dear @bittnt,

While training my CRFasRNN model, I ran the command: python ../fcn.berkeleyvision.org/voc-fcn8s-atonce/solve.py

and this problem occurred:

I0705 20:45:21.787677 25165 net.cpp:111] Creating Layer inference1
I0705 20:45:21.787684 25165 net.cpp:478] inference1 <- unary
I0705 20:45:21.787703 25165 net.cpp:478] inference1 <- Q0
I0705 20:45:21.787719 25165 net.cpp:478] inference1 <- data_data_0_split_2
I0705 20:45:21.787731 25165 net.cpp:434] inference1 -> pred
I0705 20:45:21.787744 25165 multi_stage_meanfield.cpp:49] This implementation has not been tested batch size > 1.

This is my train.prototxt. It is TVG_CRFRNN_new_traintest.prototxt with the data layer replaced by fcn8s-atonce's data layer:

layer {
  name: "data"
  type: "Python"
  top: "data"
  top: "label"
  python_param {
    module: "voc_layers"
    layer: "SBDDSegDataLayer"
    param_str: "{\'sbdd_dir\': \'../fcn.berkeleyvision.org/data/fcn-zy3-512-1024/traindata\', \'seed\': 1337, \'split\': \'train\', \'mean\': (109.08637, 105.57667, 89.27919)}"
  }
}

layer {
  name: "conv1_1"
  type: "Convolution"
  bottom: "data"
  top: "conv1_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    pad: 100
    kernel_size: 3
    engine: CAFFE
  }
}
layer {
  name: "relu1_1"
  type: "ReLU"
  bottom: "conv1_1"
  top: "conv1_1"
}
layer {
  name: "conv1_2"
  type: "Convolution"
  bottom: "conv1_1"
  top: "conv1_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    engine: CAFFE
  }
}
layer {
  name: "relu1_2"
  type: "ReLU"
  bottom: "conv1_2"
  top: "conv1_2"
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1_2"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2_1"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
    engine: CAFFE
  }
}
layer {
  name: "relu2_1"
  type: "ReLU"
  bottom: "conv2_1"
  top: "conv2_1"
}
layer {
  name: "conv2_2"
  type: "Convolution"
  bottom: "conv2_1"
  top: "conv2_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
    engine: CAFFE
  }
}
layer {
  name: "relu2_2"
  type: "ReLU"
  bottom: "conv2_2"
  top: "conv2_2"
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2_2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv3_1"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    engine: CAFFE
  }
}
layer {
  name: "relu3_1"
  type: "ReLU"
  bottom: "conv3_1"
  top: "conv3_1"
}
layer {
  name: "conv3_2"
  type: "Convolution"
  bottom: "conv3_1"
  top: "conv3_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    engine: CAFFE
  }
}
layer {
  name: "relu3_2"
  type: "ReLU"
  bottom: "conv3_2"
  top: "conv3_2"
}
layer {
  name: "conv3_3"
  type: "Convolution"
  bottom: "conv3_2"
  top: "conv3_3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    engine: CAFFE
  }
}
layer {
  name: "relu3_3"
  type: "ReLU"
  bottom: "conv3_3"
  top: "conv3_3"
}
layer {
  name: "pool3"
  type: "Pooling"
  bottom: "conv3_3"
  top: "pool3"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv4_1"
  type: "Convolution"
  bottom: "pool3"
  top: "conv4_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    engine: CAFFE
  }
}
layer {
  name: "relu4_1"
  type: "ReLU"
  bottom: "conv4_1"
  top: "conv4_1"
}
layer {
  name: "conv4_2"
  type: "Convolution"
  bottom: "conv4_1"
  top: "conv4_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    engine: CAFFE
  }
}
layer {
  name: "relu4_2"
  type: "ReLU"
  bottom: "conv4_2"
  top: "conv4_2"
}
layer {
  name: "conv4_3"
  type: "Convolution"
  bottom: "conv4_2"
  top: "conv4_3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    engine: CAFFE
  }
}
layer {
  name: "relu4_3"
  type: "ReLU"
  bottom: "conv4_3"
  top: "conv4_3"
}
layer {
  name: "pool4"
  type: "Pooling"
  bottom: "conv4_3"
  top: "pool4"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv5_1"
  type: "Convolution"
  bottom: "pool4"
  top: "conv5_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    engine: CAFFE
  }
}
layer {
  name: "relu5_1"
  type: "ReLU"
  bottom: "conv5_1"
  top: "conv5_1"
}
layer {
  name: "conv5_2"
  type: "Convolution"
  bottom: "conv5_1"
  top: "conv5_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    engine: CAFFE
  }
}
layer {
  name: "relu5_2"
  type: "ReLU"
  bottom: "conv5_2"
  top: "conv5_2"
}
layer {
  name: "conv5_3"
  type: "Convolution"
  bottom: "conv5_2"
  top: "conv5_3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    engine: CAFFE
  }
}
layer {
  name: "relu5_3"
  type: "ReLU"
  bottom: "conv5_3"
  top: "conv5_3"
}
layer {
  name: "pool5"
  type: "Pooling"
  bottom: "conv5_3"
  top: "pool5"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "fc6"
  type: "Convolution"
  bottom: "pool5"
  top: "fc6"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 4096
    kernel_size: 7
    engine: CAFFE
  }
}
layer {
  name: "relu6"
  type: "ReLU"
  bottom: "fc6"
  top: "fc6"
}
layer {
  name: "drop6"
  type: "Dropout"
  bottom: "fc6"
  top: "fc6"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc7"
  type: "Convolution"
  bottom: "fc6"
  top: "fc7"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 4096
    kernel_size: 1
    engine: CAFFE
  }
}
layer {
  name: "relu7"
  type: "ReLU"
  bottom: "fc7"
  top: "fc7"
}
layer {
  name: "drop7"
  type: "Dropout"
  bottom: "fc7"
  top: "fc7"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "score-fr"
  type: "Convolution"
  bottom: "fc7"
  top: "score"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 11
    kernel_size: 1
    engine: CAFFE
    weight_filler {
      type: "msra"
      std: 0.010
    }
  }
}
layer {
  name: "score2"
  type: "Deconvolution"
  bottom: "score"
  top: "score2"
  param {
    lr_mult: 1
  }
  convolution_param {
    num_output: 11
    kernel_size: 4
    stride: 2
    weight_filler: { type: "bilinear" }
  }
}
layer {
  name: "score-pool4"
  type: "Convolution"
  bottom: "pool4"
  top: "score-pool4"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 11
    kernel_size: 1
    engine: CAFFE
    weight_filler {
      type: "msra"
      std: 0.010
    }
  }
}
layer { type: 'Crop' name: 'crop' bottom: 'score-pool4' bottom: 'score2'
  top: 'score-pool4c' }
layer {
  name: "fuse"
  type: "Eltwise"
  bottom: "score2"
  bottom: "score-pool4c"
  top: "score-fused"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "score4"
  type: "Deconvolution"
  bottom: "score-fused"
  top: "score4"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 11
    bias_term: false
    kernel_size: 4
    stride: 2
    weight_filler: { type: "bilinear" }
  }
}
layer {
  name: "score-pool3"
  type: "Convolution"
  bottom: "pool3"
  top: "score-pool3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 11
    kernel_size: 1
    engine: CAFFE
    weight_filler {
      type: "msra"
      std: 0.010
    }
  }
}
layer { type: 'Crop' name: 'crop' bottom: 'score-pool3' bottom: 'score4'
  top: 'score-pool3c' }
layer {
  name: "fuse"
  type: "Eltwise"
  bottom: "score4"
  bottom: "score-pool3c"
  top: "score-final"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "upsample"
  type: "Deconvolution"
  bottom: "score-final"
  top: "bigscore"
  param {
    lr_mult: 0
  }
  convolution_param {
    num_output: 11
    bias_term: false
    kernel_size: 16
    stride: 8
    weight_filler: { type: "bilinear" }
  }
}
layer { type: 'Crop' name: 'crop' bottom: 'bigscore' bottom: 'data' top: 'coarse' }

layer { type: 'Split' name: 'splitting'
  bottom: 'coarse' top: 'unary' top: 'Q0'
}

layer {
  name: "inference1"#if you set name "inference1", code will load parameters from caffemodel.
  type: "MultiStageMeanfield"
  bottom: "unary"
  bottom: "Q0"
  bottom: "data"
  top: "pred"
  param {
    lr_mult: 10000  # learning rate for W_G
  }
  param {
    lr_mult: 10000  # learning rate for W_B
  }
  param {
    lr_mult: 1000   # learning rate for the compatibility transform matrix
  }
  multi_stage_meanfield_param {
    num_iterations: 10
    compatibility_mode: POTTS  # initialize the compatibility transform matrix with a matrix whose diagonal is -1
    threshold: 2
    theta_alpha: 160
    theta_beta: 3
    theta_gamma: 3
    spatial_filter_weight: 3
    bilateral_filter_weight: 5
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "pred"
  bottom: "label"
  top: "loss"
  loss_param {
    ignore_label: 255
    normalize: false
  }
}

My solver.prototxt:

train_net: "../fcn.berkeleyvision.org/voc-fcn8s-atonce/train.prototxt"
test_net: "../fcn.berkeleyvision.org/voc-fcn8s-atonce/val.prototxt"
test_iter: 847
# make test net, but don't invoke it from the solver itself
test_interval: 999999999
display: 20
average_loss: 20
lr_policy: "fixed"
# lr for unnormalized softmax
base_lr: 1e-10
# high momentum
momentum: 0.99
# no gradient accumulation
iter_size: 1
max_iter: 20000
weight_decay: 0.0005
snapshot: 5000
snapshot_prefix: "../fcn.berkeleyvision.org/voc-fcn8s-atonce/snapshot-crfasrnn/train"
test_initialization: false

And this is my solve.py:

import sys

sys.path.append('./python')
sys.path.append('./python/caffe')

sys.path.append('/home/sensetime/DeepLearning/test/fcn.berkeleyvision.org')

import caffe
import surgery, score

import numpy as np
import os

try:
    import setproctitle
    setproctitle.setproctitle(os.path.basename(os.getcwd()))
except:
    pass

weights = '/home/sensetime/DeepLearning/CRFasRNN/crfasrnn/fcn.berkeleyvision.org/voc-fcn8s-atonce/snapshot/train_iter_15000.caffemodel'

# init
caffe.set_device(2)
caffe.set_mode_gpu()

solver = caffe.SGDSolver('../fcn.berkeleyvision.org/voc-fcn8s-atonce/solver.prototxt')
solver.net.copy_from(weights)

# surgeries
interp_layers = [k for k in solver.net.params.keys() if 'up' in k]
surgery.interp(solver.net, interp_layers)

# scoring
val = np.loadtxt('../fcn.berkeleyvision.org/data/fcn-zy3-512-1024/valdata/val.txt', dtype=str)

for _ in range(20):
    solver.step(1000)
    score.seg_tests(solver, False, val, layer='score')
bittnt commented 7 years ago

To get rid of this message, you can either 1) remove that message from the multi_stage_meanfield code, or 2) use batch_size = 1.

fangxu622 commented 7 years ago

The critical thing is that my batch_size is already 1. I use the voc_layers data layer to load the data...
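
A quick way to confirm the batch size the net actually sees is to build the train net on its own and inspect the data blob. The sketch below assumes the prototxt path from the solver above. Note also that the message at multi_stage_meanfield.cpp:49 is logged at INFO severity (the leading I in the glog line), i.e. it is a notice emitted during layer setup rather than a fatal check.

import caffe

# Minimal sketch: check the batch size the train net actually uses.
# CPU mode is enough for a shape inspection.
caffe.set_mode_cpu()
net = caffe.Net('../fcn.berkeleyvision.org/voc-fcn8s-atonce/train.prototxt',
                caffe.TRAIN)

# The Python data layer loads its first image during net setup, so the blob
# shapes are already populated; the leading dimension is the batch size.
print(net.blobs['data'].data.shape)   # expected: (1, 3, H, W)
print(net.blobs['label'].data.shape)  # expected: (1, 1, H, W)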

fangxu622 commented 7 years ago

Now I am using the "Latest Caffe with CPU/GPU CRF-RNN" version. That problem has gone away, but a new one has appeared:

I0706 15:28:25.721647 31458 net.cpp:744] Ignoring source layer score_pool3
I0706 15:28:25.721652 31458 net.cpp:744] Ignoring source layer score_pool3c
I0706 15:28:25.721657 31458 net.cpp:744] Ignoring source layer upscore8
I0706 15:28:25.721662 31458 net.cpp:744] Ignoring source layer score
I0706 15:28:30.210567 31458 solver.cpp:218] Iteration 0 (-1.55027e-33 iter/s, 4.46282s/20 iters), loss = 2.51438e+06
I0706 15:28:30.210670 31458 solver.cpp:237]     Train net output #0: loss = 2.51438e+06 (* 1 = 2.51438e+06 loss)
I0706 15:28:30.210688 31458 sgd_solver.cpp:105] Iteration 0, lr = 1e-10
F0706 15:28:34.982275 31458 eltwise_layer.cpp:34] Check failed: bottom[0]->shape() == bottom[i]->shape() bottom[0]: 1 11 512 512 (2883584), bottom[1]: 1 11 1024 1024 (11534336)
*** Check failure stack trace: ***
Aborted (core dumped)
fangxu622 commented 7 years ago

The FCN model is voc-fcn8s-atonce. Whether I add the CRF-as-RNN layer to fcn8s-atonce's train.prototxt or use TVG_CRFRNN_new_traintest.prototxt, I get the same problem...

fangxu622 commented 7 years ago

I wonder if I could talk with you in Chinese?

fangxu622 commented 7 years ago

@bittnt

fangxu622 commented 7 years ago

I think the bottom shapes in the net should be the same, but I don't know what is wrong.
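
One way to locate the mismatch is to build the train net in pycaffe and dump every blob shape; the two bottoms of whichever Eltwise fuse layer fails (score2/score-pool4c or score4/score-pool3c in the prototxt above) must match exactly. A minimal sketch, assuming the same prototxt path as in the solver; if the very first loaded image already triggers the mismatch, the shape check in eltwise_layer.cpp will abort during net construction, which by itself confirms where the problem is.

import caffe

caffe.set_mode_cpu()
net = caffe.Net('../fcn.berkeleyvision.org/voc-fcn8s-atonce/train.prototxt',
                caffe.TRAIN)

# Print every blob shape for the first image the data layer loaded.
# With a 512x1024 input, look for the point where one branch comes out at
# 512x512 while the other stays at 1024x1024, as in the check failure above.
for name, blob in net.blobs.items():
    print(name, blob.data.shape)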

fangxu622 commented 7 years ago

Thanks, I have resolved it by resizing the input images to (512, 512).
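
For anyone hitting the same shape mismatch, below is a sketch of how the resize can be done in a PIL-based Python data layer in the spirit of voc_layers. The functions are hypothetical illustrations, not the actual SBDDSegDataLayer code: they only show forcing every image and label to a fixed 512x512 size, with nearest-neighbour interpolation for the labels so class indices are preserved.

import numpy as np
from PIL import Image

TARGET = (512, 512)  # (width, height) every image and label is resized to

def load_image(path, mean_bgr):
    # Load, resize, convert RGB -> BGR, subtract the per-channel mean
    # (e.g. the values in the param_str above), and move to CHW order,
    # roughly mirroring the preprocessing of the FCN Python data layers.
    im = Image.open(path).resize(TARGET, Image.BILINEAR)
    in_ = np.array(im, dtype=np.float32)
    in_ = in_[:, :, ::-1]
    in_ -= np.array(mean_bgr, dtype=np.float32)
    return in_.transpose((2, 0, 1))

def load_label(path):
    # Nearest-neighbour keeps label values intact (no interpolated class ids).
    label = Image.open(path).resize(TARGET, Image.NEAREST)
    return np.array(label, dtype=np.uint8)[np.newaxis, ...]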

matthiasmace commented 7 years ago

@bittnt What do you mean? How do I remove the error message in the code? I tried, but it returned compilation errors.