Closed: manyaafonso closed this issue 7 years ago
Hi, you have at least two options:
1. Replace DenseImageData with an Input layer and use pycaffe: this is done for you in the compute_bn_statistics script...
name: "segnet"
input: "data"
input_dim: ## batch size
input_dim: ## channels
input_dim: ## height
input_dim: ## width
....
output = net.forward(data=image)
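A minimal pycaffe sketch of this first option, assuming the Input-style prototxt above is saved as test_inference.prototxt, the trained weights as test_weights.caffemodel, and that the image has already been resized to the network's input height and width (these file names and the BGR/scaling details are placeholders, not part of the original answer):

import numpy as np
import caffe

# Hypothetical file names: substitute your own inference prototxt and weights.
net = caffe.Net('test_inference.prototxt', 'test_weights.caffemodel', caffe.TEST)

# caffe.io.load_image returns an H x W x 3 RGB float image in [0, 1];
# SegNet-style models typically expect BGR pixel values in [0, 255].
image = caffe.io.load_image('image1.png')
image = image[:, :, ::-1] * 255.0                    # RGB -> BGR, scale to [0, 255]
image = image.transpose((2, 0, 1))[np.newaxis, ...]  # H x W x 3 -> 1 x 3 x H x W

# Forward pass; the argmax over the class channel gives the per-pixel labels.
output = net.forward(data=image)
predicted = output[net.outputs[0]].argmax(axis=1).squeeze()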
2. Reference the Segnet-Tutorial, where the suggestion is to use a "dummy.png" (e.g. an all-zero PNG file) as a placeholder label; your test.txt would then read:
image1.png dummy.png
image2.png dummy.png
...
Then you can add a Silence layer to quiet the output of "label".
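A sketch of that Silence layer in prototxt, assuming the label blob is still named "label" (the layer name here is arbitrary); it simply consumes the blob so Caffe does not treat it as an unused output:

layer {
  name: "silence_label"    # arbitrary name
  type: "Silence"
  bottom: "label"          # consumes the dummy label blob; produces no top
}

The dummy.png itself can be any PNG with the same dimensions as your input images, e.g. an all-zero array written out once with PIL or OpenCV.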
Thanks, @nathanin! The second option with the dummy label file worked.
Hi, I have found SegNet to be an excellent method achieving 90% prediction accuracy on my data. Many thanks to the authors for sharing this implementation!
However, I have not figured out how to test on a completely new image for which no labels are known. I am guessing something will have to change in the test prototxt:

name: "VGG_ILSVRC_16_layer"
layer {
  name: "data"
  type: "DenseImageData"
  top: "data"
  top: "label"
  dense_image_data_param {
    source: "/home/mafonso/my_dataset/test.txt"   # Change this to the absolute path to your data file
    batch_size: 1
  }
}

For training I followed the same format as the VOC data set for the list of files. But with a completely new image, I do not have any labels, and if I comment out the line top: "label" I still get an error.
Can anyone please suggest a solution?