Closed machengcheng2016 closed 6 years ago
I'm sorry your experiments failed. First, I have re-tested the code and found that it works properly. Note that the output is a "label map" ({0,1,2,3,4,5} for both UAV and VH images); I guess you may not have transformed the label map into a color map? This is an easy operation. Second, if you use our trained model weights with UAV_infer.py / VH_infer.py, you should first split the original images into 256*256 patches (with or without overlap). As the UAV images are not publicly available, I recommend testing the model with the VH data. The difference between UAV_infer and VH_infer is that UAV_infer takes images (RGB, 3 channels) as input, while VH_infer takes images + nDSM (CIR + nDSM, 4 channels) as input.
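For reference, the two preprocessing steps mentioned above (label map to color map, and splitting large images into 256*256 patches) can be sketched like this. The palette below is illustrative only; the actual class colors of the UAV/VH datasets may differ:

```python
import numpy as np

# Illustrative 6-class palette (class index -> RGB). These colors are an
# assumption for demonstration, not the official dataset color coding.
PALETTE = np.array([
    [255, 255, 255],  # class 0
    [0, 0, 255],      # class 1
    [0, 255, 255],    # class 2
    [0, 255, 0],      # class 3
    [255, 255, 0],    # class 4
    [255, 0, 0],      # class 5
], dtype=np.uint8)

def label_to_color(label_map):
    """Map an HxW array of class indices {0..5} to an HxWx3 RGB image."""
    return PALETTE[label_map]

def split_into_patches(img, size=256, stride=256):
    """Cut an image into size x size patches (non-overlapping by default;
    use a stride smaller than size for overlap)."""
    h, w = img.shape[:2]
    patches = []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append(img[y:y + size, x:x + size])
    return patches
```

Setting `stride=128` (for example) would give the overlapping variant the comment in the thread refers to.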
It's so kind of you to re-test the code! After transforming the label map into a color map, I did get a colorful label map. The problem is solved! As for the training process on UAV images, I still have two problems: 1) At the beginning of training, I convert the image, label, and edge data (all prepared as ".jpg" images) to LMDB format to feed the network. According to the paper, labels should be {0,1,2,3,4,5} and edges should be {0,1}. So is it proper to prepare labels and edges as ".jpg" grayscale images? Since the range of an image is [0,255], the labels may turn into {0,51,102,153,204,255} and the edges into {0,255}...
2) The first problem really confuses me, so I finally decided to convert the image, label, and edge data to HDF5 format instead. HDF5 can store float data without the restriction to [0,255], so this time the labels and edges are prepared exactly as {0,1,2,3,4,5} and {0,1}. But for images, I still scale them to [0,255] integers. Should I scale images to [0,1] floats instead? What do you think of my process? Thank you for your help!
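A minimal sketch of the HDF5 approach described above, using h5py. The dataset names ("data", "label", "edge") are assumptions here; in a real Caffe setup they must match the "top" blob names expected by the network's HDF5Data layer:

```python
import numpy as np
import h5py

# Toy batch in Caffe's NCHW layout. Values stored as float32, which keeps
# labels exactly at {0..5} and edges at {0,1} with no [0,255] restriction.
images = np.random.randint(0, 256, (10, 3, 256, 256)).astype(np.float32)
labels = np.random.randint(0, 6, (10, 1, 256, 256)).astype(np.float32)   # {0..5}
edges  = np.random.randint(0, 2, (10, 1, 256, 256)).astype(np.float32)   # {0,1}

with h5py.File("train.h5", "w") as f:
    f.create_dataset("data", data=images)
    f.create_dataset("label", data=labels)
    f.create_dataset("edge", data=edges)
```

Whether the image channel should be scaled to [0,1] depends on how the network was trained (e.g. whether a mean is subtracted in the prototxt or inference script), so that part is best confirmed against the authors' training configuration.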
Using .png rather than .jpg may fix your problem.
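The reason this helps: JPEG is lossy, so label values like {0..5} can come back altered after a save/load round trip, while PNG is lossless and preserves them exactly. A quick check with Pillow (file name is arbitrary):

```python
import numpy as np
from PIL import Image

# A label map with class indices {0..5} stored as an 8-bit grayscale image.
label = np.random.randint(0, 6, (64, 64)).astype(np.uint8)

# PNG round-trips the exact pixel values; a .jpg save of the same array
# would generally come back with values changed by lossy compression.
Image.fromarray(label).save("label.png")
restored = np.array(Image.open("label.png"))
```

After this, `restored` is identical to `label`, so the {0,1,2,3,4,5} class indices survive intact.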
Thanks!
Hello! I came here from your published paper. Impressive work on this project, I have to say! But I have two questions. First, I tested the provided pre-trained model "ern_UAV.caffemodel" on the UAV dataset, but all the "score" results come out as black images, which means all the segmentation labels are zero. I wonder why this is happening; maybe the preprocessing code in "UAV_infer.py" is not right? Second, specifically, I'm using your "UAV_infer.py" to run the test; is the "transform the input image" part necessary? Could you please provide a small piece of test data and an exact Python script so I can finish the test? My Gmail address is: machengcheng2016@gmail.com Appreciate it!