Closed JayeShen1996 closed 2 years ago
@JayeShen1996 Hi! Are you visualizing trained models, or testing on a standalone test set to obtain mIoU? If you mean testing like on other datasets such as VOC, the training itself should already include validation.
Hello, maybe my expression was not clear. My dataset is divided into a training set, a validation set, and a test set. I feed the training and validation sets into the model for training and validation. Now I want to see the results of my model on the test set. What should I do? In addition, my data was renamed according to VOC rules.
I see. May I ask what your current val and test accuracies are?
You can try `--state=3 --continue-from=<weight file>` and modify the dataset class to load your test set. Other things should be the same as validation and do not need changing.
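To illustrate what "modify the dataset class to load your test set" could look like, here is a minimal sketch of a VOC-style file-list loader that switches from `val.txt` to `test.txt`. This is an illustrative assumption about the standard PASCAL VOC layout, not the repo's actual dataset class; the function name `make_file_lists` is hypothetical.

```python
import os

# Illustrative sketch only, not the repo's actual dataset code.
# Assumes the standard PASCAL VOC layout, where
# ImageSets/Segmentation/<split>.txt lists one image id per line.
def make_file_lists(root, split="test"):
    list_file = os.path.join(root, "ImageSets", "Segmentation", split + ".txt")
    with open(list_file) as f:
        ids = [line.strip() for line in f if line.strip()]
    # Build parallel lists of image and mask paths from the ids.
    images = [os.path.join(root, "JPEGImages", i + ".jpg") for i in ids]
    masks = [os.path.join(root, "SegmentationClass", i + ".png") for i in ids]
    return images, masks
```

Swapping the `split` argument (or the hard-coded list file in the repo's own loader) to point at the test split is the only change needed; the evaluation loop itself can stay the same as validation.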
My validation set mIoU is over 58, and I have not run the test set yet.
I use the semantic segmentation code. Should the command you mentioned be added directly to the existing .sh file, or should it be a separate .sh file for testing?
Thank you for your reply. Another problem that bothers me is how to visualize the results on the validation set as PNG or other image formats. If this is possible, I will just divide the data into training and validation sets.
@JayeShen1996 For modifying the dataset class, you can modify this function to get the image & mask lists.
For visualizations, I have some new visualization techniques implemented in https://github.com/voldemortX/pytorch-auto-drive/blob/master/tools/vis_tools.py and https://github.com/voldemortX/pytorch-auto-drive/blob/master/visualize_segmentation.py that are recommended for reference. However, that repo currently does not support visualization of the PASCAL VOC dataset structure. You can try combining the vis funcs there with the dataset classes here to visualize the validation set results.
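The core of such a visualization is usually just colorizing the predicted label map and blending it over the input image. Below is a hedged, self-contained sketch of that idea; the palette and blend ratio are arbitrary choices, and `blend_prediction` is a hypothetical name, not a function from either repo.

```python
import numpy as np

# Hedged sketch: colorize a predicted label map with a per-class palette and
# alpha-blend it onto the input image. Not the repo's actual vis code.
def blend_prediction(image, label_map, palette, alpha=0.5):
    """image: HxWx3 uint8, label_map: HxW int, palette: num_classes x 3 uint8."""
    color_mask = palette[label_map]                     # HxWx3 per-pixel colors
    blended = (1 - alpha) * image + alpha * color_mask  # simple alpha blend
    return blended.astype(np.uint8)
```

The resulting array can be saved as a PNG with any image library (e.g. `PIL.Image.fromarray(...).save("out.png")`).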
FYI, it is not an existing .sh. You can just run `python main.py --state=3 --continue-from=<pt file> <other args you may need>` in the command line.
Thank you. I'll keep trying.
Hello. Following your guidance, I tested on the validation set, and the results are as follows.
Results on the validation set during training: average row correct: ['99.60', '54.81'] IoU: ['95.86', '52.38'] mean IoU: 74.12 Epoch time: 112.59s
The command used for validation: python main.py --state=3 --train-set=2 --sets-id=1 --mixed-precision --continue-from=dmt-voc-20-5--i.pt --coco
Results of validation: average row correct: ['100.00', '0.00'] IoU: ['96.63', '0.00'] mean IoU: 48.31
I also used the pseudo-label code to generate .npy files with the weights of the DeepLabV2 network, and the classification results are all "0". I'm quite confused. Do you know what's wrong? Thank you for your reply.
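For reference, the printed numbers above have the shape of standard confusion-matrix metrics. The sketch below shows how such metrics are typically computed; it mirrors common semantic segmentation evaluation, not necessarily this repo's exact code, and `metrics_from_confusion` is a hypothetical name.

```python
import numpy as np

# Hedged sketch: per-class accuracy, IoU, and mIoU from a confusion matrix
# (rows = ground truth, columns = prediction).
def metrics_from_confusion(conf):
    conf = conf.astype(np.float64)
    row_correct = np.diag(conf) / conf.sum(axis=1)  # per-class recall
    iou = np.diag(conf) / (conf.sum(axis=1) + conf.sum(axis=0) - np.diag(conf))
    return row_correct, iou, iou.mean()
```

Note that a model predicting only background yields 100% row correct and 0 IoU for the foreground class, matching the ['100.00', '0.00'] / ['96.63', '0.00'] pattern above.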
Are you using an ImageNet pre-trained model? In that case you may need to remove `--coco`.
I think I see the source of the problem. It should be an issue with the pre-trained weights. When I do not load the pre-trained weights, there are some results; when I do load them, the results are all background. In addition, I did use "convert_coco_resnet101.py" to set up the pre-trained weights.
@JayeShen1996 In the DMT default setting we use two models, starting from ImageNet (-i) and COCO (-c) weights respectively. They have different input scales and RGB channel orders to match the respective pre-trained weights. So if you use `--coco` for an ImageNet-initialized model, it will fail, since it was trained without `--coco`.
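To make the mismatch concrete, here is a hedged sketch of the two preprocessing conventions commonly paired with these weight sources: torchvision-style ImageNet weights expect RGB in [0, 1] normalized by mean/std, while Caffe-style COCO weights expect BGR in [0, 255] with mean subtraction. The exact constants below are the widely used defaults and are assumptions, not values read from this repo.

```python
import numpy as np

# Assumed conventions (common defaults, not confirmed from this repo):
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])   # torchvision RGB mean
IMAGENET_STD = np.array([0.229, 0.224, 0.225])    # torchvision RGB std
CAFFE_BGR_MEAN = np.array([104.008, 116.669, 122.675])  # Caffe BGR mean

def preprocess_imagenet(img_rgb_uint8):
    # RGB, scaled to [0, 1], then mean/std normalized.
    x = img_rgb_uint8.astype(np.float32) / 255.0
    return (x - IMAGENET_MEAN) / IMAGENET_STD

def preprocess_coco_caffe(img_rgb_uint8):
    # Channels flipped to BGR, kept in [0, 255], mean subtracted.
    x = img_rgb_uint8[..., ::-1].astype(np.float32)
    return x - CAFFE_BGR_MEAN
```

Feeding a model inputs from the wrong branch shifts every value far outside the distribution it was trained on, which is why predictions can collapse to all background.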
Now I understand. Thank you for your reply. Good luck.
It seems this issue is resolved, I'll close for now. Feel free to reopen.
Hello, your paper and code are very good; thank you for your efforts. Now I have a question for you, as follows: first, I ran experiments on my own data and have obtained results. How can I use these weights to test the test set? In addition, I used the dmt-voc-20-1__p5--i weights and tested with the trained model, but the results were very poor; I do not know whether my testing method is correct.