tanveer6715 opened this issue 2 years ago
Hi, could you train and test normally with your checkpoints trained on the ConcreteDamageCityScapes dataset?
Yes, it works well with dist_test.sh but shows the same error when testing with a single GPU. I also need to measure the inference speed. Could you explain how I can do that?
Could you use dist_test.sh with 1 GPU to run inference?
I think it should be unrelated to single vs. multiple GPUs. You can check your training log to see whether ConcreteDamageCityScapes is registered in MMCV; in theory it should be registered like this:
https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/datasets/isprs.py#L6-L7
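For reference, registration typically looks something like the sketch below, along the lines of the linked isprs.py (the class names, palette and suffixes here are placeholders, not taken from your dataset):

```python
# Minimal sketch of registering a custom dataset with mmseg.
# CLASSES / PALETTE / suffixes below are placeholders, not your real labels.
from mmseg.datasets.builder import DATASETS
from mmseg.datasets.custom import CustomDataset


@DATASETS.register_module()
class ConcreteDamageCityScapes(CustomDataset):
    """Hypothetical concrete-damage dataset registered with mmseg."""

    CLASSES = ('background', 'crack', 'spalling')
    PALETTE = [[0, 0, 0], [255, 0, 0], [0, 255, 0]]

    def __init__(self, **kwargs):
        # img_suffix / seg_map_suffix are assumptions about your file naming
        super().__init__(img_suffix='.png', seg_map_suffix='.png', **kwargs)
```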
Yes, dist_test.sh with 1 GPU also works. ConcreteDamageCityScapes is already registered, which is why training succeeded. But I found that custom data causes an error with the benchmark.py script, and mmsegmentation also does not seem to support custom models when computing FLOPs or the number of parameters.
I don't know what causes this problem. It seems to be a very old bug that appears whenever a custom module is used. Please check this issue; it is the same as mine, but I couldn't find a proper solution there: https://github.com/open-mmlab/mmdetection/issues/3751
I opened that link and found it has been fixed. You could try the solutions mentioned there.
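If the fix you end up using is the custom_imports mechanism discussed in that thread, the config addition looks roughly like the sketch below (the module path is a placeholder for wherever your dataset class actually lives):

```python
# Sketch: have the config import the module that registers the custom dataset,
# so the registry is populated even for single-GPU tools such as benchmark.py.
# 'my_project.datasets.concrete_damage' is a placeholder module path.
custom_imports = dict(
    imports=['my_project.datasets.concrete_damage'],
    allow_failed_imports=False)
```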
Yes, I tried the proposed solutions, but it still causes problems when testing on the custom data.
I also encounter the same bug with single-GPU testing, while multi-GPU testing works well. I have checked the link, and the custom dataset is registered.
I am experiencing the same problem: the dataset configs are correctly imported in the mmdet.datasets module but are somehow not available in the registry when firing up training. I am using an extension of CocoDataset, and even that one is not available. Creating a custom dataset config on top of the CocoDataset type results in the same problem. Did you guys find a solution? @zwbx @tanveer6715
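For anyone debugging the same thing, a quick registry check along these lines can confirm whether the class ever got registered ('my_project.datasets' and 'MyCocoExtension' below are placeholder names):

```python
# Sketch: check whether a custom dataset class actually landed in the mmdet registry.
# 'my_project.datasets' and 'MyCocoExtension' are placeholders for your own names.
from mmdet.datasets import DATASETS

import my_project.datasets  # noqa: F401  (importing runs @DATASETS.register_module())

print(DATASETS.get('MyCocoExtension') is not None)  # True only after the module was imported
print(sorted(DATASETS.module_dict.keys()))           # everything currently registered
```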
Hi. I have created my own dataset and trained SegFormer successfully, but when I check the inference speed of SegFormer using the benchmark.py script it shows the attached error. I guess mmsegmentation does not support custom datasets or models for measuring inference speed or model parameters.
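In case it is useful while benchmark.py is failing, a rough manual timing loop along these lines can approximate single-GPU inference speed (a sketch only; the config, checkpoint and image paths are placeholders):

```python
# Sketch: rough single-GPU inference-speed measurement with the standard mmseg API.
# All three paths below are placeholders for your own files.
import time

import torch
from mmseg.apis import inference_segmentor, init_segmentor

config_file = 'configs/segformer/my_segformer_config.py'  # placeholder
checkpoint_file = 'work_dirs/my_segformer/latest.pth'      # placeholder
img = 'demo/demo.png'                                       # placeholder

model = init_segmentor(config_file, checkpoint_file, device='cuda:0')

for _ in range(5):  # warm-up iterations
    inference_segmentor(model, img)
torch.cuda.synchronize()

num_iters = 50
start = time.perf_counter()
for _ in range(num_iters):
    inference_segmentor(model, img)
torch.cuda.synchronize()
print(f'~{num_iters / (time.perf_counter() - start):.2f} img/s')
```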