Closed huaweiping closed 5 months ago
Hi, @huaweiping I think there might be three things you can do:
Hi @zhangxgu
Thanks so much for your reply! I just ran a comparison between the ground truth and the prediction, and the result looks a bit weird to me.
Here's the groundtruth:
And this is the prediction:
The dataset is about ocean eddies. I checked the code, and it seems the percentage figure above each bounding box indicates the score. Between the two figures, only one object at the bottom-left of the prediction is correctly labeled as what it is supposed to be in the ground truth.
The remaining predicted features at the bottom look fine as far as the bounding boxes go, but I don't see any segmentation masks there. Some predicted features in the top region are missing, but this happens with other architectures as well, so it's probably fine.
Do I misunderstand the meaning of the score, or should I tweak num_proposal in the configuration file?
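To make sure I'm reading the score correctly: I'm treating the number above each box as a per-instance confidence and comparing it against a threshold. A minimal numpy sketch of that assumption (the score values here are made up for illustration, not my actual predictions):

```python
import numpy as np

# Hypothetical per-instance scores; in detectron2-style outputs these
# would come from the predicted Instances after inference.
scores = np.array([0.92, 0.41, 0.07, 0.88])
threshold = 0.5

# Keep only detections whose confidence meets the threshold.
keep = scores >= threshold
print(int(keep.sum()))  # 2 detections survive the cut
```

If this reading is right, the low percentages I'm seeing would just mean low-confidence proposals rather than a labeling bug.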
I'll try tweaking the learning rate and see whether that helps.
Thanks
Hi, I have a custom 512x512 dataset with 2 channels (the third channel is set to zero) and I want to train the model on it. The dataset is COCO-like and has been validated with detectron2. Everything looks fine except the training result. This is the result after 45000 iterations:
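For context, this is how I produce the third channel: the two real channels are concatenated with an all-zero plane so each image matches the 3-channel input the model expects. A minimal numpy sketch (array names are placeholders, not my actual pipeline):

```python
import numpy as np

# Hypothetical 2-channel 512x512 sample.
two_channel = np.random.rand(512, 512, 2).astype(np.float32)

# Append an all-zero third channel so the array is HxWx3.
zeros = np.zeros((512, 512, 1), dtype=np.float32)
three_channel = np.concatenate([two_channel, zeros], axis=-1)

print(three_channel.shape)  # (512, 512, 3)
```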
The dataset contains only 2 classes, so I modified num_classes in the diffinst.coco.res50.yaml file and used diffinst.coco.res50.inst.yaml as the instance-segmentation configuration file. This is the diffinst.coco.res50.yaml file:
The image crop size is set to 512x512 in the base config file. I also abandoned the pre-trained weights, since this dataset is far from the general objects in either ImageNet or COCO. Nothing else is significantly changed in the code or configuration files.
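Put together, the configuration edits I described amount to roughly the fragment below. The exact key paths (MODEL.DiffusionInst.NUM_CLASSES, plus the detectron2-style INPUT.CROP and MODEL.WEIGHTS keys) are my reading of the config schema, so please check them against the shipped yaml files:

```yaml
# Sketch of the overrides described above; key paths are assumptions
# based on DiffusionDet/detectron2-style configs -- verify against the
# stock diffinst.coco.res50.yaml before using.
MODEL:
  DiffusionInst:
    NUM_CLASSES: 2        # dataset has only 2 classes
  WEIGHTS: ""             # train from scratch, no pre-trained backbone
INPUT:
  CROP:
    ENABLED: True
    TYPE: "absolute"
    SIZE: [512, 512]      # match the 512x512 images
```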
I believe I did something wrong but have no idea what. Can anyone help me figure it out?