ShriramGithub7 opened 1 year ago
It might be useful to write it this way:

```python
del checkpoint["model"]["detr.class_embed.weight"]
del checkpoint["model"]["detr.class_embed.bias"]
```
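For context, the two `del` lines above operate on a checkpoint loaded with `torch.load`. Here is a minimal, self-contained sketch of the full load/delete/save flow; the dummy dict stands in for a real checkpoint such as `detr-r50-e632da11.pth`, which has many more keys under `"model"`, and the file names here are made up:

```python
import os
import tempfile

import torch

# Minimal stand-in for a DETR checkpoint; a real one has many more
# keys under "model".
dummy = {"model": {
    "detr.class_embed.weight": torch.zeros(92, 256),
    "detr.class_embed.bias": torch.zeros(92),
    "detr.bbox_embed.layers.0.weight": torch.zeros(256, 256),
}}
path = os.path.join(tempfile.mkdtemp(), "checkpoint.pth")
torch.save(dummy, path)

# Delete the classification head so the remaining weights can be reused
# with a different number of classes, then save under a new name.
checkpoint = torch.load(path, map_location="cpu")
for key in ("detr.class_embed.weight", "detr.class_embed.bias"):
    checkpoint["model"].pop(key, None)  # pop() avoids a KeyError if a key is absent
torch.save(checkpoint, os.path.join(os.path.dirname(path), "checkpoint_no_head.pth"))
```

Using `pop(key, None)` instead of `del` makes the snippet tolerant of checkpoints whose head keys are named differently.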
Thanks Elm. It worked and I am no longer getting that error. However, I am still not able to train a good model for instance segmentation. I am using 2 steps for training. Could you please advise if I am doing anything wrong?
Step-1: Training for bounding boxes -
```shell
!python main.py \
    --dataset_file coco \
    --coco_path "carDamageDataset" \
    --output_dir "outputsSegment-x" \
    --resume "detr-r50-e632da11.pth" \
    --epochs 51 \
    --lr=1e-4 \
    --batch_size=1 \
    --num_workers=1
```
If I run inference for bounding boxes, the model created in step-1 gives good results (even though mAP is only around 30%).
Step-2: Training for instance segmentation -
```shell
!python main.py \
    --dataset_file coco \
    --coco_path "carDamageDataset" \
    --output_dir "outputsSegment-y" \
    --resume "detr-r101-panoptic-40021d53.pth" \
    --masks \
    --lr_drop 200 \
    --frozen_weights "outputsSegment-x/checkpoint.pth" \
    --epochs 50 \
    --lr=1e-4 \
    --batch_size=1 \
    --num_workers=1 \
    --device='cpu'
```
The step-2 output is not good (mAP is 0.1%, not even 1%) and the model is not able to create masks on images.
I feel the issue might be in RESUME or some other parameter. Could you please provide your inputs, or let me know how the model should be trained in 2 steps for instance segmentation using transfer learning?
Hi Shriram,
It may be useful to delete `--resume "detr-r101-panoptic-40021d53.pth"`.

When the model is initialized, the `frozen_weights` are loaded first, and if `resume` exists, the `frozen_weights` are overridden by it. If this is the first time you train this model, this will cause you to lose the bounding-box weights trained in the previous step. It's fine to use `--resume` when you reload a segmentation model you've already trained.
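The load order described above can be sketched like this (an illustrative toy, not the actual `main.py` code; the function name and the dicts are made up):

```python
# Sketch of the described behaviour: frozen_weights is applied first,
# then resume, so a resume checkpoint overwrites the detection weights
# you intended to freeze.
def init_weights(frozen_weights=None, resume=None):
    weights = {}
    if frozen_weights is not None:
        weights.update(frozen_weights)  # step-1 detection weights, loaded first
    if resume is not None:
        weights.update(resume)          # loaded second: overrides the above
    return weights

# Passing both means the resume checkpoint wins:
w = init_weights(frozen_weights={"backbone": "step1"},
                 resume={"backbone": "panoptic"})
print(w["backbone"])  # prints "panoptic" - the step-1 weights are lost
```

This is why dropping `--resume` on the first segmentation run keeps the step-1 detection weights intact.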
In addition, if your `batch_size` is quite small, it is suggested to reduce the learning rate accordingly: https://github.com/facebookresearch/detr/issues/149#issuecomment-657711794
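One common heuristic for this is linear learning-rate scaling. The numbers below (base batch size 16, base lr 1e-4, matching the flags in this thread) are an assumption, and the helper is just a rule of thumb, not part of DETR:

```python
# Linear learning-rate scaling heuristic: scale lr in proportion to
# batch size relative to the batch size the base lr was tuned for.
def scaled_lr(base_lr=1e-4, base_batch=16, batch_size=1):
    return base_lr * batch_size / base_batch

print(scaled_lr())  # 1e-4 * 1/16 = 6.25e-06 for batch_size=1
```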
Hi,
For object detection, I made the changes below, and training and inference were successful.
```python
del checkpoint["model"]["class_embed.weight"]
del checkpoint["model"]["class_embed.bias"]
```
For instance segmentation, I used the code below, but I am getting an error:
```python
checkpoint = torch.load('./detr-r50-panoptic-00ce5173.pth', map_location=None)
del checkpoint["model"]["class_embed.weight"]
del checkpoint["model"]["class_embed.bias"]
```
```
KeyError: 'class_embed.weight'
```
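The `KeyError` is consistent with the suggestion at the top of the thread: in the segmentation checkpoints the detector is wrapped, so the head keys carry a `detr.` prefix (`detr.class_embed.weight`) rather than the bare names used for the detection-only checkpoint. A prefix-agnostic delete sidesteps this; the dummy keys below stand in for the real checkpoint contents, which are an assumption here:

```python
# Stand-in for a segmentation checkpoint, where head keys are prefixed
# with "detr." because the detector is wrapped in a segmentation model.
checkpoint = {"model": {
    "detr.class_embed.weight": "...",
    "detr.class_embed.bias": "...",
    "detr.backbone.0.body.conv1.weight": "...",
}}

# Match the head keys by suffix instead of hard-coding the full name,
# so the same code works with or without the "detr." prefix.
head_keys = [k for k in checkpoint["model"]
             if k.endswith(("class_embed.weight", "class_embed.bias"))]
for k in head_keys:
    del checkpoint["model"][k]

print(sorted(checkpoint["model"]))  # only the backbone key remains
```

Printing `list(checkpoint["model"].keys())` on the real file is an easy way to confirm the exact key names before deleting.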
Please advise what changes need to be made in the code, or how to delete the pretrained weight and bias.
Regards, Shriram