rcruzgar opened this issue 2 years ago
Hi @rcruzgar,
Thanks for the issue. However, it goes beyond our scope to help you debug. We would suggest you run our provided tutorials (e.g., Cityscapes).
Cheers,
Hi! I got the same issue with semantic segmentation. @rcruzgar, maybe you've already solved it?
Thanks a lot!
Hello,
Thanks for reporting the issue. Unfortunately, if you want to train a semantic-only model, you cannot use the trained panoptic checkpoints for initialization (as the error log shows, the job fails to load the trained checkpoint). You need to train a new one yourself.
Cheers,
So that means there are no pretrained semantic-only models in the deeplab2 repo, right?
Hi, I added `restore_semantic_last_layer_from_initial_checkpoint: false` to my textproto config:

```
model_options {
  initial_checkpoint: path-to-pretrained-model  # for me: max_deeplab_l_backbone_os16_axial_deeplab_cityscapes_trainfine/ckpt-60000
  restore_semantic_last_layer_from_initial_checkpoint: false
  ...
}
```

Then it worked for my own semantic-only dataset.
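To see why skipping the semantic last layer helps, here is a minimal sketch of selective checkpoint restoration. The variable names and shapes are hypothetical (not actual deeplab2 checkpoint keys): the last semantic layer's output channels depend on the number of classes, so a checkpoint trained on a dataset with a different class count cannot supply those weights.

```python
# Illustrative sketch only: variable names/shapes below are made up,
# not real deeplab2 checkpoint keys.

def filter_restorable(checkpoint_vars, model_vars, skip_prefixes=()):
    """Keep checkpoint variables whose shape matches the new model and
    that are not explicitly skipped (e.g. the semantic last layer)."""
    restorable = {}
    for name, shape in checkpoint_vars.items():
        if any(name.startswith(p) for p in skip_prefixes):
            continue  # explicitly excluded, like restore_semantic_last_layer_... = false
        if model_vars.get(name) == shape:
            restorable[name] = shape
    return restorable

# Checkpoint trained with 19 semantic classes (e.g. Cityscapes)...
checkpoint_vars = {
    "backbone/conv1/kernel": (7, 7, 3, 64),
    "semantic_head/last_layer/kernel": (1, 1, 256, 19),
}
# ...new model for a custom dataset with, say, 5 classes: the last-layer
# shape no longer matches, so restoring it would fail.
model_vars = {
    "backbone/conv1/kernel": (7, 7, 3, 64),
    "semantic_head/last_layer/kernel": (1, 1, 256, 5),
}

restored = filter_restorable(checkpoint_vars, model_vars,
                             skip_prefixes=("semantic_head/last_layer",))
print(sorted(restored))  # → ['backbone/conv1/kernel']
```

Only the backbone weights survive the filter, which is exactly what the config flag accomplishes: the pretrained features are reused while the class-dependent last layer is trained from scratch.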
Hi,
I am trying to do semantic segmentation using the Panoptic-Deeplab example (https://github.com/google-research/deeplab2/blob/main/g3doc/projects/panoptic_deeplab.md) and setting this to false in the config file:
See the proto file (renamed to .txt so it can be uploaded here): resnet50_os16_semantic.txt, which is basically this.
I also downloaded the checkpoint resnet50_os16_panoptic_deeplab_coco_train.tar.gz and, after untarring it, added its path to the proto file.
I would also like to attach my training annotations, but they are .json and only .txt files can be uploaded here.
I am running everything on a Jupyter Notebook environment from AWS Sagemaker, with a GPU.
I obtain the following error:
Note that I set the crop size to 100. After changing the crop size to 3, I get:
I guess that's because it's iterating over a new image.
I run the training this way:
Could you please give me any clue?
I am a beginner with segmentation models, so I might be making incorrect assumptions.
Thanks a lot!