Closed rrtjr closed 4 years ago
Hi, I reviewed the code again and saw that you used chunk.json for training, unlike the tutorial, which used VOC. This answers my issue. With this, did you use some tool to generate the vertices for the segments, or do you already have internal scripts for that?
We did not include the segmentation vertex coordinates in the annotations. We used the LabelImg tool to annotate the images. You can find sample annotations here: https://drive.google.com/drive/folders/1ID1sTk1VKHCBeeDEC_BzHBGRE7JkC0qq?usp=sharing
We then converted all of these annotations to JSON files using a custom script, which we will be releasing soon.
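Since the conversion script is not released yet, here is a minimal sketch of how LabelImg's Pascal VOC XML output could be turned into a COCO-style JSON. The function name, the category list, and the choice of using the box itself as a rectangular segmentation polygon are my assumptions, not the authors' actual script:

```python
# Sketch: convert LabelImg (Pascal VOC XML) annotations to COCO-style JSON.
# CATEGORIES and convert_voc_to_coco are illustrative names; adjust the
# labels to match your own dataset.
import json
import os
import xml.etree.ElementTree as ET

CATEGORIES = ["table", "cell"]  # assumed labels

def convert_voc_to_coco(xml_dir, out_json):
    images, annotations = [], []
    ann_id = 1
    for img_id, name in enumerate(sorted(os.listdir(xml_dir)), start=1):
        if not name.endswith(".xml"):
            continue
        root = ET.parse(os.path.join(xml_dir, name)).getroot()
        size = root.find("size")
        images.append({
            "id": img_id,
            "file_name": root.findtext("filename"),
            "width": int(size.findtext("width")),
            "height": int(size.findtext("height")),
        })
        for obj in root.iter("object"):
            box = obj.find("bndbox")
            x1, y1 = int(box.findtext("xmin")), int(box.findtext("ymin"))
            x2, y2 = int(box.findtext("xmax")), int(box.findtext("ymax"))
            w, h = x2 - x1, y2 - y1
            annotations.append({
                "id": ann_id,
                "image_id": img_id,
                "category_id": CATEGORIES.index(obj.findtext("name")) + 1,
                "bbox": [x1, y1, w, h],  # COCO uses [x, y, width, height]
                "area": w * h,
                # Boxes only: use the rectangle itself as the segmentation
                "segmentation": [[x1, y1, x2, y1, x2, y2, x1, y2]],
                "iscrowd": 0,
            })
            ann_id += 1
    coco = {
        "images": images,
        "annotations": annotations,
        "categories": [{"id": i + 1, "name": c} for i, c in enumerate(CATEGORIES)],
    }
    with open(out_json, "w") as f:
        json.dump(coco, f)
```

The resulting file can then be pointed to by `ann_file` in the config.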
Thanks guys for an awesome job! :)
Hi @DevashishPrasad, where can I find these JSON files referenced in the config below?
```python
data = dict(
    imgs_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type=dataset_type,
        ann_file='/content/drive/My Drive/chunk.json',
        img_prefix='/content/drive/My Drive/chunk_images/',
        pipeline=train_pipeline),
    val=dict(
        type=dataset_type,
        ann_file=data_root + 'VOC2007/test.json',
        img_prefix=data_root + 'VOC2007/Test/',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'VOC2007/test.json',
        img_prefix=data_root + 'VOC2007/Test/',
        pipeline=test_pipeline))
```
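Before training, it can help to verify that the file you point `ann_file` at is actually a COCO-style JSON with the keys `CocoDataset` expects. A small sanity-check sketch (the function name is mine, not part of mmdetection):

```python
# Sanity-check a COCO-style annotation JSON before passing it to ann_file.
import json

def check_coco_json(path):
    with open(path) as f:
        coco = json.load(f)
    # Top-level keys a COCO-style loader expects.
    for key in ("images", "annotations", "categories"):
        assert key in coco, f"missing top-level key: {key}"
    # Every annotation must reference a known image id.
    image_ids = {img["id"] for img in coco["images"]}
    for ann in coco["annotations"]:
        assert ann["image_id"] in image_ids, "annotation points at unknown image"
    return len(coco["images"]), len(coco["annotations"])
```

For example, `check_coco_json('/content/drive/My Drive/chunk.json')` returns the image and annotation counts if the file is well-formed, and raises an `AssertionError` otherwise.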
thanks
I am getting the same error and don't see any conclusion on how it was solved. Can someone post the fix here?
Hi, I am currently fine-tuning the pre-trained model (epoch36.pth), but I encounter an error whenever I load my custom dataset generated using LabelImg.
I noticed specifically from the config file that the training pipeline requires masks to be enabled.
Is there something to be done when annotating with LabelImg that you did differently to indicate the existence of label masks? I followed the example provided but am still getting an error about masks. I also set `with_mask=False`, but I honestly don't know how relevant that is to the whole training process. Example annotation from LabelImg:
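One likely cause of the mask error: LabelImg produces bounding boxes only, so annotations converted from it may lack the `"segmentation"` field that a mask-enabled pipeline reads. A workaround sketch (my assumption, not the authors' fix) is to fill in each box as a rectangular polygon in the COCO JSON:

```python
# Sketch: add rectangular "segmentation" polygons to box-only COCO
# annotations, so a mask-enabled pipeline has something to load.
def add_box_segmentations(coco):
    for ann in coco["annotations"]:
        if not ann.get("segmentation"):
            x, y, w, h = ann["bbox"]  # COCO bbox is [x, y, width, height]
            # Clockwise rectangle: the four box corners as one polygon.
            ann["segmentation"] = [[x, y, x + w, y, x + w, y + h, x, y + h]]
    return coco
```

Load your JSON, run it through this, and write it back out; alternatively, disabling masks properly usually means removing the mask head and mask targets from the config, not just setting `with_mask=False` in the pipeline.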
Thank you and I appreciate this awesome work by the way.