Closed noahzhy closed 5 months ago
yes, will be up soon
Could you please share when you will publish the training manual?
Apologies for the delayed response. Unfortunately, I haven't been able to find the time to update the training scripts/environment due to some urgent personal commitments.
However, you can try referring to the MMDetection docs for training instructions. I'm attaching the lineformer config file that you can use for this process.
You will have to create the dataset in MS-COCO format, with object segmentation masks encoded using RLE (refer to the sample annotation file). More info on the format can be found here.
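For reference, here is a minimal sketch of how a binary mask can be turned into COCO-style uncompressed RLE "counts". This is an illustration of the format only, not code from the repo; in practice you would typically use `pycocotools.mask.encode` for the compressed RLE that tools like MMDetection expect.

```python
def binary_mask_to_rle(mask):
    """Encode a binary mask (list of rows of 0/1) as COCO-style
    uncompressed RLE.

    COCO flattens the mask in column-major order, and 'counts' lists
    run lengths starting with the number of zeros (so a mask that
    begins with a 1 starts its counts with 0).
    """
    h, w = len(mask), len(mask[0])
    # Flatten column by column (Fortran order), as COCO does.
    flat = [mask[r][c] for c in range(w) for r in range(h)]
    counts, prev, run = [], 0, 0
    for v in flat:
        if v == prev:
            run += 1
        else:
            counts.append(run)
            prev, run = v, 1
    counts.append(run)
    return {"size": [h, w], "counts": counts}
```

For example, a 2x2 mask whose second column is foreground, `[[0, 1], [0, 1]]`, flattens column-major to `[0, 0, 1, 1]` and encodes as `counts = [2, 2]`.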
Hope this helps.
Attached Files
val_coco_annot.json lineformer_config.txt
Change the '.txt' extension in config file attached here to '.py'
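Assuming the attachment was downloaded with its original name, the rename is a single command:

```shell
# GitHub does not allow .py attachments, so the config was uploaded as .txt;
# rename it so MMDetection can load it as a Python config module.
mv lineformer_config.txt lineformer_config.py
```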
can you provide instructions for training?
Can you provide the train and test data, or explain how you created it? It is confusing in the paper. In the 'val_coco_annot.json' file you provided, all image files have the same height and width. If you resize the image, how can you create the mask image, since the {x, y} points from the ground truth will change? And if I resize the image after creating the mask, the RLE encoding is no longer correct.
Hi @sazidrj. Sorry for the late response. I have added the dataset generation notebooks inside the 'data_processing' directory. To answer your query: in DataFormat.ipynb you'll see the code snippet 'generate_mask_for_clean_img'. Once the images are resized for batch training (maintaining aspect ratio), the corresponding image transformation matrix is also applied to the ground-truth x, y coordinates, and the mask is generated from these transformed points.
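The key idea above, that the same transform must be applied to both the image and its ground-truth points before the mask is drawn, can be sketched as follows. This is a simplified illustration (a pure uniform scale rather than the repo's full transformation matrix from 'generate_mask_for_clean_img'; the function name `resize_with_points` and the square target size are my assumptions):

```python
def resize_with_points(orig_w, orig_h, target, points):
    """Compute the aspect-ratio-preserving resize of an (orig_w, orig_h)
    image into a target x target canvas, and apply the identical scale
    to the ground-truth (x, y) points.

    Because the points are transformed with the same factor as the
    pixels, a mask rasterized from the scaled points stays aligned with
    the resized image, so the RLE encoding remains valid.
    """
    scale = target / max(orig_w, orig_h)          # fit the longer side
    new_size = (round(orig_w * scale), round(orig_h * scale))
    new_points = [(x * scale, y * scale) for x, y in points]
    return new_size, new_points
```

For example, a 200x100 image resized to fit a 100-pixel canvas gets scale 0.5, so a ground-truth point at (200, 100) moves to (100.0, 50.0), matching the new 100x50 image.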
Thank you for your response
is there a manual to train this?