lavenda-zhou opened this issue 2 years ago
This shows that the patch with id 16 is not included in refined.pkl. You may check whether refined.pkl is consistent with patches/detail_dir/val.
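A quick way to run that consistency check is to compare the patch ids stored in refined.pkl against the files under the detail directory. This is a sketch, not the repository's code: it assumes refined.pkl holds a dict keyed by integer patch id and that detail files are named `<id>.<ext>`; adjust both assumptions to your actual layout.

```python
import os
import pickle

def find_missing_patch_ids(refined_pkl, detail_dir):
    """Return patch ids present under detail_dir but absent from refined.pkl.

    Assumes refined.pkl is a dict keyed by integer patch id and that each
    detail file is named '<id>.<ext>' (both are assumptions to adapt).
    """
    with open(refined_pkl, "rb") as f:
        results = pickle.load(f)
    detail_ids = {
        int(os.path.splitext(name)[0])
        for name in os.listdir(detail_dir)
        if os.path.splitext(name)[0].isdigit()
    }
    return sorted(detail_ids - set(results))
```

Any id this returns (16, in the error above) is a patch that was split but never refined, which is exactly the mismatch that triggers the KeyError.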
Thank you for your reply. I got the refined.json. Can I visualize the result? I want to see the final merged picture.
You can refer to #15 for visualization.
Thanks, I will look into it. Sorry to bother you again, but I am still confused about inference_coco.sh. Its first step is python $BPR_ROOT/tools/split_patches.py, and through this step I got maskrcnn_r50.val.refined.json/patches. Is its result the same as the maskrcnn_r50/patches that comes from the following command?

IOU_THRESH=0.15 \
sh tools/prepare_dataset_coco.sh \
    mask_rcnn_r50.train.segm.json \
    mask_rcnn_r50.val.segm.json \
    maskrcnn_r50 \
    70000

Why do you run split_patches.py again in the inference part? Thank you for your patience in answering my earlier questions.
The difference between them is that the former only processes the validation set, while the latter also processes the training set. For the validation set, both do the same thing.
Thank you for your nice reply; I got it. Does that mean the IOU_THRESH needs to be the same in these two splitting processes? I found that if I use different IOU_THRESH values, the number of patches is not the same: I got 1899 patches in maskrcnn_r50/patches/mask_dir/val and 2488 patches in maskrcnn_r50.val.refined.json/patches/mask_dir/val. The refined.pkl is not consistent with maskrcnn_r50.val.refined.json/patches/detail_dir/val because refined.pkl only covers the 1899 patches. That is why I ran into the first question I asked you. Does the different IOU_THRESH cause this problem? I also have another question: how can I get the series of APs (AP50, AP75, APS, APM, APL) after I get the merged picture? Thank you in advance.
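The 1899-vs-2488 mismatch is consistent with how a threshold filters candidates: a stricter IOU_THRESH keeps fewer patches, so two runs with different thresholds produce different patch sets. The real filtering lives in tools/filter.py; the toy sketch below (not the repository's code, names hypothetical) only illustrates that effect.

```python
def filter_patches_by_iou(patch_ious, iou_thresh):
    """Keep patch ids whose score exceeds the threshold.

    `patch_ious` maps patch id -> IoU score; a toy stand-in for the
    filtering done during dataset preparation.
    """
    return sorted(pid for pid, iou in patch_ious.items() if iou > iou_thresh)

# The same candidates split with two different thresholds keep different
# numbers of patches, so the resulting patch ids no longer line up.
candidates = {0: 0.10, 1: 0.20, 2: 0.40, 3: 0.60}
strict = filter_patches_by_iou(candidates, 0.35)  # fewer patches survive
loose = filter_patches_by_iou(candidates, 0.15)   # more patches survive
```

Whichever threshold was used to split the patches that refined.pkl was generated from must also be used for the patches fed to merge_patches.py.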
The IOU_THRESH values of the training and validation sets don't need to be the same, but the same validation set must be used throughout the inference. E.g., if you generated refined.pkl on maskrcnn_r50/patches/mask_dir/val, then you need to use the same path for merge_patches.py.

For AP evaluation, you can refer to #24.
Hello, author. I completed the whole process with your help, thank you. Then I added some pictures to my dataset, and a problem occurs in the first step. When I run prepare_dataset_coco.sh, the output is as follows.
(open-mmlab) mint@mint-B460HD3:~/BPR-main$ IOU_THRESH=0.15 sh tools/prepare_dataset_coco.sh ms_rcnn_r50.train.segm.json ms_rcnn_r50.val.segm.json ms_rcnn_r50

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "././tools/filter.py", line 65, in

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "././tools/split_patches.py", line 291, in
I don't know what is wrong with my dataset. I look forward to your reply; it is very important to me. Thank you in advance!
I have faced a similar issue and I think I have resolved it. I am assuming you used Mask R-CNN as your segmenter(?). When calling detect in the inspect_coco_model.ipynb notebook, the notebook calls model.load_image_gt(), which in turn calls utils.resize_image() before running through the detector. This reshapes the image and does not undo it later, so there is a chance that your output binary masks are set to the shape defined there while the images in your original instance set keep their old size.

So when you are detecting an image, you do not want to call utils.resize_image() beforehand, which model.load_image_gt() does. I made a small change to this function, shown below, by introducing a resize flag in model.load_image_gt(). The utils.resize_image() function accepts a mode="none" argument, which extracts the important metadata without resizing the image. You may want to visualise your instances after making this change to check that the masks are still correct.
Note that resize_image is also called in model.detect(), but that transform is reversed after detection, so you should leave it unchanged. This matters because Mask R-CNN can only accept fixed-size images across samples.
Also note that if you built Mask R-CNN with setup.py, you will need to re-run it so your script picks up the change to the model library.
def load_image_gt(dataset, config, image_id, augment=False, augmentation=None,
                  use_mini_mask=False, resize=True):
    # New flag: skip resizing when the caller wants the original resolution.
    if not resize:
        mode = "none"
    else:
        mode = config.IMAGE_RESIZE_MODE
    image = dataset.load_image(image_id)
    mask, class_ids = dataset.load_mask(image_id)
    original_shape = image.shape
    image, window, scale, padding, crop = utils.resize_image(
        image,
        min_dim=config.IMAGE_MIN_DIM,
        min_scale=config.IMAGE_MIN_SCALE,
        max_dim=config.IMAGE_MAX_DIM,
        mode=mode)
    mask = utils.resize_mask(mask, scale, padding, crop)
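The shape mismatch this flag avoids can be seen in a self-contained toy (NumPy only; detect_fixed_size is a hypothetical stand-in for a detector that resizes its input to a fixed square and never maps the mask back):

```python
import numpy as np

def detect_fixed_size(image, target=1024):
    """Hypothetical detector: resizes its input to a fixed square and
    returns a mask at that resolution without undoing the resize."""
    return np.zeros((target, target), dtype=bool)

image = np.zeros((600, 800, 3), dtype=np.uint8)  # original-size image
mask = detect_fixed_size(image)                  # mask in resized space
# mask.shape is (1024, 1024) while image.shape[:2] is (600, 800): the mask
# no longer aligns with the original annotations unless the resize is
# reversed, or skipped entirely as with mode="none".
```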
Hello, author. I have the following problem in the process of inference:

reassemble ...
python ./tools/merge_patches.py ms_rcnn_r50.val.segm.json data/car-coco/annotations/instances_val2017.json ms_rcnn_r40.val.refined.json/refined.pkl ms_rcnn_r40.val.refined.json/patches/detail_dir/val ms_rcnn_r40.val.refined.json/refined.json
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
  0%|          | 0/137 [00:00<?, ?it/s]
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/home/mint/anaconda3/envs/open-mmlab/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "./tools/merge_patches.py", line 38, in run_inst
    patch_mask = results[pid]
KeyError: 16
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "./tools/merge_patches.py", line 110, in <module>
    start()
  File "./tools/merge_patches.py", line 80, in start
    for r in p.imap_unordered(run_inst, enumerate(dt)):
  File "/home/mint/anaconda3/envs/open-mmlab/lib/python3.7/multiprocessing/pool.py", line 748, in next
    raise value
KeyError: 16

I look forward to your reply, which is very important to me. Thank you!
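The KeyError means patch id 16 exists in the detail directory but not in refined.pkl, the same inconsistency discussed above, and the proper fix is to regenerate the patches so both sides use the same split. Purely as a diagnostic, a defensive lookup (with results being the dict loaded from refined.pkl; the function name is hypothetical, not the repository's code) would look like:

```python
def lookup_refined_mask(results, pid):
    """Return the refined mask for patch id `pid`, or None when the
    patch was split with different settings and never refined."""
    patch_mask = results.get(pid)
    if patch_mask is None:
        print(f"warning: patch {pid} not in refined.pkl; "
              "keeping the coarse mask for it instead")
    return patch_mask
```

Using .get() instead of results[pid] turns the crash into a warning, which helps to list every inconsistent patch id in one pass before regenerating the data.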