chainer / chainercv

ChainerCV: a Library for Deep Learning in Computer Vision

Problems of FCIS #912

Closed: otsukaoly closed this issue 5 years ago

otsukaoly commented 5 years ago

I'm running the instance segmentation example on particle images.

chainercv/chainercv/examples/fcis

I have installed chainer 5.0.0 and chainercv 0.11.0.

I ran into the three problems below.

1. I crop the input image to reduce the number of instances. When no anchor falls inside the cropped image, an error occurs in the 'argmax' call in the '_calc_ious' method of the 'AnchorTargetCreator' class. How can I fix it?

chainercv/links/model/faster_rcnn/utils/anchor_target_creator.py

def __call__(self, bbox, anchor, img_size):
    img_H, img_W = img_size

    inside_index = _get_inside_index(anchor, img_H, img_W)
    anchor = anchor[inside_index]

def _calc_ious(self, anchor, bbox, inside_index):
    # ious between the anchors and the gt boxes
    ious = bbox_iou(anchor, bbox)
    # the reported error occurs here when `ious` is empty
    argmax_ious = ious.argmax(axis=1)

2. It seems that the 'FCISTrainChain' class does not support a batch size greater than 1. Do I have to set the batch size to 1?

chainercv/experimental/links/model/fcis/fcis_train_chain.py

3. If the number of instances is greater than 512, an error occurs in the '_resize' function. It seems that the 'cv2' module cannot handle more than 512 channels. Do I have to limit the number of instances to 512 or fewer?

chainercv/transforms/image/resize.py

otsukaoly commented 5 years ago

Commenting on my own issue.

Regarding problem 1: I increased the crop size from 128 to 512, and that fixed it.

otsukaoly commented 5 years ago

About problem 2, I have additional information.

The error happens in the 'concat_examples' function used by 'chainercv/examples/fcis/train.py'.

The error message is as follows: "Exception in main training loop: all the input array dimensions except for the concatenation axis must match exactly"

Specifically, the error happens at line 164 of 'chainer/dataset/convert.py':

return xp.concatenate([array[None] for array in arrays])
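
For context, here is a minimal sketch (with made-up mask shapes) of why that concatenation fails once two images in a batch contain different numbers of instances:

import numpy as np

# Hypothetical per-image instance masks: image A has 3 instances, image B has 5.
masks_a = np.zeros((3, 128, 128), dtype=np.bool_)
masks_b = np.zeros((5, 128, 128), dtype=np.bool_)

# concat_examples effectively stacks the per-example arrays like this, which
# fails because the shapes differ outside the concatenation axis:
np.concatenate([masks_a[None], masks_b[None]])
# ValueError: all the input array dimensions except for the concatenation
# axis must match exactly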

knorth55 commented 5 years ago

Problem 1. You can filter non-annotated data out of your dataset as below (a rough sketch follows after the link). https://github.com/chainer/chainercv/blob/master/examples/fcis/train_coco_multi.py#L88-L98
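
For reference, a rough sketch of that filtering idea (not the linked example's exact code), assuming each example is a tuple (img, mask, label) with mask of shape (n_instances, H, W):

import numpy as np
from chainer.datasets import SubDataset

def indices_with_instances(dataset):
    # Keep only the indices whose example contains at least one instance mask.
    keep = []
    for i in range(len(dataset)):
        _, mask, _ = dataset[i]
        if len(mask) > 0:
            keep.append(i)
    return np.asarray(keep, dtype=np.int32)

# Usage: wrap the original dataset so images without annotations are skipped.
# indices = indices_with_instances(train_dataset)
# train_dataset = SubDataset(train_dataset, 0, len(indices), order=indices)

Note that this scans the whole dataset once up front, which can be slow for large datasets.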

Problem 2. FCIS currently only supports batchsize = 1. You need to modify FCIS, or you can use Mask R-CNN in examples/fpn/train_multi.py. https://github.com/chainer/chainercv/blob/master/examples/fpn/train_multi.py#L177-L181

Problem 3. I have no idea about the _resize function limitation in OpenCV. There are some related discussions, so please read the articles below. But the easiest way is to split the array into two images, resize each, and concatenate them back into one (see the sketch after the links). https://stackoverflow.com/questions/41151632/is-there-a-max-array-size-limitation-for-the-opencv-resize-function https://answers.opencv.org/question/46296/increase-the-maximum-amount-of-channels-in-cvmat/
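
A minimal sketch of that split-resize-concatenate workaround, assuming a (C, H, W) float array and the 512-channel limit reported above (resize_in_chunks is a hypothetical helper, not ChainerCV code):

import numpy as np
import cv2

def resize_in_chunks(img, size, chunk=512):
    # Resize a (C, H, W) array with cv2 in chunks of at most `chunk` channels,
    # then concatenate the resized chunks back along the channel axis.
    out_H, out_W = size
    parts = []
    for start in range(0, img.shape[0], chunk):
        part = img[start:start + chunk].transpose(1, 2, 0)  # CHW -> HWC for cv2
        part = cv2.resize(part, (out_W, out_H))
        if part.ndim == 2:  # cv2 drops the channel axis when only one remains
            part = part[:, :, None]
        parts.append(part.transpose(2, 0, 1))  # back to CHW
    return np.concatenate(parts, axis=0)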

otsukaoly commented 5 years ago

Thank you for your comments. I tried them; my additional question and my own comments are below.

The additional question is about problem 2.

Problem 2.

Let me confirm, because I don't fully understand the details.

I was using chainercv 0.11.0, but thanks to your advice I noticed that train_coco_multi.py and train_sbd_multi.py exist in the latest version, 0.13.1. I can specify the batchsize as an argument to these scripts, and they use the SerialIterator class. Since the FCISTrainChain class only supports batchsize = 1, I can actually only pass batchsize = 1 to these scripts. Is my understanding correct?

The fpn/train_multi.py file which you recommended uses the MultiprocessIterator class. With MultiprocessIterator, the work is spread over multiple processes, so a batchsize larger than 1 can be specified. Is my understanding correct? I have never used MultiprocessIterator, so I may have misunderstood.

My own comments concern problem 1 and problem 3.

Problem 1.

I understand that I can certainly fix it by following your advice, but it takes time up front to check all the data.

I considered an approach that reads the next example with SerialIterator or TransformDataset when an image contains no instances, but then there is another problem: the value returned by len() no longer matches the true dataset size.

The problem happens when there is no instance in the cropped region. I modified my code to crop the image again until the cropped region contains at least one instance, with a maximum number of retries. In my case this fixed the issue (a rough sketch follows below).
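
A rough sketch of that retry approach (crop_with_instances is a hypothetical helper built on chainercv.transforms.random_crop; the actual code may differ):

import numpy as np
from chainercv import transforms

def crop_with_instances(img, mask, label, size, max_retry=10):
    # Randomly crop (img, mask, label) until at least one instance survives
    # inside the crop, giving up after max_retry attempts.
    for _ in range(max_retry):
        cropped_img, param = transforms.random_crop(img, size, return_param=True)
        cropped_mask = mask[:, param['y_slice'], param['x_slice']]
        keep = cropped_mask.any(axis=(1, 2))
        if keep.any():
            return cropped_img, cropped_mask[keep], label[keep]
    # Fall back to the last crop even if it contains no instances.
    return cropped_img, cropped_mask[keep], label[keep]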

Problem 3.

I agree with your advice. I cropped the image and fixed the problem.

By the way, I also modified resize.py to handle more than 512 channels by using PIL (a per-channel sketch follows after the link).

https://github.com/otsukaoly/chainercv/blob/master/transforms/image/resize.py

But it is slow when there are more than 512 channels.
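
A minimal per-channel sketch in that spirit (resize_with_pil is a hypothetical helper, not the linked file's exact code):

import numpy as np
import PIL.Image

def resize_with_pil(img, size):
    # Resize a (C, H, W) float array channel by channel with PIL, which has
    # no channel limit but is slower than cv2 when there are many channels.
    out_H, out_W = size
    out = np.empty((img.shape[0], out_H, out_W), dtype=np.float32)
    for c in range(img.shape[0]):
        channel = PIL.Image.fromarray(img[c].astype(np.float32), mode='F')
        out[c] = np.asarray(channel.resize((out_W, out_H), PIL.Image.BILINEAR))
    return out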

knorth55 commented 5 years ago

> I was using chainercv 0.11.0, but thanks to your advice I noticed that train_coco_multi.py and train_sbd_multi.py exist in the latest version, 0.13.1. I can specify the batchsize as an argument to these scripts, and they use the SerialIterator class. Since the FCISTrainChain class only supports batchsize = 1, I can actually only pass batchsize = 1 to these scripts. Is my understanding correct? The fpn/train_multi.py file which you recommended uses the MultiprocessIterator class. With MultiprocessIterator, the work is spread over multiple processes, so a batchsize larger than 1 can be specified. Is my understanding correct? I have never used MultiprocessIterator, so I may have misunderstood.

The problem is not related to the iterator. In FCIS we use a single-batch RPN, while the FPN implementation uses a multi-batch RPN; that is the real reason. We are planning to migrate the faster_rcnn_vgg and fcis implementations to the multi-batch RPN.

> Problem 1. I understand that I can certainly fix it by following your advice, but it takes time up front to check all the data. I considered an approach that reads the next example with SerialIterator or TransformDataset when an image contains no instances, but then there is another problem: the value returned by len() no longer matches the true dataset size. The problem happens when there is no instance in the cropped region. I modified my code to crop the image again until the cropped region contains at least one instance, with a maximum number of retries. In my case this fixed the issue.

There is another way to solve this problem: you can return a zero loss when there are no ground-truth boxes, as below.

# skip examples with no ground-truth boxes by returning a zero-valued loss
if len(bbox) == 0:
    loss = chainer.Variable(self.xp.array(0, dtype=self.xp.float32))
    loss.zerograd()
    return loss

> By the way, I also modified resize.py to handle more than 512 channels by using PIL. https://github.com/otsukaoly/chainercv/blob/master/transforms/image/resize.py But it is slow when there are more than 512 channels.

Of course, PIL is slower than cv2.

otsukaoly commented 5 years ago

Thank you for your comment again.

Problem 2.

I understand the problem clearly now. I will wait until the fcis implementation is migrated to the multi-batch RPN.

Problem 1.

Thanks for the additional solution.

I closed the issue.