Closed: trungpham2606 closed this issue 5 years ago
Hi @trungpham2606 and welcome back! Our line of thought was roughly as follows: the heads need information about the objects (encoded in the scene representation) and information about whether any given object belongs to the reference category (encoded in the difference between the scene and reference embeddings). Simply concatenating both allows the heads to decide which information to use for which part of the prediction (region proposals, classification, bbox regression and segmentation). The additional 1x1 convolution was simply chosen to reduce the computational footprint, as we assumed that some of the information in the two encodings is redundant. Does that make it clearer?
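For readers who want the mechanics in code, here is a minimal sketch of that matching step. It is not the authors' implementation: the `MatchingHead` name, the pooled reference vector, the L1 (absolute) difference and the channel sizes are assumptions made for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MatchingHead(nn.Module):
    """Sketch: fuse a scene feature map with its difference to a reference embedding,
    then reduce the concatenated channels with a 1x1 convolution."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # 1x1 conv reduces the concatenated (2 * in_channels) features,
        # assuming part of the information is redundant
        self.reduce = nn.Conv2d(2 * in_channels, out_channels, kernel_size=1)

    def forward(self, scene: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        # scene:     [B, C, H, W] feature map of the scene/query image
        # reference: [B, C]       pooled embedding of the reference image
        ref = reference[:, :, None, None].expand_as(scene)
        diff = torch.abs(scene - ref)            # "does this location match the reference?"
        fused = torch.cat([scene, diff], dim=1)  # keep object info and match info side by side
        return F.relu(self.reduce(fused))        # 1x1 conv reduces the channel count


# Example shapes (channel sizes are placeholders, not the paper's values)
head = MatchingHead(in_channels=256, out_channels=256)
scene = torch.randn(2, 256, 64, 64)
reference = torch.randn(2, 256)
out = head(scene, reference)  # -> [2, 256, 64, 64]
```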
@michaelisc I am trying to use your model for my problem: I have a bunch of coins in a box and want to segment as many coins as possible. Besides the good segmentation results, the model also segments the whole box. Do you have any explanation or idea for coping with that situation?
Do you have an example image? In general false positives are one of the biggest challenges our model has, so an additional detection of the box does not surprise me.
@michaelisc here's my result:
If all your images are very similar you could solve the problem with a simple hack: throw away all large boxes. Alternatively, you could create a new dataset from the good predictions and fine-tune Mask R-CNN or Siamese Mask R-CNN on it.
Oh, and of course you can wait until we fix the fundamental problem, but that may take a while ;D
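For reference, the "throw away all large boxes" hack could be a simple post-processing filter like the sketch below. The helper and the 0.5 area threshold are hypothetical, and it assumes matterport-style Mask R-CNN outputs (`rois` as `(y1, x1, y2, x2)` pixel boxes, `masks` as `[H, W, N]`).

```python
import numpy as np


def drop_large_detections(boxes, class_ids, scores, masks, image_shape,
                          max_area_fraction=0.5):
    """Discard detections whose bounding box covers more than
    `max_area_fraction` of the image (e.g. the box the coins sit in).

    boxes: [N, 4] array in (y1, x1, y2, x2) pixel coordinates
    masks: [H, W, N] boolean masks
    """
    h, w = image_shape[:2]
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    keep = areas < max_area_fraction * h * w
    return boxes[keep], class_ids[keep], scores[keep], masks[:, :, keep]
```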
@michaelisc what a trick =)))) I am looking forward to your upcoming fix ^^
@michaelisc Hello, I am here again. I do not understand the matching part in your paper. Can you tell me why you concatenated the differences (between the reference embedding and that of the scene) with the scene representation? Thanks in advance.