bodokaiser / piwise

Pixel-wise segmentation on VOC2012 dataset using pytorch.
BSD 3-Clause "New" or "Revised" License
385 stars 86 forks source link

Evaluation error #15

Closed leemathew1998 closed 6 years ago

leemathew1998 commented 6 years ago

There is an error during evaluation. I am not familiar with image segmentation; the problem is here:

    RuntimeError: inconsistent tensor size, expected tensor [256 x 256] and mask [256] to have the same number of elements, but got 65536 and 256 elements respectively at d:\projects\pytorch\torch\lib\th\generic/THTensorMath.c:138

I feel the problem is in this code:

    for label in range(1, len(self.cmap)):
        mask = gray_image[0] == label
        color_image[0][mask] = self.cmap[label][0]
        color_image[1][mask] = self.cmap[label][1]
        color_image[2][mask] = self.cmap[label][2]

bodokaiser commented 6 years ago

Do you perhaps not have your images as uint8 tensors?
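For reference, the shape mismatch in the error message can also arise independently of dtype: if `gray_image` is a 2-D `[H, W]` tensor rather than the expected `[1, H, W]`, then `gray_image[0]` selects a single row, producing a length-`W` mask that cannot index an `[H, W]` color channel. A minimal sketch of the shape arithmetic (shown with NumPy for brevity; the variable names are illustrative, not from the repository):

```python
import numpy as np

# Illustrative 2-D label map [H, W]; the Colorize transform assumes [1, H, W].
H, W = 256, 256
gray_image = np.zeros((H, W), dtype=np.uint8)

row_mask = gray_image[0] == 1   # gray_image[0] is a single row -> shape (W,)
full_mask = gray_image == 1     # shape (H, W), matches a color channel

print(row_mask.shape)           # (256,)
print(full_mask.shape)          # (256, 256)
# Indexing an [H, W] channel with the (W,)-element row_mask is what
# old Torch reports as "65536 and 256 elements respectively".
```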


leemathew1998 commented 6 years ago

In transform.py:

    def __call__(self, gray_image):
        size = gray_image.size()
        color_image = torch.ByteTensor(3, size[1], size[2]).fill_(0)

        for label in range(1, len(self.cmap)):
            mask = gray_image[0] == label

            color_image[0][mask] = self.cmap[label][0]
            color_image[1][mask] = self.cmap[label][1]
            color_image[2][mask] = self.cmap[label][2]

I found that size has only two dimensions, e.g. [256, 256], so I changed it to size[0] and size[1], but the problem still exists.
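If the label maps really are 2-D, one way to make the transform robust is to normalize the input to 2-D before building the mask, so the mask always has the same `[H, W]` shape as each color channel. A minimal sketch of this idea (NumPy used for brevity; `colorize` and its signature are illustrative, not the repository's API):

```python
import numpy as np

def colorize(gray_image, cmap):
    """Map integer class labels to an RGB image.

    Accepts a label map of shape [H, W] or [1, H, W] (hypothetical helper
    mirroring the Colorize transform discussed in this issue).
    """
    # Normalize to 2-D so the boolean mask always has shape [H, W].
    if gray_image.ndim == 3:
        gray_image = gray_image[0]

    h, w = gray_image.shape
    color_image = np.zeros((3, h, w), dtype=np.uint8)

    for label in range(1, len(cmap)):
        mask = gray_image == label        # shape [H, W], same as each channel
        color_image[0][mask] = cmap[label][0]
        color_image[1][mask] = cmap[label][1]
        color_image[2][mask] = cmap[label][2]
    return color_image
```

The same normalization works with a PyTorch tensor (`gray_image.dim()` instead of `.ndim`), since the root cause is the mask shape, not the array library.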

dilligencer-zrj commented 6 years ago

same question and expecting for solution

bodokaiser commented 6 years ago

> same question and expecting for solution

Please consider your attitude. This is a hobby project of mine which I open-sourced to give the community some inspiration. I neither claim that it is complete nor that it is part of any commercial product or marketing strategy, so you cannot expect me to do free consulting work for you.

Any help or support I provide is based on goodwill; please appreciate this. I previously pointed out a possible source of this error, which was ignored. Furthermore, I believe this issue depends on your particular use case but should still be easy to fix with basic PyTorch knowledge.

If you consider your own time too valuable to fix this problem yourself, we can discuss it as a consulting project. In that case, please send me an email.