zhangliukun opened this issue 5 years ago (Open)
This is likely a processing error somewhere in im.save as called here:
https://github.com/facebookresearch/visdom/blob/e34c6d5abb23ed1e3333515af714c70b570e86c1/py/visdom/__init__.py#L956
Or from the actual initialization of the Image object here:
https://github.com/facebookresearch/visdom/blob/e34c6d5abb23ed1e3333515af714c70b570e86c1/py/visdom/__init__.py#L954
If you take what's output in grid here: https://github.com/facebookresearch/visdom/blob/e34c6d5abb23ed1e3333515af714c70b570e86c1/py/visdom/__init__.py#L1025 and manually visualize those outputs, do you still see the same errors?
If that's fine, can you check the value that's being passed to Image here:
https://github.com/facebookresearch/visdom/blob/e34c6d5abb23ed1e3333515af714c70b570e86c1/py/visdom/__init__.py#L953
If the lines above didn't convert all of the values to uint8s in the 0-255 range, PIL will fail to produce the correct image.
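For the manual check suggested above, a rough helper along these lines could dump the intermediate array to disk for inspection (a hypothetical sketch; debug_save is not part of visdom, and the channel-first assumption is mine):

```python
import numpy as np
from PIL import Image

def debug_save(arr, path="debug_grid.png"):
    """Dump an intermediate image array to disk so it can be checked by eye."""
    arr = np.asarray(arr)
    # Mirror visdom's own conversion: scale apparent [0, 1] floats, then cast to uint8.
    if arr.dtype != np.uint8:
        if arr.max() <= 1:
            arr = arr * 255.0
        arr = np.uint8(arr)
    # Assumption: the array is channel-first (CxHxW) at this point; PIL wants HxWxC.
    if arr.ndim == 3 and arr.shape[0] in (1, 3):
        arr = arr.transpose(1, 2, 0)
    Image.fromarray(arr.squeeze()).save(path)
```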
@zhangliukun are you using something like this to un-normalize? Did you find a solution for your problem?
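For reference, a typical un-normalize step reverses a torchvision-style Normalize; the sketch below is hypothetical, using the mean/std from the bug description rather than any particular snippet posted here.

```python
import torch

# Hypothetical un-normalization: reverses Normalize(mean, std) channel-wise.
# Floating-point rounding in this step is what can push values to e.g. 1.00000012.
MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def unnormalize(img):
    """img: normalized CxHxW float tensor; returns values roughly in [0, 1]."""
    return img * STD + MEAN
```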
I encountered the same problem before. It is quite strange: for the same image, it fails to display when I run the code normally or step over the vis.image call in debug mode. But when I stepped into vis.image (into __init__.py) and executed the inner code line by line, it worked fine and the image was shown normally.
Oh, I think I have solved it. As @shubhamagarwal92 mentioned above, the problem is the un-normalize step.
Sometimes, because of floating-point precision, the un-normalized tensor can end up slightly larger than 1, e.g. 1.00000012, so the code below will not be executed:
if img.max() <= 1:
    img = img * 255.
The solution is simple: clamp the maximum value of the tensor to 1 (using torch.clamp), or convert the image to the 0-255 range ourselves.
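A minimal sketch of both options, assuming img is the un-normalized CxHxW float tensor:

```python
import torch

def clamp_for_visdom(img):
    # Option 1: clamp back into [0, 1] so visdom's `img.max() <= 1` branch
    # still fires and does the scaling to 0-255 itself.
    return torch.clamp(img, 0.0, 1.0)

def to_uint8_for_visdom(img):
    # Option 2: do the 0-255 uint8 conversion ourselves before calling vis.image.
    return (torch.clamp(img, 0.0, 1.0) * 255).to(torch.uint8)
```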
Still, I recommend printing a warning when img.max() > 1 but the value is close to 1. That would be more user-friendly. @JackUrb
@CM-BF you're right, this should be handled a little better. I think the most user-friendly approach would actually be to print a warning and then fix the problem and render the image anyway.
The issue is here at the moment, as the check is hard-capped at 1: https://github.com/facebookresearch/visdom/blob/e34c6d5abb23ed1e3333515af714c70b570e86c1/py/visdom/__init__.py#L949
It would be appropriate to have an elif for some value marginally larger than 1 that would print the warning and then do the conversion anyway (since the output values should round down to 255 during the uint8 conversion).
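Roughly, the check could become something like this (a sketch of the proposed behavior, not visdom's current code; the 1e-3 tolerance is an arbitrary assumption):

```python
import warnings
import numpy as np

def scale_to_uint8(img, tol=1e-3):
    if img.max() <= 1:
        img = img * 255.
    elif img.max() <= 1 + tol:
        # Values barely exceed 1, most likely from floating-point error:
        # warn, then treat the image as [0, 1] floats anyway.
        warnings.warn(
            "Image max is slightly above 1 (%f); scaling to 0-255 anyway." % img.max()
        )
        img = np.clip(img, 0, 1) * 255.
    return np.uint8(img)
```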
I'd be happy to accept a PR to this effect, but if not this issue may stay on the backlog a little longer.
Also hit this. I think it would be better to rely on the input's dtype for deciding on the range, like matplotlib does.
Agreed - I'd be happy to accept a PR that makes this change! Code pointers in the discussion above.
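A minimal sketch of that dtype-based convention (an assumption mirroring matplotlib's imshow behavior, not what visdom currently does):

```python
import numpy as np

def to_display_range(img):
    img = np.asarray(img)
    if img.dtype == np.uint8:
        return img  # integer images are taken as already being in 0-255
    # Float images are interpreted as [0, 1], clipped, then scaled to 0-255.
    img = np.clip(img.astype(np.float64), 0.0, 1.0)
    return (img * 255.0).round().astype(np.uint8)
```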
Bug Description
I use mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] to normalize the images. Then I apply some translations to these images. Finally, I restore the images to their original range and use visdom.images() to show them. The problem is that sometimes these images are all black; when I print the image matrix, all values are within [0, 1], which is correct. Strangely, sometimes these images draw normally.
Screenshots
This is the case: