arjunrajlab opened 3 years ago
This has to do with the histogram and min/max values that are used.
Suppose you have two frames that would get max-merged. Frame 1 has pixel brightness values ranging from 700 to 7000. Frame 2 has brightness values from 800 to 8000. Without max-merge, we show a preview image of frame 1 where 700 maps to 0 and 7000 to 255, possibly further modified by the range slider. When we switch to frame 2, this map changes to 800->0, 8000->255. The max-merge is done on raw brightness values, so the merged result will have a maximum brightness of 8000 and a minimum brightness of at least 800, possibly higher.
Currently we use the frame's histogram for scaling, not the max-merge histogram. So on frame 1, the max-merge is still scaled 700->0, 7000->255, and on frame 2 it is scaled 800->0, 8000->255. Since the scaling is different, the server returns a different image for each frame (which causes the image to vanish until the new one is fetched).
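The mismatch can be sketched in a few lines of NumPy. The frame data, the `scale_to_uint8` helper, and the specific pixel values below are all hypothetical, chosen only to match the ranges described above; the point is that the same max-merged raw data, scaled with two different per-frame windows, yields two different 8-bit images:

```python
import numpy as np

def scale_to_uint8(img, lo, hi):
    """Linearly map raw intensities so lo -> 0 and hi -> 255 (hypothetical helper)."""
    img = np.clip(img.astype(np.float64), lo, hi)
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

# Two hypothetical frames with the brightness ranges from the example.
frame1 = np.array([[700, 3500, 7000]])
frame2 = np.array([[800, 4000, 8000]])

# Max-merge operates on raw brightness values.
merged = np.maximum(frame1, frame2)

# Scaling with frame 1's window vs. frame 2's window gives different
# rendered images of the *same* merged data.
view_on_frame1 = scale_to_uint8(merged, 700, 7000)
view_on_frame2 = scale_to_uint8(merged, 800, 8000)
print(np.array_equal(view_on_frame1, view_on_frame2))  # → False
```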
Probably the correct thing to do is compute the actual histogram for the max-merge and use that. This adds internal complexity, but is probably what is desired.
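A minimal sketch of that idea, assuming a hypothetical `merge_window` helper: derive the contrast window from the merged data itself (here via percentiles of the max-projection) instead of from any single frame's histogram, so the window is stable regardless of which frame the user is on:

```python
import numpy as np

def merge_window(frames, percentiles=(0.0, 100.0)):
    """Max-merge the raw frames, then compute the contrast window
    from the merged data itself (hypothetical helper)."""
    merged = np.maximum.reduce([f.astype(np.float64) for f in frames])
    lo, hi = np.percentile(merged, percentiles)
    return merged, lo, hi

frame1 = np.array([[700, 3500, 7000]])
frame2 = np.array([[800, 4000, 8000]])
merged, lo, hi = merge_window([frame1, frame2])
print(lo, hi)  # → 800.0 8000.0
```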
This would still leave a condition where, if you max-merge on one axis and scrub on another, the results and histogram would need to be recomputed (e.g., with a z max-merge, scrubbing in xy would still have to fetch a new image).
Yes, I think we probably want to show the max-merge histogram and use that for contrast. Scrubbing on XY would indeed have to load a different image. I would also add that for Z, I would recommend pre-computing the max-merge anyway, given that it's such a common operation.
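The pre-computation suggestion could look something like the following sketch. All names here (`ZMaxMergeCache`, the loader, the cache key) are hypothetical; the idea is just to compute the z max-projection once per (xy, time, channel) position and reuse it while scrubbing, rather than recomputing on every view change:

```python
import numpy as np

class ZMaxMergeCache:
    """Hypothetical sketch: cache the z max-projection per (xy, t, channel)
    key so scrubbing reuses one precomputed image."""
    def __init__(self):
        self._cache = {}

    def get(self, key, z_stack_loader):
        # key identifies everything except z, e.g. (xy, t, channel)
        if key not in self._cache:
            stack = z_stack_loader(key)           # shape (nz, h, w), raw values
            self._cache[key] = stack.max(axis=0)  # z max-projection
        return self._cache[key]

loads = []
def loader(key):
    loads.append(key)
    return np.arange(24).reshape(4, 2, 3)  # fake 4-plane z stack

cache = ZMaxMergeCache()
a = cache.get(("xy0", 0, "ch1"), loader)
b = cache.get(("xy0", 0, "ch1"), loader)
print(len(loads))  # → 1  (stack loaded once, projection reused)
```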
If I set max-merge on one channel, that channel disappears when I scroll in z (while another channel is not max-merged). It does eventually appear, but it seems to recompute the max-merge every time.