napari: a fast, interactive, multi-dimensional image viewer for python
https://napari.org
BSD 3-Clause "New" or "Revised" License

Inconsistent default size of points when creating a points layer using PointsData #4705

Open — haesleinhuepf opened this issue 2 years ago

haesleinhuepf commented 2 years ago

šŸ› Bug

The default size of points, when creating a layer using PointsData, might be computed from the current size of the field of view (= image size * voxel size).

To Reproduce

import napari
import numpy as np

When the field of view is large, points are hardly visible:

viewer = napari.Viewer()

viewer.add_image(np.zeros((1000,1000)))
viewer.add_points([(20, 20)])

napari.utils.nbscreenshot(viewer)

[screenshot: the point at (20, 20) is barely visible on the 1000x1000 image]

When the field of view is smaller, points are far too big:

viewer = napari.Viewer()

viewer.add_image(np.zeros((10,10)))
viewer.add_points([(5, 5)])

napari.utils.nbscreenshot(viewer)

[screenshot: the point covers a large fraction of the 10x10 image]

Expected behavior

I'm not sure where those layers are built internally, but one could add a line like the following to make points have a size of 1% of the field of view:

points_layer.size = np.asarray(viewer.dims.range).max() * 0.01

This should make points appear at the same relative size in both cases demonstrated above, as discussed here.
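
For completeness, a minimal end-to-end sketch of the proposal applied by hand after the layer is added (the 1% factor is the suggestion above, not an existing napari default):

import napari
import numpy as np

viewer = napari.Viewer()
viewer.add_image(np.zeros((1000, 1000)))
points_layer = viewer.add_points([(20, 20)])

# 1% of the largest world-space extent: roughly 10 data units for the
# 1000x1000 image above, roughly 0.1 for a 10x10 image
points_layer.size = np.asarray(viewer.dims.range).max() * 0.01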

If you like the solution and can point me to the code where this might make sense in napari-core, I'm happy to send a PR.

Environment

napari: 0.4.15
Platform: Windows-10-10.0.19044-SP0
Python: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:50:36) [MSC v.1929 64 bit (AMD64)]
Qt: 5.15.3
PyQt5: 5.15.4
NumPy: 1.22.4
SciPy: 1.8.1
Dask: 2022.6.0
VisPy: 0.9.6

OpenGL:
- GL version: 4.6.0 - Build 30.0.101.1029
- MAX_TEXTURE_SIZE: 16384

Screens:
- screen 1: resolution 1920x1200, scale 1.0

Plugins:
- PlatyMatch: 0.0.3
- RedLionfish: 0.5
- animation: 0.0.2
- clEsperanto: 0.18.0
- console: 0.0.4
- devbio_napari: 0.5.6
- napari-3d-ortho-viewer: 0.0.1
- napari-accelerated-pixel-and-object-classification: 0.8.1
- napari-assistant: 0.3.8
- napari-brightness-contrast: 0.1.7
- napari-clusters-plotter: 0.5.1
- napari-crop: 0.1.5
- napari-curtain: 0.1.1
- napari-folder-browser: 0.1.3
- napari-layer-details-display: 0.1.3
- napari-mouse-controls: 0.1.3
- napari-plot-profile: 0.2.1
- napari-plugin-search: 0.1.3
- napari-process-points-and-surfaces: 0.3.1
- napari-segment-blobs-and-things-with-membranes: 0.2.22
- napari-simpleitk-image-processing: 0.3.1
- napari-stress2: 0.0.15
- napari-tabu: 0.1.5
- napari-time-slicer: 0.4.6
- napari-tools-menu: 0.1.15
- napari-workflow-inspector: 0.2.2
- napari-workflow-optimizer: 0.1.4
- napari_skimage_regionprops1: 0.5.2
- napari_skimage_regionprops2: 0.5.2
- ome-types: 0.3.0
- scikit-image: 0.4.15
- svg: 0.1.6
- the-segmentation-game: 0.1.0

Additional context

Czaki commented 2 years ago

I cannot agree that this is a bug; it could be a feature request. The current point size is specified in pixels, like all other sizes in napari, so when you zoom, the size of points increases proportionally.

If I remember correctly, this behavior is used when visualizing proteins, for example (@brisvag's work).
In my view, such use cases must not be broken by implementing this feature.

brisvag commented 2 years ago

Yeah, I disagree that this should be the default behaviour. It would also make scripts non-deterministic (which we have generally tried to avoid in the past), because you'd end up with different point sizes depending on the window size.

This falls back on an old discussion about Points: sometimes they indicate, well, points (and thus their size only has relative meaning), but sometimes (as in the example @Czaki cited of molecular structures, or in cases where they are used to indicate a feature in an image with a defined size) their size has absolute meaning.

I guess one way around it would be to accept (and set as default) a special value of None (or auto) which does what you suggest, but only if the layer is in a Viewer. Of course, this will break once we have multicanvas... Also, note that with add_points the layer gets constructed first, then added to the viewer, so there's also that complication to work around if you want this to work on instantiation.
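
To make the idea concrete, a rough sketch of the resolution logic a size=None default would imply (_auto_size and add_points_with_auto_size are hypothetical illustrations, not napari API; this sidesteps the instantiation-order problem by resolving the size at add time):

import numpy as np

def _auto_size(viewer, fraction=0.01):
    # hypothetical helper: derive a point size from the viewer's current
    # world-space extent; this only works once a Viewer exists
    return np.asarray(viewer.dims.range).max() * fraction

def add_points_with_auto_size(viewer, data, size=None):
    # hypothetical wrapper: treat size=None as "auto" and resolve it at
    # add time, since a bare Points layer does not know its viewer yet
    if size is None:
        size = _auto_size(viewer)
    return viewer.add_points(data, size=size)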

brisvag commented 2 years ago

A more reasonable approach might be to set the default field of view to something bigger.

haesleinhuepf commented 2 years ago

Can you guys make a concrete proposal for how to make points visible, e.g. when a user does spot detection on a 2048x2048 image? At the moment the points cannot be seen because they are too small. From a user's perspective this is clearly a bug.

Czaki commented 2 years ago

Can you guys make a concrete proposal for how to make points visible, e.g. when a user does spot detection on a 2048x2048 image? At the moment the points cannot be seen because they are too small. From a user's perspective this is clearly a bug.

This is totally different from what you mention in this issue. Growing a point if it is too small is a good idea. @brisvag, is that doable in vispy?

haesleinhuepf commented 2 years ago

this is totally different

Yeah sorry, I used 1000x1000 in my minimal-reproducible-example code snippet posted above, not 2048x2048. Thanks for pointing this out.

brisvag commented 2 years ago

The real issue that makes them invisible is that antialiasing is enabled, and it uses transparency, which is broken in vispy. There are two ways to get around this:

  1. disable antialiasing (Points._antialias = 0); downside: points will be jagged when bigger.
  2. set a lower canvas size limit (#3734). Downsides: get_value does not know about it, so it will not see the point if you click outside its "real" size; and it won't work well with a ton of points, because they will overlap each other more since they're bigger.

I think it might also be reasonable to set a low, non-zero default for 2., just to make sure points don't simply disappear!
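
As a concrete example of option 1, a sketch against the private attribute named above (per the update at the end of this thread, later versions expose a public antialiasing property instead):

import napari
import numpy as np

viewer = napari.Viewer()
viewer.add_image(np.zeros((1000, 1000)))
points_layer = viewer.add_points([(20, 20)])

# option 1: disable antialiasing so the tiny point is not faded into
# invisibility; trade-off: jagged edges once points get bigger
points_layer._antialias = 0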

Czaki commented 2 years ago

Yeah sorry, I used 1000x1000 in my minimal-reproducible-example code snippet posted above, not 2048x2048. Thanks for pointing this out.

No. You suggested having the point size be independent of the zoom factor, not making small points visible.

haesleinhuepf commented 2 years ago

You suggested having the point size be independent of the zoom factor.

Maybe I wasn't clear. I'm not talking about independence from the zoom level.

I proposed "The default size of points, when creating a layer using PointsData might be computed from the current size of the field of view (= image size * voxel size)"

The zoom level is not part of the proposed equation:

points_layer.size = np.asarray(viewer.dims.range).max() * 0.01

brisvag commented 2 years ago

@haesleinhuepf I'm a bit confused, your proposal and your examples seem to say different things to me.

  1. The field of view definitely includes the zoom level, since higher zoom = smaller field of view. The size of the field of view is determined by canvas size (and shape!) and camera zoom.
  2. image size * voxel size might be the screen size of a specific image layer, or the world-space size (depending on what you mean by voxel size).
  3. viewer.dims.range.max() is another thing again: the extent of the biggest dimension in world space accounting for all layers.

Which one do you mean? I would say that I'm against 1., since it depends on fleeting things like window size. I'm also against 2., because it would require choosing an image, and would incur all the complications I mentioned in my previous comment. As for 3., that's a bit more straightforward, but it is also not as trivial as you make it seem (what about n-dimensional data? Tiny points because you have a long time-series?). [EDIT: I guess you can only use the displayed dimensions... but then again, how do we solve the issues I raised before?]

In general, I think we would need so much compromising and special-casing that this is not really reasonable to do as a default.

So I would instead aim to solve the actual problem: if tiny points being invisible is the core problem, I suggest you use _antialias = 0 until we have a better solution for transparency. If instead you would prefer the point size being limited, I would use experimental_canvas_size_limits.

haesleinhuepf commented 2 years ago

  1. The field of view definitely includes the zoom level,

Sorry for using the wrong term here. I mean the area that was imaged (image size * voxel size), hence the entire shown scene, independent of zoom level. Again, I am NOT asking for anything zoom-related. Option 3 is what I have now implemented for all operations that create points and are listed in the tools menu. Thus, the problem is solved on my side. Feel free to close this issue.

So I would instead aim to solve the actual problem

The actual problem is that the points are too small in case of large images and too big in case of small images shown in the viewer.

brisvag commented 2 years ago

The actual problem is that the points are too small in case of large images and too big in case of small images shown in the viewer.

Then this sounds like exactly the use case for which experimental_canvas_size_limits was created :) Is there a reason why you don't think this is the right approach in this case? As far as I can tell, absolute point sizes have no meaning in your use case, so this shouldn't be an issue. My impression is that 3. is just too finicky :P

Will close for now, but anyone feel free to reopen if you think there is a way to do this better.

haesleinhuepf commented 2 years ago

Just for the record: experimental_canvas_size_limits and _antialias also don't offer a nice default size for points:

import napari
import numpy as np

viewer = napari.Viewer()

viewer.add_image(np.zeros((1000,1000)))
points_layer = viewer.add_points([(20, 20)], experimental_canvas_size_limits=(1000, 1000))
points_layer._antialias = 0

napari.utils.nbscreenshot(viewer)

[screenshot: the point is still rendered too small to be clearly visible]

My intention was to make it easy for developers to show points that are simply visible by default. If the user wanted to visualize points with specific sizes, they still could.

psobolewskiPhD commented 2 years ago

I think it may be the name experimental_canvas_size_limits that is adding confusion. Looking at the original PR https://github.com/napari/napari/pull/3734, it's a tuple that defines low and high limits for the canvas size of the points.

The default min is 0 and the max is 10K, while you are setting the min and max both to 1K. Then your GL driver is limiting the actual render size (see https://github.com/vispy/vispy/issues/2078). Perhaps try points_layer = viewer.add_points([(20, 20)], experimental_canvas_size_limits=(10, 1000)) and points_layer._antialias = 0.
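
Putting that correction together with the earlier snippet, a sketch with a non-degenerate limit range (the lower bound of 10 is the suggestion above, not a napari default):

import napari
import numpy as np

viewer = napari.Viewer()
viewer.add_image(np.zeros((1000, 1000)))

# min 10 / max 1000 canvas pixels, instead of pinning both bounds to 1000
points_layer = viewer.add_points(
    [(20, 20)], experimental_canvas_size_limits=(10, 1000)
)
points_layer._antialias = 0  # avoid the antialias/transparency fade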

Czaki commented 2 years ago

I still do not see how this should work:

  1. Change the size of points based on zoom? Then points added at different zoom levels will have different sizes.
  2. Compute the point size only until the first point is added, then keep it for consistency between points (until the user modifies it from the interface)?

And what should happen with points loaded from disk, from a format that does not contain size information? Should they still be added with size 10, or should the size also be calculated dynamically?

@haesleinhuepf, looking at the documentation:

https://github.com/napari/napari/blob/f810a6e24992af5c623b7bb79c3964dbb7670d2f/napari/layers/points/points.py#L145-L146

haesleinhuepf commented 2 years ago

Sorry guys, those are all workarounds for a problem that should be resolved at the source: when points are added without a size specification, napari core must make sure the points are visible.

brisvag commented 2 years ago

My intention was to make it easy for developers to show points that are simply visible by default. If the user wanted to visualize points with specific sizes, they still could.

Yup, that's indeed what experimental_canvas_size_limits was added for (see #429 for the original issue).

The default min is 0 and the max is 10K, while you are setting the min and max both to 1K. Then your GL driver is limiting the actual render size (see vispy/vispy#2078). Perhaps try points_layer = viewer.add_points([(20, 20)], experimental_canvas_size_limits=(10, 1000)) and points_layer._antialias = 0.

Exactly right!

Sorry guys, those are all workarounds for a problem that should be resolved at the source: when points are added without a size specification, napari core must make sure the points are visible.

Yes, I agree that Points should be visible to avoid confusion. But experimental_canvas_size_limits is that core solution, not a workaround. Maybe we should default it to a lower limit greater than zero, but I don't think that's a lesser solution than setting the size to a value computed on the fly.

Otherwise, we bite the bullet and accept that layer init defaults depend on the viewer context (other layers loaded and/or canvas state) and on whether the layer is instantiated from the viewer or separately (viewer.add_layer(Points()) will behave differently from add_points()). If past discussions are anything to go by, I expect many people to be against this (as I am) due to poor reproducibility and unnecessary "magic".

psobolewskiPhD commented 2 years ago

Why is the experimental_canvas_size_limits min default 0? I think making it, say, 10 or maybe 15 would go a long way towards solving this (along with _antialias < 0.3, on my screen at least).

brisvag commented 2 years ago

Because it was added as an experimental feature, and setting it to 0 would change nothing about the behaviour at the time. I'm open to changing the default, though I think in that case we should un-experimentalize it and document it, in case someone prefers their points to have a real absolute size.

Czaki commented 2 years ago

Maybe we should not change it by default, but change it for points layers created by the button? And maybe for the builtin points reader?

brisvag commented 2 years ago

I disagree, because of the points I made above about reproducibility and magic. I'd also like to hear what @jni has to say, since he was interested in the experimental_canvas_size_limits discussion.

haesleinhuepf commented 2 years ago

Otherwise, we bite the bullet and accept that layer init defaults depend on the viewer context

Now I understand the problem you have on the viewer side! Thanks for the explanation.

Wouldn't it make sense to build this logic into the place where the viewer adds the layer, and only set the computed default if the user didn't specify one?

In my use case I'm working with functions that return PointsData (and they have to return that type) that are automagically added to the viewer... Those functions don't call viewer.add_something...

Czaki commented 2 years ago

But magicgui allows returning a size for points. On your side you could change the return type to LayerData and return (data, {"size": 100}, "points"). This may allow you to solve this problem on your side and give us more time to find a proper solution.
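
A minimal sketch of that pattern, assuming the napari.types.LayerDataTuple annotation is the one meant by "LayerData" above (the detection body is a placeholder):

import numpy as np
from magicgui import magicgui
from napari.types import ImageData, LayerDataTuple

@magicgui
def detect_spots(image: ImageData) -> LayerDataTuple:
    points = np.array([[20.0, 20.0]])  # placeholder detection result
    # data plus layer kwargs plus layer type: the size is now explicit
    return (points, {"size": 100}, "points")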

haesleinhuepf commented 2 years ago

On your side you could change the return type to LayerData ...

Not really. I'm talking about collections of (many) functions that return basic numpy types. The goal is to have the same functions callable from napari and from jupyter notebooks, using the same API. Side effect: code generation.

To make a long story short: a function called detect_spots must return a list of points. Any other return type doesn't make sense; at least from my limited software engineer's perspective...
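
For context, a sketch of the constraint being described (detect_spots is a hypothetical stand-in; the point is that the signature carries coordinates only, with no display size):

import numpy as np
from napari.types import ImageData, PointsData

def detect_spots(image: ImageData) -> PointsData:
    # placeholder: a real implementation would run actual spot detection;
    # the return value is just an (N, D) coordinate array, nothing more
    return np.array([[20.0, 20.0], [40.0, 40.0]])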

haesleinhuepf commented 2 years ago

... and it would be cool if napari-core would make sure that the results of functions returning PointsData are visible to the user; ideally at a pleasing size, neither half a pixel nor half the screen.

haesleinhuepf commented 2 years ago

This may allow you to solve this problem on your side and give us more time to find a proper solution.

I have solved that problem on my side (see link in my first post). I'm advocating here for finding a proper solution in napari core.

Czaki commented 2 years ago

It is a much wider problem. Layers created from numpy arrays do not respect the transforms of existing layers (so functions that return PointsData, LabelsData, etc. are problematic when used with such data).

So there is a need for a general approach for returning data that needs to calculate some of its properties based on the viewer state.

psobolewskiPhD commented 6 months ago

For anyone following along at home: some of the API has changed, so here's a brief update. You can also hit this issue if you have an image layer with a small scale, e.g. [0.15, 0.03529079, 0.03529079], so this is a real issue for data with very small voxels.

If you have such an image layer and you make a Points layer, clicking to place points will appear to do nothing, but in fact layer.data will show that the points are there. Currently, the default canvas_size_limits is (2, 1000), so you'd think you'd always see a point, but alas, by default antialiasing makes it really hard to see if you have a HiDPI screen. If you set viewer.layers[-1].antialiasing = 0, your point will be revealed! On my screen I need to set canvas_size_limits = (4, 1000) to reliably see points with antialiasing = 1. And they are still tiny!
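
A sketch reproducing the small-voxel case with the renamed attributes mentioned above (canvas_size_limits and antialiasing; the image shape is arbitrary):

import napari
import numpy as np

viewer = napari.Viewer()
# image layer with very small voxels, as in the scale example above
viewer.add_image(
    np.zeros((100, 1000, 1000)),
    scale=(0.15, 0.03529079, 0.03529079),
)
points_layer = viewer.add_points(ndim=3, scale=(0.15, 0.03529079, 0.03529079))

# points placed by clicking land in points_layer.data but can be invisible;
# per the comment above, these settings make them show up reliably
points_layer.antialiasing = 0
points_layer.canvas_size_limits = (4, 1000)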