Closed yxdragon closed 3 years ago
Now it is OK, but I do not know the details well; it is a rough fix. So I overrode it with a new class:
class VolumeVisual(scene.visuals.Volume):
    def set_data(self, vol, clim=None, copy=True):
        scene.visuals.Volume.set_data(self, vol.transpose(2, 1, 0), clim, copy)

    def _compute_bounds(self, axis, view):
        return 0, self._vol_shape[::-1][axis]
It solves my problem for now, but I am waiting for a better solution!
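The reversed-shape trick in that `_compute_bounds` override can be sanity-checked with plain numpy (no vispy needed); a minimal sketch, assuming the volume is stored (z, y, x) as in the demos below:

```python
import numpy as np

# A (z, y, x) volume: 100 deep, 200 tall, 300 wide, as in the demos below.
vol = np.zeros((100, 200, 300), dtype=np.float32)

# _compute_bounds(axis, ...) is asked for axis 0 -> x, 1 -> y, 2 -> z,
# so for zyx-shaped data the extents come from the reversed shape:
bounds = [(0, vol.shape[::-1][axis]) for axis in range(3)]
print(bounds)  # [(0, 300), (0, 200), (0, 100)] -> x, y, z extents
```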
It is hard to know if this is the right or best fix without seeing your actual code used to generate the original visualization. Can you provide a small example that doesn't use any external application like ImagePy?
from vispy import scene
import numpy as np
import scipy.ndimage as ndimg
from vispy.visuals.transforms import STTransform
from vispy.geometry import create_sphere
canvas = scene.SceneCanvas(keys='interactive', bgcolor='white', size=(800, 600), show=True)
view = canvas.central_widget.add_view()
imgs = np.zeros((100,200,300), dtype=np.float32)
imgs.ravel()[np.random.randint(0,100*200*300,1000)]=1
imgs = ndimg.gaussian_filter(imgs, 3)
scene.visuals.Volume(imgs, cmap='hot', parent=view.scene)
scene.visuals.Isosurface(imgs, level=0.001, shading='smooth', parent=view.scene)
#sphere1._mesh.mesh_data_changed()
cam = scene.TurntableCamera(elevation=30, azimuth=30)
view.camera = cam
if __name__ == '__main__':
    canvas.app.run()
PS: in fact ImagePy does nothing but manage the data, activate the parameters, and call vispy's visuals, so we can do many kinds of tests without coding. (I wrote ImagePy's 3D canvas with moderngl; now I want to replace it with vispy. I am doing some tests, and I will report anything unexpected and any features I want.)
Very cool. Thanks for filing all these issues. Even if I/we don't know how to fix them, it is nice that we know about them.
So my question is: which Visual is wrong here? Is it the volume or is it the isosurface (a MeshVisual underneath)? I have a feeling this is some function or visual assuming that arrays are (x, y, z) when they are probably more likely (y, x, z) (a.k.a. (rows, columns, z)). @almarklein @rougier Does any of this sound familiar? The isosurface uses an isosurface function that probably came from one of you:
Oops, nevermind git blame says @campagnola added it.
@alisterburt @sofroniewn Have you noticed this in napari at all? It seems the volume is going in the wrong direction? It looks like the volume is treating the array as (z, y, x) but the Mesh (or at least the isosurface) is (x, y, z).
But it seems that the volume is wrong in more than just its direction. It cannot be automatically centered; it is located by the box [(0, 0, 0), img.shape].
I don't understand. If I transpose the data as you have, but do it before passing it to the VolumeVisual then I think it looks correct.
from vispy import scene
import numpy as np
import scipy.ndimage as ndimg
from vispy.visuals.transforms import STTransform
from vispy.geometry import create_sphere
class VolumeVisual(scene.visuals.Volume):
    def set_data(self, vol, clim=None, copy=True):
        scene.visuals.Volume.set_data(self, vol.transpose(2, 1, 0), clim, copy)

    def _compute_bounds(self, axis, view):
        return 0, self._vol_shape[::-1][axis]
canvas = scene.SceneCanvas(keys='interactive', bgcolor='white', size=(800, 600), show=True)
view = canvas.central_widget.add_view()
imgs = np.zeros((100,200,300), dtype=np.float32)
imgs.ravel()[np.random.randint(0,100*200*300,1000)]=1
imgs = ndimg.gaussian_filter(imgs, 3)
# not center located
scene.visuals.Volume(imgs, cmap='hot', parent=view.scene)
# I modified the bounds' order, so it is centered (before uncommenting the next line, please comment out the previous one)
# VolumeVisual(imgs, cmap='hot', parent=view.scene)
view.camera = scene.TurntableCamera(elevation=0, azimuth=0)
if __name__ == '__main__':
    canvas.app.run()
Here is a demo. I set the camera's elevation and azimuth to 0; the camera then computes the view matrix from the object's bounds to center the scene.
from vispy import scene
import numpy as np
import scipy.ndimage as ndimg
from vispy.visuals.transforms import STTransform
from vispy.geometry import isosurface
canvas = scene.SceneCanvas(keys='interactive', bgcolor='white', size=(800, 600), show=True)
view = canvas.central_widget.add_view()
imgs = np.zeros((100,200,300), dtype=np.float32)
imgs.ravel()[np.random.randint(0,100*200*300,1000)]=1
imgs = ndimg.gaussian_filter(imgs, 3)
# could not be center located
# scene.visuals.Isosurface(imgs, level=0.001, shading='smooth', parent=view.scene)
# ----------------------------------------------------------
# center located
vts, fs = isosurface.isosurface(imgs, 0.001)
scene.visuals.Mesh(vts, fs, shading='smooth', parent=view.scene)
view.camera = scene.TurntableCamera(elevation=0, azimuth=0)
if __name__ == '__main__':
    canvas.app.run()
And I found that many other MeshVisual-derived visuals cannot be centered either.
I think you may need to swap the order of the columns in vts. As David mentioned, the order could be zyx, while Mesh assumes xyz. It would also be interesting to see how this compares to skimage.measure.marching_cubes.
@yxdragon Wow, this is good stuff. And a little scary how no one has complained about this in recent years. So a couple things I've stumbled upon while playing with your examples:
transpose
I assumed that transposing the data before passing it in would have made the bound computation "just work", but it doesn't. This says to me that the shader code in the VolumeVisual is drawing things differently than the Visual python code expects. Or at least that's one way of looking at it. @almarklein @rougier @alisterburt Is the way VisPy's VolumeVisual treats the array "standard" in the OpenGL/visualization world (that a 3D numpy array is treated like (z, y, x))? Put more simply, is this volume behavior expected?
The Isosurface doesn't set its mesh data until it is drawn; it can be forced with iso = scene.visuals.Isosurface(...); iso._prepare_draw(iso). The long term solution is probably to change the isosurface so it sets the data as soon as it can, or overrides _compute_bounds so that the data is set if bounds are requested.
So, my OpenGL experts (@almarklein @rougier and anyone else who wants to be placed in my internal grouping of contributors and their skills), what is the right fix here? Is the VolumeVisual wrong because it is inconsistent with the MeshVisual? Or is the VolumeVisual correct because it matches some general "standard" for how a volume is represented in an array and then drawn?
be interesting how this compares to skimage.measure.marching_cubes
It is the same as vispy's, but several times faster (implemented in Cython).
correct because it matches some general "standard"
Just my own opinion: there is no standard, much like left-handed versus right-handed coordinates, but it is important to follow the same rule within a repo.
@alisterburt @sofroniewn Have you noticed this in napari at all? It seems the volume is going in the wrong direction? It looks like the volume is treating the array as (z, y, x) but the Mesh (or at least the isosurface) is (x, y, z).
I haven't read every comment in this thread in detail, but I believe we did run into stuff like this in napari, and we had to reorder the mesh/volume to get the overlay to be correct. We also do some offsetting to get the (0, 0, 0) point to be at the "center" of a voxel instead of the top-left corner. These were conventions we wanted to adopt in napari to ensure everything was standardized in our representation (we settled on something more akin to z, y, x), and I think we are happy with the behavior.
If you see an opportunity to standardize/improve the default behavior in vispy, just let us know. I'm not sure right now if we're incurring a performance hit for the transposes/reorderings, but if so it might be nice for us to eliminate them too.
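The half-voxel offset mentioned above can be sketched in plain numpy; the vertex values here are made up for illustration, and unit-sized voxels are assumed:

```python
import numpy as np

# Hypothetical mesh vertices expressed in voxel-corner coordinates.
corner_vertices = np.array([[0.0, 0.0, 0.0],
                            [1.0, 2.0, 3.0]])

# Shift by half a voxel so integer coordinates land on voxel centers,
# i.e. (0, 0, 0) becomes the center of the first voxel, not its corner.
center_vertices = corner_vertices - 0.5
```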
These were conventions we wanted to adopt in napari to ensure everything was standardized in our representation (we settled on something more akin to z, y, x), and I think we are happy with the behavior. If you see an opportunity to standardize/improve the default behavior in vispy, just let us know.
I think that's what we're saying here. VisPy should be fixed for 0.7 to treat the array like the MeshVisual does ((x, y, z)). Long term we need to update the shader, I'm guessing (I haven't looked at how hard that will be). Short term we can probably move the transpose inside the VolumeVisual and fix the bounds computation like @yxdragon has done. This should probably be done after #1912 is merged.
TL;DR: I think what you're seeing is correct, but it would be nice to add support to let the image and volume visuals follow the xyz convention.
I don't think there's anything wrong with how the data is rendered. The order of dimensions can be a bit of a pain point though, because there are two conventions you can follow for images and volumes. Most images are stored with x being the fastest-changing dimension; in effect, an image of width 100 and height 150 will have shape (150, 100). In vispy we have extended this to volumes, so a volume's shape is assumed to be (depth, height, width). In other words, vispy's image and volume visuals follow the zyx convention.
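This shape convention is easy to verify with numpy alone; a small sketch:

```python
import numpy as np

# A 2D image of width 100 and height 150 is stored rows-first:
img = np.zeros((150, 100), dtype=np.float32)
height, width = img.shape
print(height, width)  # 150 100

# Extending the same convention to 3D gives (depth, height, width):
vol = np.zeros((50, 150, 100), dtype=np.float32)
depth, height, width = vol.shape
print(depth, height, width)  # 50 150 100
```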
If we assume/adopt this convention, then the isosurface function returns an array of vertices that are ordered zyx too. So when passing these to the mesh visual, we need to do vts = np.fliplr(vts) first. It may feel weird at first, but it follows from using the zyx convention for images/volumes.
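The np.fliplr step can be illustrated with made-up vertex values (not an actual isosurface output):

```python
import numpy as np

# Vertices in (z, y, x) order; values invented for illustration.
vts = np.array([[10.0, 20.0, 30.0],
                [11.0, 21.0, 31.0]])

# Reverse the columns so each row becomes (x, y, z) for the MeshVisual.
vts_xyz = np.fliplr(vts)
# The first row is now [30., 20., 10.]
```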
I have also worked with cases where the data was stored in xyz format. In that case we had a volume-visual-like object that had a property to allow either zyx or xyz shaping. My suggestion would be to add support for that. In the short run it would be good to document in the volume and image visuals that they assume zyx data.
@almarklein The volume visual gives a wrong bounding box: it renders in xyz but gives the bounds in zyx.
What do you mean it renders in xyz? It assumes zyx order in its rendering. The bounds(axis) method has the axis arg, which must be 0, 1, or 2. When we're considering zyx order, that will thus reflect z, y, and x.
If we'd add support for xyz order to the VolumeVisual, then bounds(0) would mean the x dimension.
This is easy to get confused about though ... let me know if it's not yet clear :)
@almarklein What do you mean it renders in xyz?
from vispy import scene
import numpy as np
import scipy.ndimage as ndimg
from vispy.visuals.transforms import STTransform
canvas = scene.SceneCanvas(keys='interactive', bgcolor='white', size=(800, 600), show=True)
view = canvas.central_widget.add_view()
imgs = np.zeros((100,200,300), dtype=np.float32)
imgs.ravel()[np.random.randint(0,100*200*300,1000)]=1
imgs = ndimg.gaussian_filter(imgs, 3)
scene.visuals.Volume(imgs, cmap='hot', parent=view.scene)
view.camera = scene.TurntableCamera(elevation=0, azimuth=0)
if __name__ == '__main__':
    canvas.app.run()
It is not centered, because of the bounds order.
@almarklein I understand what you are saying and that is a very good reason for why the VolumeVisual does the rendering the way it does.
What @yxdragon is trying to point out is that although the rendering treats the array as (z, y, x), the bounds computations are actually reversed ((x, y, z)), if I'm remembering correctly. This results in cameras (or anything else using bounds) miscalculating things like the center of the volume.
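A plain-Python sketch of why the reversed bounds throw off centering, assuming a camera that centers on the midpoint of each axis's bounds:

```python
shape_zyx = (100, 200, 300)  # the demo volume's (z, y, x) shape

# Correct: bounds(axis) should report x, y, z extents -> reversed shape.
good = [(0, shape_zyx[::-1][a]) for a in range(3)]
# Buggy: bounds taken directly in array (z, y, x) order.
bad = [(0, shape_zyx[a]) for a in range(3)]

center_good = [(lo + hi) / 2 for lo, hi in good]
center_bad = [(lo + hi) / 2 for lo, hi in bad]
print(center_good)  # [150.0, 100.0, 50.0]
print(center_bad)   # [50.0, 100.0, 150.0] -> camera aims at the wrong spot
```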
I've implemented a partial fix in #2070. It fixes the _compute_bounds call for the data/visual as-is and adds information to the docstring describing the orientation of the data. It does not fix the case where a user assumes that the VolumeVisual's data can be organized as (x, y, z), but it lays the groundwork for a rewrite of the VolumeVisual shader that might be able to handle both x/y/z and z/y/x, or an automatic transpose as a convenience for the user; whichever someone wants to implement (won't be me, PRs welcome).
Ah, I stand corrected. I was not aware that _compute_bounds was supposed to have axis=0 meaning x. Now it makes sense :)
I was not aware that _compute_bounds was supposed to have axis=0 meaning x.
I don't think anyone did. :wink:
Solved! I think this issue can be closed!
I guess I'm OK with closing this, but we still haven't implemented the VolumeVisual to work with (x, y, z) volume arrays. Isn't that what you originally wanted @yxdragon ? Or is it OK now to transpose the data before providing it to the VolumeVisual?
Let's follow-up in #2077
I drew a volume, an isosurface, and a box from the same data (the demo data stent).
I found that the volume does not match the box and the isosurface (and the volume cannot auto-center and auto-fit the scene).
So I transposed the image before volume rendering, img.transpose(2, 1, 0); then it matched but could not auto-center.
Then I modified the volume's _compute_bounds method.
Now it is shown in the center. Everything is OK.