noahbenson closed this issue 5 years ago
Maybe you hit the 10 MB limit: http://www.tornadoweb.org/en/stable/releases/v4.5.0.html#tornado-websocket Try downgrading tornado to 4.4.
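(Alternatively, rather than downgrading, tornado 4.5+ made that limit configurable via the websocket_max_message_size application setting, which Jupyter passes through from tornado_settings. A sketch of what that could look like, assuming tornado ≥ 4.5 — untested here, so treat it as a starting point:)

```python
# ~/.jupyter/jupyter_notebook_config.py -- a sketch, assuming tornado >= 4.5,
# where the 10 MB default websocket message limit became configurable.
c.NotebookApp.tornado_settings = {
    'websocket_max_message_size': 100 * 1024 * 1024,  # raise the limit to 100 MB
}
```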
I'm currently running tornado 4.3, so that seems unlikely.
Could it be that you hit the 16-bit limit for vertex indices of WebGL 1? Is there any debug output in the browser console? (You might have to turn on verbose messages as well.)
There are definitely more than 65k vertices in the larger mesh, so that could be it--how do I turn on verbose messages?
The output in the javascript console for the smaller mesh was just:
THREE.WebGLRenderer 90
For the larger mesh, that line fails to appear (screenshot attached, because the text copied as a giant single-line mess):
Also, I was able to get ipyvolume's plot_trisurf() (which I believe uses pythreejs?) to render the larger mesh last night using the same faces/vertices/colors.
I'll look into this in more detail later, but for now: did you try using BufferGeometry instead of Geometry? The issue might be related to the large lists of vertices/faces that are being sent. BufferGeometry transfers its data as binary buffers instead of lists, so it should be more efficient either way. This is probably more of a workaround than a resolution, but it will help with debugging.
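To get a feel for why the binary transfer matters, here's a rough sketch (plain numpy + json, no pythreejs needed) comparing the size of a JSON-encoded vertex list with the equivalent raw binary buffer; the vertex count here is arbitrary, and the exact ratio depends on how many digits each float serializes to:

```python
import json
import numpy as np

# Hypothetical mesh of 100k vertices, purely to compare payload sizes.
vertices = np.random.rand(100_000, 3).astype(np.float32)

# Roughly what a list-based Geometry would serialize to:
json_payload = json.dumps(vertices.tolist())
# Roughly what a binary BufferAttribute would transfer:
binary_payload = vertices.tobytes()

assert len(binary_payload) == 100_000 * 3 * 4   # 4 bytes per float32
assert len(json_payload) > len(binary_payload)  # JSON text is several times larger
```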
Just tried it--if I just replace Geometry with BufferGeometry, then neither mesh displays (again, just a white cell appears). No errors appear in the JS console for either mesh when using BufferGeometry, though.
@noahbenson I was trying to reproduce this today, but I was not able to decompress the file. Would you mind trying another file sharing service?
Sure, sorry about that. Try this one?
404 file not found 😅 Try https://filemail.com ?
Wow, sorry! Here's the filemail link: https://fil.email/7tebGAKA?&showconfirmation=true
Testing this, it seems that the simple Array.map() struggles with such a large array? It's really confusing. However, BufferGeometry works. Here's the modified loading code:
import h5py
import numpy as np

brains = {}
for resolution in [32, 164]:
    data = {}
    with h5py.File('./brains/brain%dk.hdf5' % resolution) as f:
        data['vertices'] = np.array(f['vertices'], dtype=np.float32)
        data['faces'] = np.array(f['faces'], dtype=np.uint32).ravel()
        data['colors'] = np.array(f['colors'], dtype=np.float32)
    brains[resolution] = data
And the plotting code:
import numpy as np
import pythreejs as p3js

def plot_brain_3D(data):
    vertices = p3js.BufferAttribute(data['vertices'])
    faces = p3js.BufferAttribute(data['faces'])
    colors = p3js.BufferAttribute(data['colors'])
    geo = p3js.BufferGeometry(
        index=faces,
        attributes=dict(
            position=vertices,
            color=colors
        )
    )
    mesh = p3js.Mesh(
        geometry=geo,
        material=p3js.MeshLambertMaterial(vertexColors='VertexColors',
                                          side='DoubleSide'))
    cam = p3js.PerspectiveCamera(position=[1000, 0, 0], up=[0, 0, 1], fov=12)
    cam.lookAt(np.mean(data['vertices'], axis=0).tolist())
    scene = p3js.Scene(
        children=([mesh, cam, p3js.AmbientLight(color='white', intensity=0.8)] +
                  [p3js.DirectionalLight(color='white', position=pos, intensity=0.6)
                   for pos in [(100, 100, 100), (-100, 100, 100),
                               (100, -100, 100), (-100, -100, 100),
                               (100, 100, -100), (-100, 100, -100),
                               (100, -100, -100), (-100, -100, -100)]]),
        background='white')
    renderer = p3js.Renderer(camera=cam, scene=scene,
                             controls=[p3js.OrbitControls(controlling=cam)],
                             width=6 * 72, height=6 * 72)
    return renderer
I guess an actionable item in pythreejs would be to add some validators for the standard BufferGeometry attributes (item size of 3 for position/normal/color and 1 for the index, plus checking float vs. uint dtypes).
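Such a validator might look something like the following sketch. This is a hypothetical helper, not part of the pythreejs API; it just illustrates the checks in plain numpy, applied before the arrays are wrapped in BufferAttributes:

```python
import numpy as np

# Hypothetical validation helper -- not part of pythreejs; a sketch of the
# checks such a validator could perform on the standard attributes.
def validate_buffer_attributes(position, index=None, normal=None, color=None):
    for name, arr in [('position', position), ('normal', normal), ('color', color)]:
        if arr is None:
            continue
        arr = np.asarray(arr)
        if arr.ndim != 2 or arr.shape[1] != 3:
            raise ValueError('%s must have item size 3, got shape %r' % (name, arr.shape))
        if not np.issubdtype(arr.dtype, np.floating):
            raise ValueError('%s must have a float dtype, got %s' % (name, arr.dtype))
    if index is not None:
        index = np.asarray(index)
        if index.ndim != 1:
            raise ValueError('index must be flat (item size 1), got shape %r' % (index.shape,))
        if not np.issubdtype(index.dtype, np.unsignedinteger):
            raise ValueError('index must have an unsigned integer dtype, got %s' % index.dtype)
```

With checks like these, passing an un-raveled (n, 3) face array or a float index would raise a clear error instead of producing a silently blank render.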
Also, the 16-bit index limit is only an issue on outdated devices; three.js will enable the 32-bit index buffer extension if it is available.
Note: if need be, the ravel call for the faces can happen inside the plot function without much overhead, but the dtypes should preferably be specified on load.
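The "without much overhead" part holds because ravel() on a C-contiguous array returns a view that shares the original's memory rather than a copy, so flattening the faces inside the plot function costs essentially nothing. A quick numpy check:

```python
import numpy as np

# Stand-in face array (two triangles), just to demonstrate the view behavior.
faces = np.array([[0, 1, 2], [2, 3, 0]], dtype=np.uint32)

flat = faces.ravel()
# On a contiguous array, ravel() shares memory with the original, so no copy
# of the (potentially large) face list is made.
assert np.shares_memory(flat, faces)
assert flat.tolist() == [0, 1, 2, 2, 3, 0]
```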
Great--seems to work! Just a documentation issue then. Thanks!
If you have some suggested changes to the documentation feel free to open a PR :) I try to keep close to the three.js API, and rely on its docs instead. Any suggestions for how to bridge that gap more successfully are very welcome!
Yeah, that seems wise--I think for people like me who aren't familiar with javascript, it is a bit difficult to get a grasp on the organization of things (though I found that the examples were very helpful and got me started quickly!). I need to spend some more time learning the library but will try to document my efforts for a PR :-) Thanks for your help!
Closing this issue now as resolved. If you still have suggestions for the docs, feel free to let me know here or open a PR :)
I've been experimenting with the use of pythreejs to render 3D triangle-mesh models of brains in jupyter; everything works fine for low-resolution models (tested up to about 60k vertices), but for larger models, the display is empty when rendered. I've put the models I've been using for tests on filedropper in hdf5 format; I can reproduce the problem by doing the following:
Download the archive to ~/Desktop/, then in bash: tar zxf brains.tar.gz (this creates the data in the directory ~/Desktop/brains/). Next, make a function to plot either mesh:
I've also tested an intermediate-resolution mesh (~59k vertices) that displays fine.
I initially thought that this was a data-transfer issue, but I have set the following line in my ~/.jupyter/jupyter_notebook_config.py file: c.NotebookApp.iopub_data_rate_limit = 500000000. The meshes should only substantively differ in that the latter mesh is a higher-resolution version of the former--am I doing something wrong here, or are there fundamental limits to the complexity of what Jupyter can display using pythreejs?
(Apologies if this is user-error; I tried stackexchange first, but even with a bounty got no answers, and haven't been able to find any documentation about this.)