chrisjsewell opened this issue 7 years ago
Hi Chris,
thanks for sharing that with me; I always enjoy seeing how other people use ipyvolume, and getting feedback is really useful for future development.
I'm assuming you installed it from GitHub using `pip install -e .`, right?
If you run `npm install` from the `js` dir, it will update `ipyvolume/static/index.js`; then you need to refresh the browser/notebook. There are better ways to work in development mode, using `webpack --watch`; I need to document that some day. The `index.js` file contains the full source code, with all dependencies bundled, so changing the original source without recreating this file will indeed not do much.
Try hacking it a bit and see if you can make a new `.view_angle` trait. Copy what you see for `eye_separation` and see if you can follow the logic. Feel free to ask questions on Gitter, and before you know it you'll have your first PR for ipyvolume :).
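The trait-copying exercise can be sketched roughly like this. This is only a minimal illustration of the traitlets pattern ipyvolume's widgets use: `view_angle` is a hypothetical trait name, and the real `Figure` is an ipywidgets `DOMWidget` with a matching JavaScript view, so the class below is just a stand-in.

```python
import traitlets


class Figure(traitlets.HasTraits):  # stand-in; the real Figure is a DOMWidget
    # existing trait, following the pattern used for eye_separation
    eye_separation = traitlets.Float(6.4).tag(sync=True)
    # hypothetical new trait: perspective field of view, synced to the JS side,
    # where figure.js would read it instead of a hard-wired VIEW_ANGLE constant
    view_angle = traitlets.Float(45.0).tag(sync=True)


fig = Figure()
fig.view_angle = 5.0  # in the real widget this would re-render with a flatter perspective
```

The `tag(sync=True)` part is what makes a trait visible on the JavaScript side in a real widget.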
Lines are now supported in master, but not yet well documented. The `Scatter` class has a `connected` property to draw lines between the points; set it to `True` and you should see lines.
Isosurfaces are really something I'd like to have, and three.js has support for them, so it shouldn't be too complex to add. I like the idea of the plane, and I'm happy to accept a PR for that 👍 😉 . No, seriously, I don't think I can make that myself soon (other things are higher on the priority list), but I'd be happy to help.
Cheers!
Yeh I'll see what I can do.
For now, I've worked out how to change it from source :) (yes, you just need to refresh the web page after changes). Changing `VIEW_ANGLE` to 5 and `this.camera.position.z` to 20 improves the situation.
I installed it from pip, in a conda environment, and just to note for posterity:
One last, possibly minor, bug I noted: for certain values of `data_min`, I get rendering artifacts in the figure. You can see it in the new version of https://chrisjsewell.github.io/ipyvolume_si_ech3. Literally, if I change the initial value from 1.6 to 1.61, it disappears.
FYI: there is also a three.js camera called "CombinedCamera" or something similar, which allows easy switching between an orthographic and a perspective camera (while retaining the view direction/size).
I didn't know that, thanks for sharing that!
Hey @maartenbreddels , I will get round to making this orthographic camera eventually! But, for now, I thought I'd share that I've included an output from your project in my ipypublish package: https://github.com/chrisjsewell/ipypublish#embedding-interactive-html (hope this is ok).
The idea is to have a notebook cell with a static image of the widget in the output, and a path to the embed html in the metadata so that a) if you export to latex/pdf, you get the static image or b) if you export to html/reveal slides, you get the html. Works well and is awesome to have presentations with the ipyvolume renderings in :) https://chrisjsewell.github.io/ipypublish/Example.slides.html#/9
(odd, my last reply got lost, 2nd try)
Awesome work again, and interesting work on ipypublish! I guess you know about pylab.savefig(); what I thought is that it should be possible to make the 'screenshots' in really high resolution (say 4 or 10x the screen resolution), for publication quality.
For the camera part, feel free to make a PR; it doesn't need to be merge-ready. I can already give you some feedback and we can iterate on it.
cheers,
Maarten
Yeh sounds good :) I think I saw someone else mention it, but if you could add an option to savefig to output a PIL.Image or similar, that would be helpful, ta.
Really nice idea, I had to implement that directly! :) (see https://github.com/maartenbreddels/ipyvolume/commit/13ebff66e25628e12d32c4ad31154c408fc46122) Now you can do:
p3.figure()
mesh = p3.examples.klein_bottle(uv=True)
Next cell:
mesh.texture = p3.screenshot()
and repeat the last cell many times 😉
You can now specify the width and height of the screenshot or figure as well (e74164a374d7e603ed203db96359a353925f42c7):
mesh.texture = p3.screenshot(width=100, height=100) # low res texture
#...
p3.savefig('fig1.png', width=1024*4, height=1024*4) # high res 4k plot
Perfect, thanks :) and I'm a big fan of this idea as well: jovyan/pythreejs#109
@maartenbreddels @chrisjsewell - this is a really awesome extension. I started using it for brain meshes and immediately ran into the projection issue. I tried a quick hack to uncomment the `to_orthographic` line in figure.js, but that by itself didn't work. I'm coming at this mostly from a user standpoint, but if there were some pointers as to how it could be enabled, I'd be happy to go down that trail.
So far I have just put in the fov hook and am using that; if you set `fig.camera_fov = 1`, then it's basically like orthographic.
Great, that looks much better.
Is there a way to: 1) set the axis limits manually, and 2) hide the axes/box and use a white background?
Figured out the answer to 1:
fig.xlim = (-100, 100)
fig.ylim = (-100, 100)
fig.zlim = (-100, 100)
For 2, the easiest way is:
fig.style = {'axes': {'color': 'black',
'label': {'color': 'black'},
'ticklabel': {'color': 'black'},
'visible': False},
'background-color': 'white',
'box': {'visible': False}}
thank you - that's super useful.
Hi Satrajit,
thanks for the positive feedback! For the bounding box, see also http://ipyvolume.readthedocs.io/en/latest/api.html#ipyvolume.pylab.xyzlim, although I'm surprised your data isn't contained in the bounding box, as it should be automatically; maybe a bug? Styling, although supported, is a bit rough; I'll need to work on that and document it. Thx @chrisjsewell for answering!
cheers,
Maarten
Hey Chris,
Did you find any solution for an orthographic camera, specifically for setting the camera's axis orientation? I am able to set the camera to different positions within the simulated environment; however, it is always oriented so that it looks toward the center of the simulated environment (cube). I would love to change the camera orientation so that it is aligned with the direction of a moving particle that I simulate. In other words, I would attach the camera to one of many simulated particles (position) and get that particle's view (orientation) of what's going on.
Thanks for any ideas! Florian
Hey Maarten, great package! I've been playing around with it to visualize electron densities from quantum computations: https://chrisjsewell.github.io/ipyvolume_si_ech3
However, the large hard-wired perspective angle is a bit of a pain when viewing trends along certain directions. So, ideally, it would be great to have the option of an OrthographicCamera, or at least to have the PerspectiveCamera's VIEW_ANGLE (in figure.js) linked to a Figure trait.
On a related note: until the last few weeks I had no experience with JS. Apparently it is an interpreted language, but if I try changing VIEW_ANGLE in the source code (figure.js), as I would for Python code, nothing changes. Are there extra steps I need to take? Do I need to compile something?
FYI, for my work, other nice-to-haves would be: