ComputationalRadiationPhysics / isaac

In Situ Animation of Accelerated Computations :microscope:
http://ComputationalRadiationPhysics.github.io/isaac/
GNU Lesser General Public License v3.0

Bullet Time: Multiple Cameras #2

Open ax3l opened 8 years ago

ax3l commented 8 years ago

Feature for the future

A paper from 2013 that we all love is: http://arxiv.org/pdf/1301.4546v1.pdf

While this can be realized in PIConGPU with multi-plugins (registering ISAAC multiple times), it might also be interesting, for the sake of bullet-timing all the things, to abstract this in ISAAC itself, so that multiple camera instances, each working on individual sources with their own functor chains and transfer functions, create streams.

@psychocoderHPC just noted that this would also be beneficial for 3D rendering, since the two cameras have a fixed positional dependency (but everything else stays the same, so some resources might be shared).

theZiz commented 8 years ago

Hm, the paper describes an interesting way of post-processing data with prebuilt videos. However, the main idea of isaac is to view live data, not to save it for later™.

ax3l commented 8 years ago

That's what you™ think it is. But later™ could also be "in 5 minutes" in a synchronous workflow working on over 9000™ "live" simulations in parallel.

theZiz commented 8 years ago

But in the typical isaac workflow, you would also like to use the functor chains or change the render mode from raycast to iso surface rendering. This would not work with your proposed approach.

ax3l commented 8 years ago

Of course you can still initialize or change them or take control again. What stops a user from letting the sim keep the settings and recording the output until further instructions arrive? Also, a listening bot that monitors certain meta data could do the steering (e.g., move the camera focus adaptively to a region of interest or change the min/max values).

Live + interactive does not mean a human has to invest time into every detail. If a user does not like the output, he/she will just change it interactively for some time steps, and then might start a new recording for 5 min to, say, 1 hr to have a look at a similar simulation set in the meantime.

theZiz commented 8 years ago

The stereo branch of isaac - which will be merged soon - introduces a controller and a compositor class. The first class steers the projection matrices and decides how many render passes are needed. The compositor merges the passes into one image. At the moment, default classes for the already existing behaviour and for stereoscopy are implemented - in particular a compositor for side-by-side and one for anaglyph compositing.

This improvement can be used for your suggestion, too - of course with new controller and compositor classes. However, the current implementation is not capable of doing thousands or even millions of renderings every time step (!). So I will close this issue, as it is just not possible for isaac to do this in decent time. But the tools are there! Feel free to implement a controller and compositor which make such an insane amount of renderings - plus a client which is able to choose the right view depending on the camera settings.
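
To sketch the split (illustrative interfaces only - these are not the actual class names from the stereo branch):

```cpp
// Illustrative sketch of the controller/compositor split described above.
// Names and signatures are hypothetical, not the real ISAAC classes.
#include <array>
#include <cstdint>
#include <vector>

using Matrix4 = std::array< float, 16 >;  // column-major projection matrix
using Image   = std::vector< uint32_t >;  // RGBA framebuffer

// A controller decides how many render passes are needed and which
// projection matrix each pass uses.
struct Controller
{
    virtual int passCount() const = 0;
    virtual Matrix4 projection( int pass ) const = 0;
    virtual ~Controller() = default;
};

// A compositor merges the rendered passes into one output image,
// e.g. side-by-side or anaglyph for stereoscopy.
struct Compositor
{
    virtual Image composite( const std::vector< Image >& passes ) = 0;
    virtual ~Compositor() = default;
};
```

A bullet-time mode would then just be another controller (n matrices around a pivot) together with a compositor that forwards each pass as its own stream instead of merging them.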

ax3l commented 8 years ago

Stereoscopic: super awesome, great work! :sparkles:

This issue: I think we should leave this open for future consideration.

Methods like the multi-camera pre-rendering in ParaView Cinema (github) will definitely be among the only ones scientifically able to reduce volumetric data to a still useful amount in current and future applications, and I do not want that to get lost. Regarding the number of renderings, a few dozen cameras might already be sufficient.

theZiz commented 8 years ago

The ParaView Cinema example is impressive, but it does the same thing I did with PIConGPU: it chose a simulation which "accidentally" fits the visualization approach perfectly. I doubt ISAAC is able to visualize this climate simulation (if the scene is not disjointly distributed), but I also doubt that ParaView Cinema is useful with PIConGPU. The globe has the big advantage that the only transformation needed is a rotation. Zooming in and out seems to "just" be a zoom in and out of one existing video. However, with PIConGPU you may also want different rotation pivots and different clipping planes. Nobody is interested in the interior of the globe in a climate simulation. In PIConGPU you are.

So what you want is the possibility to define n cameras with possibly different scene settings (including functor chains), which all render images, and where you can select a view in the client - or create a new one if the existing ones are not enough. First of all, that is not the approach of ISAAC at the moment, so a whole rethinking of ISAAC would be needed. Secondly, your simulation will get slow. If every render kernel needs about ~50 ms, even "a few dozen cameras" will make one render step last for seconds. Per time step! This would make the simulation incredibly slow. It seemed to me that the preview video of ParaView Cinema does not draw every time step. Do you want this? I always thought the main idea of ISAAC is: not losing any scientific data while watching the simulation as it runs, zooming in to interesting features and filtering out the rubbish which is important for the simulation but not for me as the observer, right as it is simulated.

Tbh, this is a very, very huge project. This could be another diploma thesis - if not even a PhD thesis. Furthermore, ParaView Cinema (and others?) already did some nice work here, and it may be a better idea to just use other people's work instead of reinventing the wheel. In my eyes ISAAC has some very nice use cases in fast live preview and live steering of simulations or other high-rate data sources, plus the ability to get some live meta data.

tl;dr: Let's apply the KISS principle and not add dozens of new features to isaac which dilute its real purpose - doing fast live rendering, live steering and live meta data extraction - and which are already done very well by other tools.

bussmann commented 8 years ago

I agree to keep this open for now.

ax3l commented 8 years ago

It's a bit too long to describe my full vision here, but let me give you some short in-line comments:

> but I also doubt that ParaView Cinema is useful with PIConGPU. The globe has the big advantage that the only transformation needed is a rotation.

That might already be enough for a lot of simulations, e.g., just visualizing the 10 n_c and 1 n_c iso-surfaces of the density together with the 5 a_0 surface of the laser intensity.

> what you want is a possibility to define n cameras ... First of all, that is not the approach of ISAAC atm ...

ISAAC "just" offers the binding, transport and renderer (from a user perspective); a user's workflow should be able to be as free as she/he can imagine it in her/his scientific setup :statue_of_liberty:

> Secondly, your simulation will get slow. If every render kernel needs about ~50 ms ... It seemed to me that the preview video of ParaView Cinema does not draw every time step. Do you want this?

Then I might just render only every 50th time step or go for one very specific data source to push it down to 30 ms. Just for comparison: writing a data file every 500 steps with all information would be ideal. Unfortunately, on an 8000-GPU example, a PHDF5 write will take 25 min and an ADIOS write up to 5 min. If we render for less than half a minute every 30 minutes of simulation, we are good to go.
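
Back-of-the-envelope, with numbers that are purely assumed (50 ms per camera view, two dozen cameras, 1 s per simulation step, rendering every 50th step):

```cpp
// Rough overhead estimate; all constants below are assumptions, not measurements.
#include <cstdio>

int main()
{
    const double render_ms_per_camera = 50.0;    // assumed kernel time per view
    const int    cameras              = 24;      // "a few dozen"
    const int    render_every_n_steps = 50;      // render only every 50th step
    const double sim_ms_per_step      = 1000.0;  // assumed simulation step time

    const double burst_ms = render_ms_per_camera * cameras;               // 1200 ms
    const double overhead = burst_ms / ( render_every_n_steps * sim_ms_per_step );

    std::printf( "rendering burst: %.1f s, amortized overhead: %.1f %%\n",
                 burst_ms / 1000.0, overhead * 100.0 );                   // ~2.4 %
    return 0;
}
```

With those assumed numbers the amortized cost stays in the low single-digit percent range, far below the I/O numbers above.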

Of course, keeping the renderer as fast as possible will be key to rendering at high frequency and with a high number of cameras.

> I always thought the main idea of ISAAC is: not losing any scientific data while watching the simulation ...

You are already losing information while rendering a simulation, since ray casting is an irreversible operation on the data a user is interested in. The trick is to find means to derive the (derived) quantities one is interested in, in a representation a user can understand and an HPC system can process.

Since scientific exploration is a process of iterative progression (an implicit solver that never converges, but at some point we decide that we think we understand what we see), several sweeps over a data set (you say: pause, pinch & zoom) are necessary, meaning that using different view points (cameras), representations (functor chains + iso + cb + ...) and times (going back & forth) is very natural.

> Tbh, this is a very, very huge project. This could be another diploma thesis - if not even a PhD thesis.

I am talking less about a project and more about a workflow that we are aiming for. This might require additional libraries built on top of ISAAC, or making ISAAC more flexible, depending on the orthogonality of the tasks. Yes, your master thesis was integrated in this great endeavour :rocket:

> Let's apply the KISS principle and not add dozens of new features to isaac, ...

As said, it depends on whether some of those topics are actually "new" features. Making the number of cameras non-constant and letting the user set camera-position/functor-chain defaults seems quite natural to me and would bring some very useful scientific workflows to life. For illustration, such per-camera defaults could look like the sketch below.
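
(Struct and field names are made up for this sketch; nothing here is part of ISAAC's interface.)

```cpp
// Hypothetical per-camera defaults; names are illustrative only.
#include <array>
#include <string>
#include <vector>

struct CameraSettings
{
    std::array< float, 3 > position { 0.f, 0.f, -2.f };  // camera position
    std::array< float, 3 > lookAt   { 0.f, 0.f,  0.f };  // rotation pivot / focus
    std::string functorChain = "idem";                    // per-camera functor chain
    std::string renderMode   = "raycast";                 // or "isosurface"
};

// The number of cameras becomes a runtime value instead of a compile-time constant.
std::vector< CameraSettings > cameras( 24 );
```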

A last general comment: for the size of simulations we are doing, being "online" when a simulation runs is not something one can do in a lot of cases. (Maybe in the future, when batch systems are re-invented, etc., but that is nothing we can work with in the next years, and I like to get work done while I/O is becoming more and more infeasible.) That means that after queuing a large-scale job for several days (weeks) and getting it to start at 3 am in the morning, one will not be able to play with it. This is a scenario we should cover, too.

psychocoderHPC commented 8 years ago

If I understand the discussion correctly, the question is whether we need more than two camera views. My answer is yes. We need this not only for bullet time: there are 3D TV screens on the market where you can view real 3D without glasses. To use some of these screens you need to render six different camera views. This is nearly bullet time, but for live viewing. The ZIH at TU Dresden has such a screen available. IMO this has no high priority, but it would be something for the ZIH booth at Supercomputing 2016 which no other exhibitor shows.

theZiz commented 8 years ago

First, to the easy-to-answer question from @psychocoderHPC: rendering 6 different images for the kind of stereoscopy you describe would already be possible - I could add this in 5 minutes with my new controller and compositor system. ;)
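
Roughly like this, reusing the hypothetical Controller interface sketched above (eye spacing and the matrix handling are placeholders, not ISAAC code):

```cpp
// Sketch of a six-view controller for an autostereoscopic screen.
struct SixViewController : Controller
{
    int passCount() const override { return 6; }

    Matrix4 projection( int pass ) const override
    {
        // Identity matrix with a horizontal eye offset per view; a real
        // implementation would build a proper off-axis frustum instead.
        Matrix4 m = { 1,0,0,0,  0,1,0,0,  0,0,1,0,  0,0,0,1 };
        const float eyeSpacing = 0.03f;            // placeholder, scene units
        m[ 12 ] = ( pass - 2.5f ) * eyeSpacing;    // translate along x
        return m;
    }
};
```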

@ax3l: I understand you. Let's see what is possible in the future. ;) So what is definitely needed is a way to have more than one independent (!) camera (at the moment all parameters except the projection matrix are shared), and if e.g. 100 images are rendered, not only one node should compress them, but every node, round-robin style.
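
The round-robin part could be as simple as this sketch (MPI-based; function and variable names are made up, this is not the existing ISAAC code path):

```cpp
// Sketch of round-robin assignment of image compression to MPI ranks.
#include <mpi.h>

// Returns true if this rank is responsible for compressing (and sending)
// the image with the given index.
bool compressesImage( int imageIndex, MPI_Comm comm )
{
    int rank = 0, size = 1;
    MPI_Comm_rank( comm, &rank );
    MPI_Comm_size( comm, &size );
    return imageIndex % size == rank;   // image i goes to rank i mod size
}
```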

ax3l commented 8 years ago

yes and yes :sparkles:

psychocoderHPC commented 8 years ago

On 11 April 2016 at 20:54:45 CEST, Alexander Matthes notifications@github.com wrote:

> First, to the easy-to-answer question from @psychocoderHPC: rendering 6 different images for the kind of stereoscopy you describe would already be possible - I could add this in 5 minutes with my new controller and compositor system. ;)

That would be nice. If you are interested in testing this, I can bring you in contact with the right visualization guys at ZIH. We can discuss this offline at work.

> @ax3l: I understand you. Let's see what is possible in the future. ;) So what is definitely needed is a way to have more than one independent (!) camera (at the moment all parameters except the projection matrix are shared), and if e.g. 100 images are rendered, not only one node should compress them, but every node, round-robin style.

This round-robin compression would be nice for all rendered views with more than one camera.


bussmann commented 8 years ago

I'm starting to like this discussion. I will show you guys the VOXELS proposal from 5 years ago at one point.

theZiz commented 8 years ago

@psychocoderHPC: Yeah, of course. If I add it, the already working stereoscopic feature would benefit too, whenever one simulation step needs less time than compositing and compressing the image. ;)

bussmann commented 8 years ago

Which it can easily do due to strong scaling.

ax3l commented 8 years ago

But strong scaling the simulation will increase the compositing time ;)

theZiz commented 8 years ago

I doubt that, as the image size should stay the same?

ax3l commented 8 years ago

Even when keeping the final image size and the volume to be cast through constant, increasing the number of processors will decrease the speedup of the binary-swap algorithm that composites the final images in IceT.

Nevertheless, you still need to get into the region where render time ~ compositing time; I am just saying that this specific aspect does not scale as strongly as the simulation does.

theZiz commented 8 years ago

Interesting. I did a test of this in my diploma thesis. The binary-swap algorithm was getting faster and seemed to converge to a constant time needed for compositing the whole image. However, I only tested with a maximum of 64 ranks. ;)

However, with strong scaling the local volume per node shrinks, so fewer steps are needed in the raycast. So even if the IceT compositing needs more time, the raycast will definitely need less. I wonder who wins the fight...
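
A toy cost model of that fight (every constant below is invented, just to see the trend):

```cpp
// Toy model: per-rank raycast time vs. binary-swap compositing time under
// strong scaling. All constants are assumptions for illustration only.
#include <cmath>
#include <cstdio>

int main()
{
    const double globalCells  = 1024.0 * 1024.0 * 1024.0;  // global volume, cells
    const double pixels       = 1920.0 * 1080.0;           // final image size
    const double nsPerCell    = 1.0;   // assumed raycast cost per sampled cell
    const double nsPerPixelOp = 1.0;   // assumed blend/exchange cost per pixel
    const double latencyUs    = 20.0;  // assumed per-stage network latency

    for( int ranks = 8; ranks <= 32768; ranks *= 8 )
    {
        // Raycast: per-rank work shrinks roughly with the local volume.
        const double raycastMs = globalCells / ranks * nsPerCell * 1e-6;

        // Binary swap: ~log2(p) stages, total pixel work ~ pixels*(p-1)/p,
        // plus a latency term that grows with the number of stages.
        const double stages      = std::log2( (double)ranks );
        const double compositeMs = pixels * ( ranks - 1.0 ) / ranks * nsPerPixelOp * 1e-6
                                 + stages * latencyUs * 1e-3;

        std::printf( "p=%6d  raycast ~%7.1f ms  composite ~%5.2f ms\n",
                     ranks, raycastMs, compositeMs );
    }
    return 0;
}
```

With those made-up numbers the raycast keeps shrinking while the compositing term levels off around the per-pixel cost, so at some rank count the compositing would indeed take over.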

ax3l commented 8 years ago

I think in absolute time this should go quite far, since compositing is still really fast. But we should be aware that scaling to thousands or tens of thousands of ranks might bring this surprise.

bussmann commented 8 years ago

Can't sleep, must see the workings of my trolling unfold...

ax3l commented 8 years ago

:trollface:

theZiz commented 8 years ago

What do you want me to do?  LEAVE?  Then they'll keep being wrong!