pyDive has quite a large overlap with yt; however, I agree, it is much simpler, i.e., not designed for special purposes. Its mission goals are not clear yet; there are several directions one could push the development in, depending on what to focus on. Focus on:
Yes, that's pretty much what I am thinking, too. We could go for a follow-up strategic meeting in April.
I really like the point "pyDive + generic-distributed-arrays -> interoperability with other projects (PIConGPU, yt) for live-processing"
Just to re-focus on the urgent goals: I am a bit concerned that features like ghost layers and mappings are going way off-topic for this project (nor are they needed right now), and that it will never become feature-complete in the essential routines it actually has its strengths in.
Also, GPU-distributed arrays are nice but rather low priority. Proving or even developing "production codes" is imho even lower priority and should not be part of the project (nor a goal); what should be, rather, is the direct RAM binding (#4 #6) to actual codes such as PIConGPU (and other codes such as Warp) and pyDive's use in live and post-processing (#5 #14 #16 #18).
That's right, for two months of development now I have been adding optional (!) optimization capabilities to pyDive. This is an experiment to see whether there is potential to get into the region of C speed, and I am optimistic about that (I know, I am always optimistic ;) ). It is not about actually reaching C speed through planless low-level optimization, but about giving two specific, high-level optimization strategies a try, namely "lazy evaluation" and "ghost layering". In my defense I have to say that I finished the "generic-distributed-arrays" milestone, which is necessary for the RAM binding to PIConGPU and other codes, before I began the optimization branch. Furthermore, I think it's good to talk about all of that face to face.
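To make the "lazy evaluation" strategy a bit more concrete, here is a minimal, hypothetical sketch of the idea: array expressions are only recorded instead of executed immediately, and materialized when the result is requested, so that a real backend could then fuse the whole expression (e.g. via numexpr). The names below (`LazyArray`, `evaluate`) are illustrations only, not pyDive's actual API.

```python
# Sketch of the lazy-evaluation idea: record the expression, defer the work.
# Hypothetical names; this is NOT pyDive's implementation.
import numpy as np

class LazyArray:
    def __init__(self, thunk):
        self._thunk = thunk                      # callable yielding an ndarray

    @classmethod
    def wrap(cls, data):
        return cls(lambda: data)

    def _binop(self, other, op):
        rhs = other._thunk if isinstance(other, LazyArray) else (lambda: other)
        return LazyArray(lambda: op(self._thunk(), rhs()))

    def __add__(self, other):
        return self._binop(other, np.add)

    def __mul__(self, other):
        return self._binop(other, np.multiply)

    def evaluate(self):
        # Work happens only here; until now the expression was just recorded,
        # which gives a backend the chance to fuse it into a single loop.
        return self._thunk()

a = LazyArray.wrap(np.arange(1_000_000, dtype=np.float64))
b = LazyArray.wrap(np.ones(1_000_000))
expr = a * 2.0 + b          # no computation yet
result = expr.evaluate()    # entire expression evaluated on demand
```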
With lazy evaluation I am with you (even if I think it's not very urgent). I am not sure ghost layers are really necessary for now, but they can be useful for some mesh operations like stencils for div calcs, etc. (still rather low priority at the moment); see the sketch below.
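For reference, a rough illustration of why ghost (halo) layers matter for such stencil operations: each rank holds a slab of a 1D field plus one halo cell per side, exchanged with its neighbours, so a centered-difference stencil (e.g. one term of a divergence) can be evaluated without gathering the full array. This is a plain mpi4py/numpy sketch, not pyDive's actual ghost-layer machinery.

```python
# Ghost-layer sketch with mpi4py: 1D domain decomposition, one halo cell per side.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 16
dx = 1.0
# local slab with one ghost cell on each side (boundary ghosts stay zero here)
f = np.zeros(n_local + 2)
f[1:-1] = np.sin(np.linspace(rank * n_local, (rank + 1) * n_local - 1, n_local) * dx)

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# fill the ghost cells from the neighbouring ranks
comm.Sendrecv(sendbuf=f[1:2], dest=left, recvbuf=f[-1:], source=right)
comm.Sendrecv(sendbuf=f[-2:-1], dest=right, recvbuf=f[:1], source=left)

# centered difference df/dx on the interior cells only
dfdx = (f[2:] - f[:-2]) / (2.0 * dx)
```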
Great to hear that #6 works now (required for #4) :)
Yes, we should continue again this week when you are at the lab, so we can make faster progress and get the full picture.
Do we have common mission goals and can we create interoperability with the yt project?
Paper (arXiv) Section 5 is pretty close to what we do.
Nevertheless, pyDive should not try to implement arbitrary rendering and special-purpose analysis as yt does (pyDive is a library, yt a full-fledged framework). Probably the best use of pyDive is to combine it with parallel codes, to allow fast MPI-based post- and live-processing, and to forward visualization tasks transparently, e.g., to ParaView.