Closed mallickrishg closed 1 year ago
Thanks for starting this @mallickrishg
- Converting select notebooks to the new numpy dicts, which you have shown beautifully. There is that last notebook still pending: main/refactor/examples_3_intersecting_faults_refactor.ipynb
Sounds good and I'm happy to do this. Can you merge your branch first so that we keep this development on main?
- Code cleanup - we need to get rid of all the older functions we don't need anymore in bemcs.py and ensure that the function naming convention is uniform (lower camel case?).
I agree. I'm happy to do this after we're sure that we've converted the notebook above. I vote that we get really boring and simply use snake_case throughout. That's the Python standard (although lots of prominent libraries are different, e.g., Pandas). I'd also like to remove some of the unused plotting functions from bemcs.py. We can always grab them from the history if we need them down the road.
- Something I wanted to do was to add the ability to evaluate stresses at points coincident with a given element without our current hack (shifting the evaluation point along the unit normal by 1e-8)
For the quadratic kernels, we shouldn't be using any hack for the on-fault evaluation. We should be using the coincident functions that do the calculation exactly on the fault. These are much smaller functions and so may be much faster.
Regarding speed. This is probably worth looking into, and it's just super interesting. There are really two things to understand. First is what's slow. Is it all the trig terms?
- by using numpy tensor products for operations instead of loops
Pro: No new libraries or such. Con: Could be difficult to keep track of indices, and could end up with a large RAM footprint.
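As a sketch of the tensor-product idea: a hypothetical trig-heavy kernel (a stand-in, not the actual bemcs kernel) evaluated with a double loop versus a single broadcasted array. Assuming the real kernels have a similar (observation x source) structure:

```python
import numpy as np

# Hypothetical trig-heavy kernel, a stand-in for the real bemcs stress kernels.
def kernel_loop(x_obs, x_src):
    # Naive double loop: one kernel value per (observation, source) pair.
    out = np.zeros((x_obs.size, x_src.size))
    for i, xo in enumerate(x_obs):
        for j, xs in enumerate(x_src):
            out[i, j] = np.sin(xo - xs) / (1.0 + (xo - xs) ** 2)
    return out

def kernel_broadcast(x_obs, x_src):
    # Same computation with one broadcasted (n_obs, n_src) array, no loops.
    dx = x_obs[:, None] - x_src[None, :]
    return np.sin(dx) / (1.0 + dx ** 2)
```

The broadcasted version builds the full (n_obs, n_src) intermediate array in one shot, which is exactly where the RAM concern above comes from.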
- incorporating just-in-time compilation
Pro: Trying some numba decorators only takes an hour or so. Con: Numba can be hit or miss; however, it looks like trig functions and powers are supported in no-python mode.
This should probably be our first step since it's so easy to try!
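To show how cheap this experiment is: a minimal sketch with a hypothetical trig-and-powers loop of the kind numba's no-python mode handles well. The try/except fallback is only there so the snippet runs even without numba installed:

```python
import numpy as np

# Graceful fallback: a no-op decorator if numba is unavailable, so the
# sketch runs (slowly) either way.
try:
    from numba import njit
except ImportError:
    def njit(func):
        return func

@njit
def kernel_sum(x, y):
    # Trig + powers in a tight scalar loop: the pattern numba compiles well.
    total = 0.0
    for i in range(x.shape[0]):
        total += np.sin(x[i]) * np.cos(y[i]) + x[i] ** 2
    return total
```

The first call pays a one-time compilation cost; subsequent calls run the compiled loop, so any benchmark should warm the function up once before timing.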
- Cython to speed up loops
Pro: Can be done gradually, and there's lots of help on the internet. Con: We may have to be smart to get it close to the metal.
- JAX
Pro: Modern Google-powered speed on CPU, GPU, and TPU. Con: Sort of a different programming model. May not be faster at all.
- FORTRAN
Pro: It's FORTRAN, so it's easy to write. Con: It's FORTRAN, and that means some potential issues with compiler availability...and it's hard to get help with FORTRAN these days.

And maybe there are more ways.
Note 1. I'm not sure any of these are obviously going to be faster than NumPy. If NumPy is calling out to SIMD-optimized C, there's a very low probability that we're going to figure out something faster.
Note 2. To start these experiments, I think we should copy the quadratic kernel calculation out of bemcs.py
and into a notebook or script where we can work on this in a simple isolated setting.
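For those isolated experiments, a small timeit harness could work as a starting point; `quadratic_kernel_stub` here is a hypothetical placeholder for whatever gets copied out of bemcs.py:

```python
import timeit
import numpy as np

# Hypothetical stand-in for the quadratic kernel pulled out of bemcs.py.
def quadratic_kernel_stub(x, y):
    return np.sin(x)[:, None] * np.cos(y)[None, :]

x = np.linspace(0.0, 1.0, 200)
y = np.linspace(0.0, 1.0, 200)

# Best-of-several timing avoids one-off noise; report seconds per call.
n_repeat, n_calls = 5, 20
best = min(
    timeit.repeat(lambda: quadratic_kernel_stub(x, y),
                  repeat=n_repeat, number=n_calls)
) / n_calls
print(f"{best * 1e3:.3f} ms per call")
```

Taking the minimum over repeats is the usual convention for micro-benchmarks, since the fastest run is the least contaminated by other processes.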
> Sounds good and I'm happy to do this. Can you merge your branch first so that we keep this development on main?
Done
> I agree. I'm happy to do this after we're sure that we've converted the notebook above. I vote that we get really boring and simply use snake_case throughout. That's the Python standard (although lots of prominent libraries are different, e.g., Pandas). I'd also like to remove some of the unused plotting functions from bemcs.py. We can always grab them from the history if we need them down the road.
I agree with both - snake_case & remove unused plotting functions
> For the quadratic kernels, we shouldn't be using any hack for the on-fault evaluation. We should be using the coincident functions that do the calculation exactly on the fault. These are much smaller functions and so may be much faster.
My bad. I was using the hack until now. My next task (next week) will be to do exactly as you said: add a check for any coincident evaluations and use the other function.
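A possible shape for that check, with hypothetical `kernel_far_field`/`kernel_coincident` placeholders standing in for the real bemcs functions:

```python
import numpy as np

# Hypothetical stand-ins for the two kernel evaluations discussed above.
def kernel_far_field(obs, mid):
    return 1.0 / np.linalg.norm(obs - mid)

def kernel_coincident(obs, mid):
    return 0.0  # placeholder for the exact on-fault expression

def evaluate(obs_points, element_midpoints, tol=1e-12):
    # Dispatch per (observation, element) pair: use the exact coincident
    # kernel when the point lies on the element, and the far-field kernel
    # otherwise -- no 1e-8 shift along the unit normal.
    out = np.zeros((len(obs_points), len(element_midpoints)))
    for i, obs in enumerate(obs_points):
        for j, mid in enumerate(element_midpoints):
            if np.allclose(obs, mid, atol=tol):
                out[i, j] = kernel_coincident(obs, mid)
            else:
                out[i, j] = kernel_far_field(obs, mid)
    return out
```

The tolerance is a judgment call: it only needs to distinguish "exactly on the element" from "merely nearby," and a tight absolute tolerance like 1e-12 should be safe if coincident points are constructed from the element geometry itself.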
Regarding speeding things up. I only know how to use numpy, all of the other options are new to me. Looking at the f_stress_displacement trig functions I can't think of a really straightforward way off the top of my head to avoid for loops entirely. That said, I think if I spend some time looking at it I might be able to find a way.
Kind of ironic. I'm supposed to be young blood, and yet all I suggest are really old ideas. Need to up my game :D
> Regarding speeding things up. I only know how to use numpy, all of the other options are new to me. Looking at the f_stress_displacement trig functions I can't think of a really straightforward way off the top of my head to avoid for loops entirely. That said, I think if I spend some time looking at it I might be able to find a way.
I really don't know what's expensive here. I don't know if it's looping overhead or trig functions or what. I have no clue. Getting an isolated example and then working on that is the way forward. For me, the first code tasks are finishing the conversions to els, deleting old unused notebooks, cleaning up bemcs.py, and getting the triple junction to work. I want to consolidate the foundation!
@brendanjmeade I haven't been able to spare much time for bemcs this week, and I won't be able to spare much time to code until next Thursday. In the meantime I wanted to organize a new thread with important tasks for the next two weeks.
Aside from these tasks, there is some opportunity for us to speed up some of the computations, though this is not an immediate need. The stress and displacement kernel evaluations are done with for loops and are quite slow. There are a few clever ways to speed these up in Python (I am told by a really smart grad student I work with), and perhaps you know more. I am just laying out the absolute edge of my knowledge here. Let me know what you think.