Your suggestions sound very reasonable to me. Some questions below -
- I use Pylance in VSCode and have never used or heard of Black (I am not super tech savvy) - is it any different, or is there something specific about Black that will help us work together?
Pylance helps you code in VSCode, and Black formats your code. There's a great VSCode plugin that works on both `.py` and `.ipynb` files. It's really useful because it erases preferences that either of us may have. You like 2 spaces and I like 4? Doesn't matter, Black has a rule. Different thoughts on where to break long lines of code? Doesn't matter, Black has a rule for this. All sorts of microdecisions go away with an autoformatter, and it allows us to focus on content. It also develops a shared understanding of how things should look, so that reading someone else's code gets easier and easier because of common formatting. On the other hand, autoformatters are boring because they take some of the personal style out of code. I say use it unless you really want to bring your own style to the code base!
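To make that concrete, here's an invented before/after snippet (not from this repo) showing the kind of rewrite Black does:

```python
# Before Black (made-up example): idiosyncratic spacing and line breaks
def traction( x,y ,mu = 30e9):
    return mu*( x+y )

# After Black: one canonical layout, no matter who typed it
def traction(x, y, mu=30e9):
    return mu * (x + y)
```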
- I know there is a lot of magic number stuff with the linear operator and boundary condition vector construction. I think we might want to have a chat about how to deal with this. I put it in there because I didn't put the effort in to generalize because I wasn't sure how we would structure our matrices - alternating shear and normal components OR all shear together and all normal together .... and so on. Once we do, many hard coded numbers will disappear.
Totally agree - that's how I'm writing right now too. I did stumble when trying to put the right rows in for the mean slip approach. It can very much be a down-the-road-as-we-go type of thing.
- I am okay with a monolith. My suggestion is that we don't put the various plotting functions in the bemcs module though.
A monolith it is then. Why not put plotting functions in there? We'll always know where they are if we do. Happy to consider this.
- The next one might need some compromise - I personally like classes and don't enjoy the ultra-freedom of lists. I propose to make `elements` into a class. The advantage is that I can call attributes such as midpoints or normal vectors for all elements at one time instead of looping through them. But I can be persuaded to back off.
You are now leading development, so I'll respect your design choice. But first, let me make my case that classes are unnecessary, if not bad. Some thoughts: class definitions carry a lot of boilerplate (`self` be gone!), so much so that for data-only classes (as we have) the `@dataclass` decorator was introduced. I can live with a `@dataclass` approach to this. Again, if you are really feeling classes, then go for it, but `elements` as a list of dicts with `.` field accessibility using addict gives us all the data access and construction we could ever want with none of the class machinery. OK, I've made a case now and I hope it's at least interesting to you. Again, as you are driving development, if you have a good case for classes then let's hear it and go for it!
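As a minimal sketch of that idea (field names invented for illustration), addict gives `.` access on plain dicts with no class definitions:

```python
from addict import Dict

# Build elements as a plain list; each element is an addict Dict
elements = []
element = Dict()
element.x_center = 0.5  # dot-style field creation, no class needed
element.y_center = 0.0
elements.append(element)

print(elements[0].x_center)  # dot-style field access -> 0.5
```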
- Regarding the name of the package, what are your thoughts on Quadratic-node 2-D Boundary Integral Equation Solver (Q2-BIES)? Just trying to make something catchy.
I was going to go in the opposite direction: a name that was a little bit more general. I don't want the name to define the scope today and not a month from now. Maybe keeping `bem` but looking like a regular word and being easy to type. `bemuse`? It's easy to type, has `bem` in it, and reflects our opinions of the BEM world ;) Sticking with `bemcs` is a fine option too, as this topic may be more effort than it is worth.
@brendanjmeade to be honest, my knowledge about what are good and bad coding practices is not well formed. I am counting on you to help us with this. This might be the first time I have actually worked on a coding project collaboratively.
My only reason for suggesting classes was because I was not able to easily access element midpoints and the normal vectors as arrays, and I thought making an object out of elements would solve that problem. I used to know how to use C++, then MATLAB, and now I'm scraping by with Python - so you see my class baggage. Additionally, I hesitate to use more libraries than I have to - that's just because Python is overwhelming and I can't keep up with things that change so quickly. So when it comes to decisions about how to develop useful software, I defer entirely to you and your judgement.
> to be honest, my knowledge about what are good and bad coding practices is not well formed. I am counting on you to help us with this. This might be the first time I have actually worked on a coding project collaboratively.
Just because I have opinions doesn't mean I'm right! See below for a bit more info.
> My only reason for suggesting classes was because I was not able to easily access element midpoints and the normal vectors as arrays, and I thought making an object out of elements would solve that problem.
What about this for getting element midpoints (or any other scalar quantity)?
```python
import numpy as np

def get_field(field_name, elements):
    return np.array([element[field_name] for element in elements])

x_centers = get_field("x_center", elements)
y_centers = get_field("y_center", elements)
```
> I used to know how to use C++, then MATLAB, and now I'm scraping by with Python - so you see my class baggage. Additionally, I hesitate to use more libraries than I have to - that's just because Python is overwhelming and I can't keep up with things that change so quickly. So when it comes to decisions about how to develop useful software, I defer entirely to you and your judgement.
I hear you. I did everything in C as an undergrad and MATLAB in grad school and for many more years. I really like Julia as a language for science, but the popularity of Python means it'll be around for a long time.
Again, my goal is to keep things easy to discover, develop, extend, and do unknown future things with. In terms of libraries, we can spin up an `environment.yml` and build a proper conda environment for this, but we don't have to do that now.
However, we do have to decide on a data structure. Do you want a list of dicts, a dataclass, or a class?
Let's go with a list of dicts
Sounds good.
I'm also looking to do a few more cleanup things:

- Add an `environment.yml`. It'll probably include way more packages than we need right now and be based off the one from `celeri`, but doing this will allow us to have a reliable environment.
- Add install directions to `README.md`.
- Create a folder named `exploratory` (`basic`, or `foundations`) where we can move some of the notebooks that we initially explored ideas in but aren't really updating anymore.

Also, let's decide on a name for this package. Currently, there are three contenders:

- `bemcs`: Pro: It's already in use and easy to type! Con: Meaning could be limiting (if not misleading!).
- `bemuse`: Pro: Easy to type and whimsical. Con: Doesn't say what the software does.
- `Q2-BIES`: Pro: Very descriptive. Sounds smart. Con: Hard to type. Meaning could be limiting.

Thoughts?
After a Sunday at the movies, I have a couple more suggestions that might be more informative about the package.
I like BESCT more.
> I'm also looking to do a few more clean up things:
>
> - Add an `environment.yml`. It'll probably include way more packages than we need right now and be based off the one from `celeri`, but doing this will allow us to have a reliable environment.
> - Add install directions to `README.md`.
> - Create a folder named `exploratory` (`basic`, or `foundations`) where we can move some of the notebooks that we initially explored ideas in but aren't really updating anymore.
That sounds great!
Both of these are very creative. There is an issue with `besct`, as it would be pronounced "biscuit", and that has slang connotations that should be avoided.
I guess at this point I'd vote to keep it as `bemcs` because it's fine (not great, not bad), it's super easy to type, and it requires no additional work.
> That sounds great!
I've pushed most of that to the repo now. I wouldn't install until we have decided on a name, because we want the environment name to match the library name.
Let me know what you think!
And `bemcs` is what it shall be known as henceforth.
One other note. If a list of dicts doesn't sound right, we could just do a single dict of numpy arrays. So we'd access the `x_center` value for the 6th element as:

```python
elements.x_centers[5]
```

This is what I did in Bem2d.jl (with a struct instead of a dict, because Julia). We'd need some helper functions to append a new element nicely, but no big deal. Thoughts?
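Here's a hedged sketch of that dict-of-numpy-arrays idea, including the kind of append helper mentioned above (names are illustrative, not a committed API):

```python
import numpy as np
from addict import Dict

# Dict of numpy arrays: one array per field, one entry per element
els = Dict()
els.x_centers = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
els.y_centers = np.zeros(6)

def append_element(els, **fields):
    """Append one element by growing every field array by one entry."""
    for name, value in fields.items():
        els[name] = np.append(els[name], value)

print(els.x_centers[5])  # x_center of the 6th element -> 5.0
append_element(els, x_centers=6.0, y_centers=0.0)
```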
That sounds exactly like what I wanted to do with classes. So in this data structure, if I write `elements.x_centers`, I would get a 1-D numpy array?
Yep! The only thing that requires a little work is that we then have to write little functions to add and delete elements by crawling across the fields in the dictionary, but that's totally reasonable. I think a list of dicts is the nicest in terms of ease of adding new elements, but a dictionary of numpy arrays is super easy to use for fast access to a lot of things. I'm good with either, and we still have time to make that decision. What do you think?
I am certainly on board with this list of dictionaries concept. I don't know how to implement this, but I think it will be very little work to update most functions to this style.
What we have currently is a list of dicts. That means we write things like `elements[0].x1` to get `x1` from the first dictionary in the list `elements`. Here, we'd have to use something like the `get_field(field_name, elements)` function that I defined above to get all of the `x1` values for all elements.
The dictionary of numpy arrays approach would allow us to write `elements.x1[0]` to access the `x1` from the first element. It would also allow us to get all the `x1` values by simply writing `elements.x1`. As I mentioned before, appending and deleting require a bit more work here, but it's a very easy-to-use data structure that is great in practice, and it's what I did in Bem2d.jl (albeit with a struct rather than a dict).

I'm guessing you're leaning towards this latter case? I'm totally great with that!
Note: The canonical way to access dictionary fields in python is by writing `elements["x1"][0]`. I respect that, but using `.` notation is so much easier. There are awesome libraries that enable this, and the one that I use is called addict. It's already listed as a `bemcs` dependency in `environment.yml`, so it should just work from that environment.
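For illustration, both access styles work on the same addict `Dict` (field name from the example above):

```python
from addict import Dict

els = Dict()
els.x1 = [0.0, 1.0, 2.0]

print(els["x1"][0])  # canonical dictionary access
print(els.x1[0])     # addict dot notation, same underlying data
```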
@brendanjmeade - I was thinking about this and realised there are a couple (+1) of coding tasks we need to do sooner rather than later.
Let me know which tasks you would like to do, and I can take the others.
@mallickrishg
> - Using addict to change how we store numpy arrays in lists, and updating the various functions that use those lists
This should be a high priority because it affects almost everything downstream and is different from what we are currently doing. I'll do this one. (@brendanjmeade)
To do this sensibly, I suggest the following plan:
> - The stress kernel and displacement kernel calculation is currently done in a hacky way. I translate and rotate the coordinates before sending them into the stress/displacement calculation function because that function needs flat faults centred at 0,0. After calculating stress/displacement, I unrotate these values back to the original coordinate system.
This seems sensible to me. That's how I did it in bem2d. See: https://github.com/brendanjmeade/bem2d/blob/4b35cdec6ac78ff1636fa1270a665f26e1fdc267/bem2d/quadratic.py#L5C5-L5C37. Thoughts?
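For reference, a hedged sketch of that translate/rotate, evaluate, rotate-back pattern for the displacement components (function names are placeholders, not the bemcs or bem2d API; stresses would additionally need a tensor rotation):

```python
import numpy as np

def rotate(x, y, theta):
    """Rotate 2-D vectors by angle theta about the origin."""
    c, s = np.cos(theta), np.sin(theta)
    return c * x - s * y, s * x + c * y

def displacements_for_rotated_element(x_obs, y_obs, x_c, y_c, theta, flat_kernel):
    # Translate so the element center sits at (0, 0), then rotate into the
    # frame where the element is flat, as the kernel function requires.
    x_loc, y_loc = rotate(x_obs - x_c, y_obs - y_c, -theta)
    ux_loc, uy_loc = flat_kernel(x_loc, y_loc)
    # Un-rotate the displacement vectors back to the original frame.
    return rotate(ux_loc, uy_loc, theta)
```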
> - Also we need to deal with stress calculations at locations that lie exactly ON the fault plane. Currently I'm shifting each evaluation point by 1e-8 along the unit normal vector.
The same function that I shared above accepts an argument to do the on-fault ("coincident" in BEM language) calculation. It calls out to a different kernel function: https://github.com/brendanjmeade/bem2d/blob/4b35cdec6ac78ff1636fa1270a665f26e1fdc267/bem2d/quadratic.py#L1104
Do you want to experiment with this one? (@mallickrishg)
> - Also also, we need to monolithify `bemcs`
I'll do this as it should be done before the switch to a dictionary of lists (above) (@brendanjmeade)
@brendanjmeade This sounds like a great plan. So your tasks are -
Once you are done with those two tasks, I can work on basically just taking what you had already implemented in bem2d and putting it into bemcs.
Let me know which 2 days are available in your schedule to work on this, and I will keep my paws off the repository.
@mallickrishg
I've carved out dedicated time to do this next Wednesday and Thursday. Work for you?
Yes, that will work. Thanks!
@mallickrishg
The new data structure is up and running!
- `elements` was the old data structure for keeping track of element geometry. It was a list of dictionaries, with one for each element. We previously accessed information as `elements[idx]["x1"]`, and accessing multiple elements at once was a pain.
- `els` is the new data structure for keeping track of element geometry. It is simply a dictionary of numpy arrays. We can now access information as `els.x1[idx]` and do all the fancy indexing we want. It's better!
- The three notebooks in the `refactor` folder have all been updated to the `els` data structure, and all reliance on the old `elements` data structure has been removed.
Next steps:

- Add any/all notebooks that we want to support the modern `els` data structure to the `refactor` folder and I'll get them updated. All notebooks that don't support the `els` data structure will be deprecated and we'll remove them. Of course they'll still be in the repo history.
- Once everything runs on the `els` data structure, I'll go through `bemcs.py` and remove all code associated with the older `elements` data structure.

Let me know what you think and I'm happy to discuss this.
That's amazing, Brendan! I will take a look at this over the next couple of days.
In the meantime, I was wondering how you run the notebooks without adding a `sys.path.append('../')  # path to bemcs` line to your notebooks?
> In the meantime, I was wondering how you run the notebooks without adding a `sys.path.append('../')  # path to bemcs` line to your notebooks?
Just install it as a local package following the instructions in our README.md!
@brendanjmeade I just checked: all 3 notebooks in the refactor folder work for me. I like how calling elements of `els` is more intuitive now (at least for me). Thank you so much for doing this.

However, I noticed that I was unable to use autocomplete to fill out names of attributes in `els`, and it was a bit more involved to find out the names of all the attributes of `els` than when we had a list and I could just double click it and pandas would open a table of entries. With `els` I printed it out and then looked for the names of the arrays. Is there some way to make it easier for the user to know what all the attributes inside `els` are?
`els.keys()` will print the field names.
Thanks for that.
> 2. Add any/all notebooks that we want to support the modern `els` data structure to the `refactor` folder and I'll get them updated. All notebooks that don't support the `els` data structure will be deprecated and we'll remove them. Of course they'll still be in the repo history.
Regarding notebooks with more functionality, we have 1 notebook which uses all the important functions in bemcs - https://github.com/brendanjmeade/bemcs/blob/main/examples/examples_3_intersectingfaults.ipynb. If you could look it over, we could discuss better ways to do matrix assembly. I feel like the matrix assembly could be made into a function, but there needs to be some work on how to deal with various boundary conditions. Some boundaries are ones where we will impose time-invariant displacements or tractions, while at others there can be time-dependent boundary conditions. I haven't thought much about how to do this.
Before doing this, let me ask this question: do you want to try an object (dataclass) formulation first, before we commit to the dictionary of numpy arrays? You really seem to like classes, and I feel like I've had the chance to make my case, so you should have the chance to make yours. It's pretty easy to change my mind with a bit of nice code. Feel like trying it that way, at least for a small example?
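For reference, a minimal sketch of what a `@dataclass` formulation might look like (field names illustrative; this is the alternative being offered, not existing bemcs code):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Elements:
    x_centers: np.ndarray = field(default_factory=lambda: np.array([]))
    y_centers: np.ndarray = field(default_factory=lambda: np.array([]))

els = Elements(x_centers=np.array([0.0, 1.0]), y_centers=np.zeros(2))
print(els.x_centers)  # 1-D numpy array via attribute access
```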
I am happy to use a dict of numpy arrays. It feels quite close to using data structures in MATLAB. You won't hear any more complaints from me now that I can access every attribute as an array :)
@mallickrishg
I've done the latest notebook conversion in the refactor folder. It can be found at:
Please give this a run when you can to make sure it works at your end. If it does, then my next steps are:
- Create a new root-level folder called `notebooks` that becomes our home for almost all notebooks. The current notebooks in `refactor` would move there. The other folders with notebooks would be deleted (`refactor`, `examples`, and `exploratory`). All notebooks in those folders would be deleted too (although still accessible in the git history). Before I do this we should add the most up-to-date triple junction notebook to `refactor` so that it can be updated as well. Also, add any other notebooks that you want updated.
- Clean up `bemcs.py`. Delete any function that refers to the `elements` data structure.
- Keep the `point_source` folder for now. That's a mess, as it should be right now.

The goal of all of this is to help us transition to more organized and maintainable code. We're starting to get the basic pieces checked off, so taking the time to consolidate a handful of good notebooks is a worthwhile endeavor.
Thanks Brendan, will look at the notebook tomorrow and get back to you.
I don't have any other notebook I can think of that needs the new `els` update.
I do want to chat with you about choices on how we assemble the big linear operator matrix - whether we want to create a function that does that, or leave it as something the user has to do based on examples we provide. Perhaps a chat on Thursday/Friday, or next week?
Which is the current triple junction notebook that I should translate?
I don't have a clear answer on the big linear operator. For now, I think assembling as we do works well. We still have so many conventions to work out and I think we need more practice to know what the right thing to do is. Down the road, we could certainly automate that. Thursday works well for a chat.
I think this was the 'simple' triple junction notebook we were working on: https://github.com/brendanjmeade/bemcs/blob/main/examples/examples_3_simpleintersectingfaults.ipynb
> I don't have a clear answer on the big linear operator. For now, I think assembling as we do works well. We still have so many conventions to work out and I think we need more practice to know what the right thing to do is. Down the road, we could certainly automate that. Thursday works well for a chat.
Thursday at 12noon LA (3pm Eastern)?
Note to self: `get_matrices_slip_slip_gradient` and `get_design_matrix` look to be exactly the same. Go through all examples and convert to the former because it has a nice descriptive name (`get_matrices_slip_slip_gradient`).
Fixed
Note to self: Push all

```python
ux = kernels_s[3] @ quadratic_coefs_s + kernels_n[3] @ quadratic_coefs_n
uy = kernels_s[4] @ quadratic_coefs_s + kernels_n[4] @ quadratic_coefs_n
sxx = kernels_s[0] @ quadratic_coefs_s + kernels_n[0] @ quadratic_coefs_n
syy = kernels_s[1] @ quadratic_coefs_s + kernels_n[1] @ quadratic_coefs_n
sxy = kernels_s[2] @ quadratic_coefs_s + kernels_n[2] @ quadratic_coefs_n
```

to a function. Pretty sure I've done this somewhere, but roll it out for all notebooks in the `refactor` folder.
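Something like this hedged sketch would do it (the name and return order are illustrative):

```python
def compute_displacements_stresses(kernels_s, kernels_n, coefs_s, coefs_n):
    """Combine shear and normal quadratic coefficients with their kernels."""
    sxx = kernels_s[0] @ coefs_s + kernels_n[0] @ coefs_n
    syy = kernels_s[1] @ coefs_s + kernels_n[1] @ coefs_n
    sxy = kernels_s[2] @ coefs_s + kernels_n[2] @ coefs_n
    ux = kernels_s[3] @ coefs_s + kernels_n[3] @ coefs_n
    uy = kernels_s[4] @ coefs_s + kernels_n[4] @ coefs_n
    return ux, uy, sxx, syy, sxy
```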
I think you did - it's right here
@brendanjmeade
> I've done the latest notebook conversion in the refactor folder. It can be found at:
> Please give this a run when you can to make sure it works at your end. If it does, then my next steps are:
>
> - Create a new root-level folder called `notebooks` that becomes our home for almost all notebooks. The current notebooks in `refactor` would move there. The other folders with notebooks would be deleted (`refactor`, `examples`, and `exploratory`). All notebooks in those folders would be deleted too (although still accessible in the git history). Before I do this we should add the most up-to-date triple junction notebook to `refactor` so that it can be updated as well. Also, add any other notebooks that you want updated.
> - Clean up `bemcs.py`. Delete any function that refers to the `elements` data structure.
> - Keep the `point_source` folder for now. That's a mess, as it should be right now.
>
> The goal of all of this is to help us transition to more organized and maintainable code. We're starting to get the basic pieces checked off, so taking the time to consolidate a handful of good notebooks is a worthwhile endeavor.
Checked it. There were some errors because I was cleaning up the old traction kernel function (it used `elements`), but it works fine now.
Also, I am using the Black formatter, so I think whatever code I write now will be uniformly formatted like yours.
@brendanjmeade One of my tasks, which I had completely forgotten about, was to implement the coincident and far-field kernel calculations within the `get_displacement_stress_kernel()` function.
However, while I was trying to figure out how to do it, I realized that the coincident kernels are only evaluated at 1 point, because the inputs to it are `quadratic_kernel_coincident(a, nu)`!
The problem with this is that we want to compute the kernel on the element, i.e., for a flat element, y = 0 but x will vary from -a/2 to a/2.
Does this mean we are okay to just ignore the coincident kernel calculation and just use the far-field kernel with an extremely small offset along the unit normal?
I think you are right that this is a real issue. In the Julia version of this you can see that there are contributions from all three nodes but, as you point out, the function that calculates the $f$ terms does not depend on $x$. Ooof. Even worse, I didn't even leave a note stating where I'm actually evaluating these terms. My naive guess would be $x=0$, representing the center node? What we really need is this term evaluated at $y=0$ for each of the node coordinates. I don't think we have that.
> Does this mean we are okay to just ignore the coincident kernel calculation and just use the far-field kernel with an extremely small offset along the unit normal?
My intuition says no... but intuition is no way to live. A simple experiment would be to compare the coincident and far-field kernels at $x=0$ as the far-field kernel is evaluated ever closer to $y=0$. Does the far-field kernel converge to the same value as the exact coincident kernel?
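A hedged sketch of that experiment (the two kernel callables are placeholders for the actual bemcs kernel functions):

```python
import numpy as np

def convergence_check(far_field_kernel, coincident_kernel, x=0.0):
    """Compare far-field kernel values approaching y = 0 against the
    coincident kernel at the same x; kernels are passed in as callables."""
    exact = coincident_kernel(x)  # assumed nonzero at this x
    for y in 10.0 ** -np.arange(4, 17, 2):  # evaluate ever closer to y = 0
        approx = far_field_kernel(x, y)
        print(f"y = {y:.0e}: relative error = {abs(approx - exact) / abs(exact):.3e}")
```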
@brendanjmeade I think the coincident kernel might be evaluated at the following 3 $(x, y)$ locations: $(-0.67, 0)$, $(0, 0)$, $(0.67, 0)$. See the attached figure for a comparison between the coincident and far-field kernels (at a distance of 1e-16 off the fault). You can see the notebook here - https://github.com/brendanjmeade/bemcs/blob/main/examples/testing_coincident_vs_farfield.ipynb
I found $x = \pm 0.67$ by trial and error; maybe there is an actual number, can you recall? That said, the far-field kernel does a really good job of matching the coincident kernel values.
That feels like good news!
> I found $x = \pm 0.67$ by trial and error; maybe there is an actual number, can you recall?
I can't recall, but that sounds reasonable? We can also check this by evaluating each of the three quadratic functions individually (coefficients: [1, 0, 0], [0, 1, 0], [0, 0, 1]) and then looking for the minimum or maximum of each parabola. One will obviously be at $x=0$ and the other two should have either minima or maxima at their node locations... maybe?
You were right! The zero crossing of the slip basis functions is the answer we were looking for. 2 of the bases have a zero crossing at $x = 0$, while the third one, $f_{\mathrm{ab}, 2}(x') = (1 - \frac{3x'}{2a}) (1 + \frac{3x'}{2a})$, has a zero crossing at $x = \pm 2/3 \approx 0.66667$!
The next thing we need to figure out is how to evaluate the coincident kernels at more than 3 points per node. Is there an appropriate interpolation we can use - it might just be quadratic, no?
That aside, the far-field kernel converges to the true solution. So that's great news! For a fault of half-length 1 unit, and stress evaluated $10^{-8}$ units away from the fault, the error in stress between coincident and far-field is $\sim 10^{-5}$ %! If the evaluation distance is down to $10^{-12}$ units away, stress errors are down to $\sim 10^{-9}$ %! This gives us an opportunity to be lazy and just stick with far-field kernels, no?
Another interesting feature - the far-field kernel and coincident kernel match for $\sigma_{xy}$, i.e., I can evaluate the far-field kernel when the distance from the fault is 0! $\sigma_{yy}$ is trivially 0, so that's boring. $\sigma_{xx}$ gets set to 0 in the far-field kernel when the distance from the fault is 0.
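A quick numerical check of that zero crossing, for half-length $a = 1$ (using the basis function above):

```python
a = 1.0  # element half-length

def f_ab2(x):
    return (1 - 3 * x / (2 * a)) * (1 + 3 * x / (2 * a))

print(f_ab2(2 * a / 3), f_ab2(-2 * a / 3))  # ~0 up to floating-point roundoff
```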
> You were right! The zero crossing of the slip basis functions is the answer we were looking for. 2 of the bases have a zero crossing at $x = 0$, while the third one, $f_{\mathrm{ab}, 2}(x') = (1 - \frac{3x'}{2a}) (1 + \frac{3x'}{2a})$, has a zero crossing at $x = \pm 2/3 \approx 0.66667$!
Nice!
> The next thing we need to figure out is how to evaluate the coincident kernels at more than 3 points per node. Is there an appropriate interpolation we can use - it might just be quadratic, no?
No idea but...
> That aside, the far-field kernel converges to the true solution. So that's great news! For a fault of half-length 1 unit, and stress evaluated $10^{-8}$ units away from the fault, the error in stress between coincident and far-field is $\sim 10^{-5}$ %! If the evaluation distance is down to $10^{-12}$ units away, stress errors are down to $\sim 10^{-9}$ %! This gives us an opportunity to be lazy and just stick with far-field kernels, no?

...it feels like we have a pretty good backup plan
I just realized something interesting about the coincident stress kernels!
For purely shear slip on a horizontal interface:
This means that we might not need the coincident kernel at all!
I haven't checked what happens for purely tensile slip - will check and let you know tomorrow.
Here is a figure for slip in a shear sense comparing the coincident kernels with the far-field kernel evaluated at distance = 0 from the fault for all 3 stress components. The quadratic source coefficient is [0,1,0].
In contrast, the kernels for tensile sources are already identical for all 3 components! Here is a similar figure, but for tensile slip with a quadratic source [0,1,0],
and similarly for [1,0,0].
@mallickrishg
I'm writing this to get us organized before this codebase gets too sprawling and we lose the ability to edit meaningfully. Standards that we should work towards:
- Names in `snake_case`.
- Constants in all caps (e.g., `SLIP_CONSTRAINT_TYPE`).
- Everything in the `bemcs` namespace (`kernels` goes away) so that there are never conflicts with namespaces from other packages that we might import. The only downside is that `bemcs.py` becomes one large file. It's after the year 2000, and computers should have no problem with this. Happy to consider arguments about this.
- The `bemcs` package name? It took me all of two seconds to choose this, with the rationale being that it was for "Boundary Element Model Continuous Slip". I'm happy to stick with it (because I don't feel like doing the work to change it!), but I'm open to names with descriptions like this.

Thoughts, and please add more below.