mallickrishg / bemcs

Continuous slip boundary element models
MIT License

Try integrating Kelvin kernel #3

Open brendanjmeade opened 1 year ago

brendanjmeade commented 1 year ago

@mallickrishg Quadpy integration example shows lots of noise even with 50th order integration (see attached figure). Maybe try tanh-sinh. Maybe too singular?

Screenshot 2023-06-26 at 3 54 25 PM
brendanjmeade commented 1 year ago

Can we just integrate this analytically over a single reference triangle and then map from that triangle to a generic triangle, or break the triangle into regular shapes? Something like

$$ \int_0^1 \int_0^{1-x} f(x,y) dy dx $$

where $f(x, y)$ is one of the 5 Kelvin kernels?
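For a quick numerical sanity check of that reference-triangle integral, something like the scipy sketch below could work; here `f` is a hypothetical $1/r$-like stand-in rather than one of the actual Kelvin kernels.

```python
# Sketch: evaluate int_0^1 int_0^{1-x} f(x, y) dy dx over the unit reference
# triangle with scipy's adaptive quadrature. The integrand is a hypothetical
# 1/r-like stand-in, not one of the actual Kelvin kernels.
import numpy as np
from scipy.integrate import dblquad

x_obs, y_obs = 2.0, 1.0  # observation point away from the triangle

def f(y, x):
    # dblquad expects the inner variable (y) as the first argument
    return 1.0 / np.hypot(x - x_obs, y - y_obs)

val, err = dblquad(f, 0.0, 1.0, lambda x: 0.0, lambda x: 1.0 - x)
print(val, err)
```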

mallickrishg commented 1 year ago

That sounds like what Valere and Sylvain did for the antiplane bulk deformation source here: https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2018GL078197 They integrated the point source over a rectangle, but in later papers Sylvain also did it for triangles.

brendanjmeade commented 1 year ago

I think that Barbot uses tanh-sinh integration (at least in his code), which isn't quite enough for that case (it's still hypersingular). I haven't done tanh-sinh since grad school, so I'm rusty, and the singularity only appears to be $1/r$ here. Have to try these things out!

mallickrishg commented 1 year ago

Numerical integration aside, do you think Mathematica or Maxima can handle these integrals analytically?

brendanjmeade commented 1 year ago

No idea. Simply have to try! Also, as tricky as they are, it seems like the analytic integration of crazy things has been a bit of a way forward for this problem.

brendanjmeade commented 1 year ago

@mallickrishg An attempt at doing the symbolic integration of one of the Kelvin kernels over a triangle can be found at: https://github.com/brendanjmeade/bemcs/blob/main/kelvin_integrate_analytic_offset.m

It's not correct but it's interesting!

Screenshot 2023-06-27 at 11 32 23 AM
mallickrishg commented 1 year ago

Is it significant that the integral looks continuous on the bottom left of the triangle, but is discontinuous everywhere else?

brendanjmeade commented 1 year ago

I think it means that I don't know how to properly integrate over a triangle!

brendanjmeade commented 1 year ago

This currently integrates something that looks like a triangle, but I'm not confident of it. The reasons I'm not confident include:

  1. This integration approach only looks reasonable for the case where the triangle expands away from the origin. This appears unique to the Kelvin kernel because I can integrate a simple 1/r-like kernel over any triangle.
  2. I have to break up the integration into steps (indefinite y, definite y, indefinite x, definite x). Stringing these all together in Matlab returns a result that still has integration terms in it.
integrated_kelvin_kernel

I think there are three things to do here:

  1. Explore if far-field analytic integration looks like far-field numeric integration.
  2. Check the source of the Kelvin kernels to make sure that they are accurately implemented. Butterfield and Banerjee? Crouch and Starfield?
  3. Try integrating with Mathematica.
brendanjmeade commented 1 year ago

Kelvin kernels that I've been using from Crouch and Starfield:

kelvin_point_source_solution_crouch_and_starfield

mallickrishg commented 1 year ago

@brendanjmeade - I apologize if I keep going on like a broken record, so feel free to ignore this post, but it really does look like Sylvain's 2017 BSSA paper had them using Mathematica to integrate the point source solution to Kelvin's problem. Below is a screenshot of the plane strain kernels that they integrate over a rectangle to get the displacement field. That looks similar to what you posted just above.

How they get the stress field is a little bit odd: outside the strain source, it is just spatial derivatives of the displacement field. Inside the rectangular source, they still take the spatial derivative of displacement, but there is some justification for why they need to subtract out $2\mu\epsilon_{ij}^v$ only within the rectangular source. That said, it seems like they have actually solved the Kelvin problem. In fact, it was this sort of formulation that allowed us to do our power-law viscoelasticity JGR paper last year (although that was in antiplane strain).

Screen Shot 2023-07-06 at 10 14 35 AM
brendanjmeade commented 1 year ago

@mallickrishg It's fantastic to see that they got this analytically! One can have a cogent discussion about different choices one might make with the interior, but the far-field (and adjacent) is what is needed for a BEM solution. I'll try it out tomorrow. Avoiding numerical integration seems really good for this problem. Also, my own attempts have definitely not gone well. The attached figure shows a comparison between the far-field analytic (as I tried it) and far-field numeric (summing point sources) solutions. Obviously, they look nothing like each other.
‎analytic_far_field

brendanjmeade commented 1 year ago

@mallickrishg The Barbot (2017) analytic solution that you shared appears to be the volume response to a point source. This is what the Kelvin solution is intended to be. I've now compared this with the version of the Kelvin solution given in Crouch and Starfield (1983). The solutions are broadly similar, but there are qualitative differences. You can find this in the notebook: https://github.com/brendanjmeade/bemcs/blob/main/point_source_experiments.ipynb

I think more work needs to be done to have confidence in the point source solution to this problem since we have two sources that seem to disagree with one another. The full vector solutions are available in many textbooks and papers, so I can look at those.

My current perspective is to put this on the back burner. There is so much in front of us with what we have that I think it's better to focus on the non-body-force case for the time being. Once we have that foundation with a solid library and a theory paper, and are moving forward on applications, I'd like to return to the body force problem. Thoughts?

mallickrishg commented 1 year ago

It's weird that the Barbot (2017) and the C&S (1983) point sources actually look different! I have actually used the Barbot (2017) analytically integrated sources to do simple plane strain viscoelastic relaxation tests, and I found that there is a really annoying problem with the full stress interaction kernel, even when I used uniform-sized squares to mesh the domain. The deviatoric stress kernel (that is appropriate for viscous flow) has positive eigenvalues for most cases; this implies that by using this kernel in any ODE solver we can never converge to a steady-state solution, since the positive eigenvalues $\lambda_+$ will lead to a blow-up of the form $\exp(\lambda_+ t)$. Thankfully, this issue did not occur when we did our 2-d antiplane viscoelastic simulations. So even if we used the Barbot (2017) volumetric Green's functions, I just wanted to let you know that there is this issue. I fully support your suggestion to put this on the back burner for now, and revisit it once we have built BEMcs.
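A minimal sketch of that blow-up argument, with a hypothetical 2x2 "interaction matrix" standing in for the actual deviatoric stress kernel:

```python
# Sketch: a positive eigenvalue of K makes the linear ODE d(sigma)/dt = K sigma
# grow like exp(lambda_plus * t) instead of relaxing toward steady state.
# K here is a hypothetical 2x2 matrix, not the actual stress interaction kernel.
import numpy as np
from scipy.linalg import expm

K = np.array([[0.3, 0.1],
              [0.2, -0.8]])          # eigenvalues ~ +0.32 and -0.82
lam_plus = np.linalg.eigvals(K).real.max()

sigma0 = np.array([1.0, 1.0])
t = 20.0
sigma_t = expm(K * t) @ sigma0       # exact solution of the linear ODE

print(lam_plus, np.linalg.norm(sigma_t))  # norm has grown by roughly exp(lam_plus * t)
```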

brendanjmeade commented 1 year ago

I think real progress here will be going back to the basic sources, writing out the vector equations, carefully converting them to Cartesian and then trying to integrate analytically. Great project for another time!

mallickrishg commented 1 year ago

I know this is on the back burner, but I think I realized the reason the Barbot et al. and C&S kernels are different. The Barbot kernels are for a half-space, while the C&S kernels are for a full space. Just putting this point here for when we revisit this issue.

brendanjmeade commented 1 year ago

That could explain it. I'm not sure because I only coded up the $G_{22} = u_x$ displacements from the Barbot paper. I don't think they give the stresses like C&S. We want the full space anyway for the BEM solution. Thanks for bringing up this idea!

brendanjmeade commented 1 year ago

@mallickrishg

Following up on your email from the other day, I wanted to get concrete about the numerical integration of Kelvin sources. This is implemented in a Matlab script that does both analytic and numerical integration. The central idea is to integrate a Kelvin point source along a line over the interval $-1$ to $1$. This is a simpler case than the integration over a triangle and, thus, probably a better place to start. For this problem, I only do the solutions for $u_x$ (ux) and $\sigma_{xx}$ (sxx). Integration of the Kelvin kernel is done in three ways:

  1. Analytic. Remarkably, this integrated analytically with the Matlab symbolic toolbox.
  2. Uniformly spaced, equal-weight numerical integration: 10 equally spaced nodes with equal weights.
  3. Gauss-Legendre numerical integration. 10 nodes and weights taken from: https://pomax.github.io/bezierinfo/legendre-gauss.html#n10

The figures below show the integration results and residuals (analytic vs. numeric). The residuals are shown as %error. The far-field solutions look similar. On the other hand, the near-field agreement is horrible, as expected. Residuals are quite large...very, very large...like over 500% for the stresses. The residuals are smaller for the displacements (as they should be because the singularity is weaker), but both are poor. Gauss-Legendre integration is a bit better than uniform numerical integration, but it's still in the "doesn't work" category.
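For reference, a minimal Python sketch of approaches 2 and 3; the kernel below is a log-singular stand-in rather than the actual Kelvin $u_x$ expression, and scipy's adaptive quadrature serves as the reference value.

```python
# Sketch: compare equally spaced equal-weight quadrature with Gauss-Legendre
# quadrature for a line integral over [-1, 1]. The kernel is a hypothetical
# log-singular stand-in, not the actual Kelvin u_x expression, and scipy's
# adaptive quadrature is used as the reference value.
import numpy as np
from scipy.integrate import quad

def kernel(s, x0, y0):
    return np.log(np.hypot(x0 - s, y0))

def equal_weight(x0, y0, n=10):
    s = np.linspace(-1.0, 1.0, n)
    return (2.0 / n) * np.sum(kernel(s, x0, y0))

def gauss_legendre(x0, y0, n=10):
    s, w = np.polynomial.legendre.leggauss(n)
    return np.sum(w * kernel(s, x0, y0))

x0, y0 = 0.3, 0.05  # observation point close to the source line
ref, _ = quad(kernel, -1.0, 1.0, args=(x0, y0), points=[x0])
print(equal_weight(x0, y0), gauss_legendre(x0, y0), ref)
```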

x displacements ($u_x$, ux)

kelvin_line_ux

xx stress component ($\sigma_{xx}$, sxx)

kelvin_line_sxx
mallickrishg commented 1 year ago

Thanks for doing this, Brendan. I am quite surprised by some of the results over the past couple of days:

  1. These results showing that GL quadrature does not agree well with the analytical integration! Perhaps on a related note, I notice that the units or amplitudes are really different between the different integration techniques (even in the far-field), and so I'm not sure we can interpret the absolute values of the residuals.
  2. The discrepancy in $\sigma_{xy}$ for constant, linear and quadratic elements. I was really surprised that the constant slip elements outperform linear elements. However, after thinking about this, the discrepancy might be explained by the following: linear elements will have a constant error in approximating a quadratic polynomial, while the error in approximating a quadratic with constant elements depends on the local curvature of slip. The quadratic polynomial in that notebook is flat near the center and has the largest gradients at the lateral edges. Maybe that's the explanation? Note that the residuals plotted below are just $\sigma_{ij}$(quadratic) $- \sigma_{ij}$(constant or linear) with $\mu = 1, \nu = 0.25$. I have not taken $\log$ of any quantities.

Screen Shot 2023-07-20 at 9 39 08 AM
brendanjmeade commented 1 year ago

These results showing that GL quadrature does not agree well with the analytical integration! Perhaps on a related note, I notice that the units or amplitudes are really different between the different integration techniques (even in the far-field), and so I'm not sure we can interpret the absolute values of the residuals.

I definitely hand-tuned and normalized the integrals a bit to get them close. There are proper weightings, but I haven't implemented them here; that is something I should correct. The important thing is that this documents that Gaussian integration simply performs horribly for these elasticity-type problems. That's why there's been 40+ years of trying to do better in the applied math community. The earth science community seems not to have noticed that these calculations are so incorrect. That's why I think what we're working on is so interesting and important. Singularity-free means, for the first time, numerically/mechanically/algebraically meaningful models.

brendanjmeade commented 1 year ago

Regarding 2, that's a super interesting explanation. Very clever. Some additional experiments with different quadratic models might provide additional "data" for this discussion. One that is smooth near at least one edge would be a good one to explore.

mallickrishg commented 1 year ago

I did what you suggested, and I think it is quite clear that linear slip elements outperform constant slip elements (for $\sigma_{xy}$) when the curvature is larger than some threshold value. Below are 2 cases where the curvature in the quadratic is at opposite ends of the domain, and you can see that is where the linear elements do better than constant slip, or rather where the constant slip elements do poorly.

Screen Shot 2023-07-20 at 12 00 32 PM Screen Shot 2023-07-20 at 12 00 49 PM Screen Shot 2023-07-20 at 12 01 25 PM Screen Shot 2023-07-20 at 12 01 39 PM

mallickrishg commented 1 year ago

Another reason for the discrepancy I think comes from how the slip values are approximated for linear and constant slip elements. You parameterized slip for the constant slip elements using potency conservation while an alternate approach would be to simply evaluate the quadratic at the element center. I think if we did it the other way, we would notice a much larger difference between constant and linear elements. I personally think your implementation is more sensible, but I am looking for explanations for this otherwise non-intuitive result - why are constant slip elements approximating $\sigma_{xy}$ so well?

In the case currently implemented, potency for a given patch using constant slip or linear slip is $0.5(x_2-x_1)((ax_1^2 + bx_1 + c)+(ax_2^2 + bx_2 + c))$, while in the other approach potency from the linear slip elements would be unchanged but the constant slip potency would be $(x_2-x_1)(a((x_1+x_2)/2)^2 + b((x_1+x_2)/2) + c)$.
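A tiny sketch of the two choices for a single patch, with hypothetical values of $a$, $b$, $c$, $x_1$, $x_2$:

```python
# Sketch: two ways of assigning a constant slip value to a patch [x1, x2] for
# quadratic slip s(x) = a x^2 + b x + c (hypothetical coefficients below).
a, b, c = 1.0, 0.5, 0.2
x1, x2 = -0.2, 0.1

def s(x):
    return a * x**2 + b * x + c

# current implementation: average of the endpoint values, which matches the
# potency of the linear-slip element on the same patch (trapezoid rule)
constant_avg = 0.5 * (s(x1) + s(x2))

# alternative: evaluate the quadratic at the element center
constant_mid = s(0.5 * (x1 + x2))

# potency (integral of slip over the patch) implied by each choice
print((x2 - x1) * constant_avg, (x2 - x1) * constant_mid)
```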

brendanjmeade commented 1 year ago

Another reason for the discrepancy I think comes from how the slip values are approximated for linear and constant slip elements. You parameterized slip for the constant slip elements using potency conservation while an alternate approach would be to simply evaluate the quadratic at the element center.

I believe my intent was actually to evaluate the quadratic at the element center, and if I did something else it might be by mistake?

I think if we did it the other way, we would notice a much larger difference between constant and linear elements. I personally think your implementation is more sensible, but I am looking for explanations for this otherwise non-intuitive result - why are constant slip elements approximating $\sigma_{xy}$ so well?

This is certainly interesting and unexpected. I'm wondering if it's curvature or that it happens to be at the edge of an element where slip stops suddenly? I feel like there are a ton of experiments that we could do to provide insight here. The 0th order outcome is that both are in error because of slip discontinuities at element edges, but there's something first order that we're missing in terms of an explanation for constant vs. linear. Perhaps the result would be different if it were the average shear stress over an element? I feel like I'm grasping for an explanation.

In the case currently implemented, potency for a given patch using constant slip or linear slip is $0.5(x_2-x_1)((ax_1^2 + bx_1 + c)+(ax_2^2 + bx_2 + c))$, while in the other approach potency from the linear slip elements would be unchanged but the constant slip potency would be $(x_2-x_1)(a((x_1+x_2)/2)^2 + b((x_1+x_2)/2) + c)$.

Is this what you did above? I really thought I had just done the slip at the centroids!

mallickrishg commented 1 year ago

The current implementation is `constant_y[i] = 0.5 * (y_quadratic_low_resolution[i] + y_quadratic_low_resolution[i + 1])`, while the other version of this could have been `constant_y[i] = evaluate_quadratic(0.5*(x[i]+x[i+1]))`.

I think if we did it the other way, we would notice a much larger difference between constant and linear elements. I personally think your implementation is more sensible, but I am looking for explanations for this otherwise non-intuitive result - why are constant slip elements approximating σxy so well?

This is certainly interesting and unexpected. I'm wondering if it's curvature or that it happens to be at the edge of an element where slip stops suddenly? I feel like there are a ton of experiments that we could do to provide insight here. The 0th order outcome is that both are in error because of slip discontinuities at element edges, but there's something first order that we're missing in terms of an explanation for constant vs. linear. Perhaps the result would be different if it were the average shear stress over an element? I feel like I'm grasping for an explanation.

Same here! A clear explanation is just beyond reach. But it is so fascinating that this behaviour exists.

mallickrishg commented 1 year ago

The current implementation is `constant_y[i] = 0.5 * (y_quadratic_low_resolution[i] + y_quadratic_low_resolution[i + 1])`, while the other version of this could have been `constant_y[i] = evaluate_quadratic(0.5*(x[i]+x[i+1]))`.

Never mind! This is not the issue. The difference between the 2 methods is in the third decimal place - totally insignificant. The more fundamental problem still remains - it has something to do with gradients or curvature of slip.

mallickrishg commented 1 year ago

@brendanjmeade I finally spent the time learning how to do Gauss-Legendre integration, and it seems like using higher-order quadrature approximates the solution, at least for a line source, extremely well! Even with 15th-order integration, the residuals are under 5% even at the source!

Screen Shot 2023-07-22 at 12 45 54 PM Screen Shot 2023-07-22 at 12 46 15 PM
mallickrishg commented 1 year ago

You can access the script to do those calculations here: https://github.com/brendanjmeade/bemcs/blob/main/point_source/kelvin_integrate_analytic_offset_line_RM.m

brendanjmeade commented 1 year ago

Thanks for developing this! This is an awesome experiment, and I think it highlights how good a GL-style solution is for the far field. The challenge here, as with all singular integrals that we need for BEM problems, is not the far-field, but the coincident (on-fault) part. The reason for this is that we need to evaluate the displacements and tractions (stresses) at the boundaries. The thing to plot here is stresses and residuals exactly on the fault. That's where the critical unresolvable errors will be seen most clearly.

I just tried to do this but for reasons I don't understand Matlab is failing to compute these integrals even in the old notebooks...whaaa.

Screenshot 2023-07-22 at 10 31 50 PM
mallickrishg commented 1 year ago

Thanks for developing this! This is an awesome experiment, and I think it highlights how good a GL-style solution is for the far field. The challenge here, as with all singular integrals that we need for BEM problems, is not the far-field, but the coincident (on-fault) part. The reason for this is that we need to evaluate the displacements and tractions (stresses) at the boundaries. The thing to plot here is stresses and residuals exactly on the fault. That's where the critical unresolvable errors will be seen most clearly.

Thank you so much for patiently explaining these things to me. I have heard you say this multiple times, but it didn't really sink in until I actually implemented it for myself. Now I really get it!

Below are plots of $u_x$ evaluated exactly on the fault for $N=11$ and $N=39$, and I can see the residuals shrinking as $N$ increases, but it's always there! By the way, this line-integrated source is essentially a constant slip element, right? Or is it a constant traction element?

Screen Shot 2023-07-23 at 12 18 16 PM

I just tried to do this but for reasons I don't understand Matlab is failing to compute these integrals even in the old notebooks...whaaa.

Regarding `int(ux,x0,[-1 1])`, it never actually returned an expression for me, but I could evaluate the expression just fine. With indefinite integrals I know we can get an actual expression, but for this case the definite integrals don't seem to write out a symbolic expression.

mallickrishg commented 1 year ago

I tried integrating $\sigma_{xx}$ analytically, and MATLAB claimed it was able to do it. However, when I evaluate stress ON the line source ($-1 \le x \le 1$, $y = 0$) it just produces large, noisy results. My takeaway is we either need Mathematica to do this problem OR just never solve for stress directly. Perhaps we should only solve for displacements and then use some high-order finite difference or something like that to compute displacement gradients and stress?
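A rough sketch of that displacement-gradient idea; the displacement field below is a placeholder (not the integrated Kelvin solution), and plane-strain moduli are assumed:

```python
# Sketch: compute stresses from a gridded displacement field via central finite
# differences and plane-strain Hooke's law. The displacement field here is a
# placeholder for illustration, not the integrated Kelvin solution.
import numpy as np

mu, nu = 1.0, 0.25
lam = 2.0 * mu * nu / (1.0 - 2.0 * nu)

x = np.linspace(-2.0, 2.0, 201)
y = np.linspace(-2.0, 2.0, 201)
X, Y = np.meshgrid(x, y, indexing="ij")

ux = np.log(np.hypot(X, Y) + 1e-3)          # placeholder displacement field
uy = X * Y / (np.hypot(X, Y) ** 2 + 1e-3)

dux_dx, dux_dy = np.gradient(ux, x, y)
duy_dx, duy_dy = np.gradient(uy, x, y)

exx, eyy = dux_dx, duy_dy
exy = 0.5 * (dux_dy + duy_dx)

sxx = lam * (exx + eyy) + 2.0 * mu * exx
syy = lam * (exx + eyy) + 2.0 * mu * eyy
sxy = 2.0 * mu * exy
print(sxx.shape, syy.shape, sxy.shape)
```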

brendanjmeade commented 1 year ago

Below are plots of $u_x$ evaluated exactly on the fault for $N=11$ and $N=39$, and I can see the residuals shrinking as $N$ increases, but it's always there!

This is one of the most informative figures I've ever seen! So good, and it makes the point. I wish numerics people would do stuff like this, but they rarely do.

By the way, this line-integrated source is essentially a constant slip element, right? Or is it a constant traction element?

This point source is a force, in this case in the $x$ direction. The solution was initially stated by Kelvin (c. 1848?), then stated more clearly in Love's elasticity book (1920s-1940s editions), and finally written out nicely in Cartesian coordinates by Crouch and Starfield (1983). This is an important kernel because if it can be integrated over a triangle then we can do arbitrary body forces, which means that any non-linear problem (e.g., viscoelasticity) can be solved via effective body forces. That's the theory at least! For problems like plasticity there is still the loss of convexity, which it is unclear how to deal with, but there's a long road before we get there.
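For reference, one commonly quoted form of the plane-strain Kelvin displacement kernel for a point force $F_j$ at the origin (written from memory, so the normalization and sign conventions should be checked against Crouch and Starfield) is

$$ u_i = \frac{F_j}{8\pi\mu(1-\nu)} \left[ (3-4\nu)\,\delta_{ij}\,\ln\frac{1}{r} + \frac{x_i x_j}{r^2} \right], \qquad r^2 = x^2 + y^2, $$

up to an arbitrary additive constant in the displacements; the $\ln(1/r)$ term is also why the 2D displacements grow logarithmically rather than decaying at large distance.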

I tried integrating $\sigma_{xx}$ analytically, and MATLAB claimed it was able to do it. However, when I evaluate stress ON the line source ($-1 \le x \le 1$, $y = 0$) it just produces large, noisy results.

I think I know what's going on here. I think the expression is integrated correctly. However, it still has artificial singularities and zeros at $y=0$. These are very likely not real and can be eliminated by algebraic simplification. I had the exact same problem with the coincident (on-fault) solution for the quadratic slip kernels. I got the integrals with Maxima and then had to do tons of simplification and zero hunting with regular expressions to eliminate terms that are artificial. Believe it or not, as hacky as this sounds, it's got a real mathematical name: Hadamard regularization. This was very challenging to do for the quadratic kernels but it eventually worked. See below for a followup comment.

My takeaway is we either need Mathematica to do this problem OR just never solve for stress directly. Perhaps we should only solve for displacements and then use some high-order finite difference or something like that to compute displacement gradients and stress?

As I described above, this problem is solvable and I've done something similar before. I'm hoping Mathematica is able to give simplified expressions more easily because it seems to be considered the most powerful symbolic engine. I'll have to get busy learning it.

One of the things that's great about boundary elements is that it's a genuinely interesting problem. People think it's trivial because it's 2023 and we're talking about a problem in linear elasticity, but it involves a vast array of interesting mathematics and philosophical ideas. It's hard and so worth working on!

mallickrishg commented 1 year ago

Thanks for the great explanations Brendan. Good to know this problem is solvable!

When I have time I might end up attempting tanh-sinh quadrature for this integral to see how much better it performs compared to GL. Oh, also need to figure out how to integrate over a 2-d shape as opposed to a line.

brendanjmeade commented 1 year ago

Thanks for the great explanations Brendan. Good to know this problem is solvable!

I'm not sure that it's solvable but I've seen something similar before and my bet is that progress can be made.

When I have time I might end up attempting tanh-sinh quadrature for this integral to see how much better it performs compared to GL. Oh, also need to figure out how to integrate over a 2-d shape as opposed to a line.

I haven't tried tanh-sinh since undergrad...and I definitely didn't understand it then. I never got how to get the value near the expansion point. For now, I'm pinning my hopes on Mathematica!

mallickrishg commented 1 year ago

Tanh-sinh is not very satisfying. The Wikipedia page has statements from papers like "The Tanh-Sinh quadrature scheme is the fastest currently known high-precision quadrature scheme, particularly when one counts the time for computing abscissas and weights. It has been successfully employed for quadrature calculations of up to 20,000-digit precision."

However, I found that for the same number of evaluations as Gauss-Legendre, it gives similar results. I'm pinning my hopes on you pinning your hopes on Mathematica.
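For reference, a minimal tanh-sinh quadrature sketch over $[-1, 1]$ (not the script used for the figure below; the integrand is a hypothetical log-singular stand-in rather than the actual Kelvin kernel):

```python
# Sketch: tanh-sinh quadrature on [-1, 1] built from the standard substitution
# x = tanh((pi/2) sinh(t)). The integrand is a hypothetical log-singular
# stand-in, not the actual Kelvin expression.
import numpy as np

def tanh_sinh_nodes(h=0.1, kmax=40):
    t = np.arange(-kmax, kmax + 1) * h
    x = np.tanh(0.5 * np.pi * np.sinh(t))
    w = h * 0.5 * np.pi * np.cosh(t) / np.cosh(0.5 * np.pi * np.sinh(t)) ** 2
    return x, w

def integrate_ts(f, h=0.1, kmax=40):
    x, w = tanh_sinh_nodes(h, kmax)
    return np.sum(w * f(x))

x0, y0 = 0.3, 0.05  # observation point close to the source line
f = lambda s: np.log(np.hypot(x0 - s, y0))
print(integrate_ts(f))
```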

Screen Shot 2023-07-24 at 3 47 50 PM
brendanjmeade commented 1 year ago

Nice experiment! I'm betting that none of these are going to work.

In related news, I spent the afternoon working on doing the line integrals with Mathematica. The good news is that I now know how to do some symbolic integration with Mathematica. The bad news is that it's no better, and seemingly worse, than Matlab. When I try to integrate over the interval [-1, 1] it computes a nice and relatively small integral...that is only valid outside of [-1, 1]. I've tried evaluating it numerically and it's divergent inside this range. Strangely, it can't compute the integral for the $u_x$ case but it can for the $\sigma_{xx}$ case.

If we really want to see if the Kelvin kernel can be integrated, then my current bet is that working with the kernels in Matlab (or maybe Maxima) and then doing the finite-part calculation is the way to go. This is tricky on its own, but there's a secret hacky way to figure out which terms to regularize (eliminate). That trick is to compare the solutions (as we regularize individual terms) with numeric solutions. The numeric ones will of course have the near-singular spikes, but the longer-wavelength shape will provide good guidance.

It would be super cool if we could get this analytically but I'm going to prioritize the non-body force stuff for a bit.

mallickrishg commented 1 year ago

In related news, I spent the afternoon working on doing the line integrals with Mathematica. The good news is that I now know how to do some symbolic integration with Mathematica. The bad news is that it's no better, and seemingly worse, than Matlab. When I try to integrate over the interval [-1, 1] it computes a nice and relatively small integral...that is only valid outside of [-1, 1]. I've tried evaluating it numerically and it's divergent inside this range.

So weird! These math problems behave so strangely sometimes. I found that Matlab has functions called `integral` and `integral2` designed to handle singular integrals with adaptive quadrature. They are surprisingly good! I need to figure out how to do this in 2-d now. Every day with these Kelvin kernels is like a see-saw with my emotions. I keep wanting to abandon it, but I keep getting pulled back with these oases of hope.
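A scipy analogue of that 2-D experiment might look like the sketch below; the rectangle dimensions and the log-singular stand-in kernel are hypothetical, not the bemcs Kelvin expressions.

```python
# Sketch: adaptive 2-D quadrature (scipy's dblquad, analogous to Matlab's
# integral2) of a hypothetical log-singular stand-in kernel over the rectangle
# [-1, 1] x [-0.1, 0.1], evaluated at a point inside the source where the
# integrand is nearly singular.
import numpy as np
from scipy.integrate import dblquad

def kernel(sy, sx, x0, y0):
    # dblquad passes the inner variable (sy) as the first argument
    return np.log(np.hypot(x0 - sx, y0 - sy))

x0, y0 = 0.3, 0.05  # observation point inside the rectangular source
val, err = dblquad(kernel, -1.0, 1.0, lambda sx: -0.1, lambda sx: 0.1,
                   args=(x0, y0))
print(val, err)
```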

Screen Shot 2023-07-26 at 2 29 43 PM
mallickrishg commented 1 year ago

Great news! Looks like we can use Matlab to numerically integrate even 2-d Kelvin sources! I feel like patting ourselves on the back!

Attached below is a figure of displacement fields for Kelvin sources that are rectangular. The source strength is normalized by the area of the source. The sources have a fixed dimension in $x$, but their $y$ dimension varies from small to big. For the smallest source, we basically recover our line source solutions!

Screen Shot 2023-07-26 at 3 19 12 PM

mallickrishg commented 1 year ago

I am also just now realizing that the displacement fields do not decay away to 0 far away from the source, unlike solutions for dislocation sources. What does that mean? Do we need to add a restoring force somewhere in the domain to balance this finite source of force?

Never mind, I just read the chapter that talks about this in Crouch & Starfield.

brendanjmeade commented 1 year ago

10/10 did not expect to see this. I'm totally amazed. I've read the paper with this algorithm and I totally don't understand it. However, I really like the way the paper is written. It's not a typical numerics proof of convergence; rather, it's a description of a sort of hard-learned, engineered solution.

Can I ask a quick question: for the line integral case, can you plot the residuals for just the adaptive quadrature case (dashed green line)? I want to see what the magnitudes of these residuals are. 1e-1, 1e-4, machine precision? Are there still spikes at all?

Wow simply wow!

brendanjmeade commented 1 year ago

I am also just now realizing that the displacement fields do not decay away to 0 far away from the source, unlike solutions for dislocation sources. What does that mean? Do we need to add a restoring force somewhere in the domain to balance this finite source of force?

Never mind, I just read the chapter that talks about this in Crouch & Starfield.

Totally strange! As others have noted, it's been argued to be associated with the idea that the point source is actually an infinitely long (out-of-plane) source, and it is not something to correct for. Honestly, I have no serious physical understanding of this. It's a peculiarity of the 2D Kelvin solution that doesn't exist in 3D.

mallickrishg commented 1 year ago

Can I ask a quick question: for the line integral case, can you plot the residuals for just the adaptive quadrature case (dashed green line)? I want to see what the magnitudes of these residuals are. 1e-1, 1e-4, machine precision? Are there still spikes at all?

Errors are at machine precision level

Screen Shot 2023-07-26 at 7 49 46 PM
brendanjmeade commented 1 year ago

Leaving some notes:

mallickrishg commented 1 year ago

It did quite terribly when I tried to integrate $\sigma_{xx}$. I got a bunch of warning messages that `integral2` had exceeded the default error tolerance and that I should increase the number of permitted iterations.

mallickrishg commented 1 year ago

You can see how the errors in $\sigma_{xy}$ start getting really bad as we approach the evaluation exactly on the line source itself. Looks like the $1/r$ integration needs to be done by Mathematica or by hand.

Screen Shot 2023-07-26 at 8 32 09 PM Screen Shot 2023-07-26 at 8 32 31 PM Screen Shot 2023-07-26 at 8 32 51 PM
brendanjmeade commented 1 year ago

That makes sense. For this non-hypersingular case, there is a huge array of numerical methods from the BEM world that we could try, but that's also the problem: there are so many, and they are usually hard to implement in a general way. I'll think more.

mallickrishg commented 1 year ago

Our MATLAB results are identical to the C&S results, up to a constant offset in $u_x, u_y$.

Screen Shot 2023-07-27 at 2 45 27 PM
mallickrishg commented 1 year ago

I also sat and fiddled with Mathematica and I get the same result, for $U_x$ at least. The learning curve with Mathematica is super steep. I barely understand any of the functions.

Screen Shot 2023-07-27 at 5 42 17 PM

brendanjmeade commented 1 year ago

So nice! This is deeper Mathematica than I have. Where/how did you learn this type of substitution and notation? Can you get stresses? If so, it might be worth trying the triangle integration?