Closed rjfarmer closed 3 years ago
is the one zone burn worth continued support? if not, remove. on the other hand, there are a small number of instances in the literature of adopting the one zone burn as a post-processor for multi-d simulations, as in the current mesa-users thread.
For what it's worth I've been using the one zone burner for some of my science (as a way to do parameter explorations of rho/T space). I could get by without it, but it has been helpful.
-Adam
@adamjermyn do you have some inkling which matrix solver we should be using when the matrix gets large? I assume lapack would still work, but maybe it's not the optimal choice?
Without knowing anything about the structure of the matrix, I think it's hard to beat lapack/gesv. That's the current standard in numpy.
When you don't care as much about accuracy and have really large matrices (think sparse, 30k+ rows), other solvers like cg/bicg/bicgstab become preferable. I don't think we're ever in that limit, though.
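As a rough illustration of that trade-off, here is a scipy sketch comparing the dense LAPACK route against one of the Krylov solvers Adam names. The matrix and size are invented for illustration (a diagonally dominant tridiagonal system, not a real reaction-network Jacobian):

```python
# Sketch: dense LAPACK solve (what numpy.linalg.solve wraps via gesv)
# versus an iterative Krylov solver on a sparse system.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

n = 2000
# invented example: diagonally dominant tridiagonal matrix
A = diags([np.full(n - 1, -1.0), np.full(n, 4.0), np.full(n - 1, -1.0)],
          offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

x_dense = np.linalg.solve(A.toarray(), b)  # dense path: O(n^3) work
x_iter, info = bicgstab(A, b)              # iterative: O(nnz) per iteration
print(info, np.max(np.abs(x_dense - x_iter)))
```

For a well-conditioned system like this the two answers agree to the iterative solver's tolerance; the iterative route only pays off once the matrix is large and sparse enough that forming or factoring the dense array dominates.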
-Adam
the matrix is extremely sparse: a bandwidth of 13, with several off-band elements. like this: torch127_jac.pdf, or here: http://cococubed.asu.edu/code_pages/net_torch.shtml
dense matrix techniques like lapack are a poor choice for such sparse matrices. if one must go sparse, see https://ui.adsabs.harvard.edu/abs/1999ApJS..124..241T/abstract
if one wants a simple matrix solver that beats lapack by avoiding most operations with zero, but saves no memory by working with the matrix in a traditional a(i,j) format, then consider this warhorse:
```fortran
      subroutine leqs(a,b,n,np)
      include 'implno.dek'

! solves a linear system of equations a x = b via gauss jordan elimination
! with no pivoting. a is destroyed on exit. on input b contains the
! right hand side, on exit it is the solution.

! declare
      integer          n,np,n1,i,j,k,l,imax,jj
      double precision a(np,np),b(np),r,c

! for each row
      n1 = n-1
      do i=1,n

! find maximum element in each row
        r = abs(a(i,1))
        do j=2,n
          c = abs(a(i,j))
          if (r.lt.c) r=c
        enddo

! divide that row and the right hand side by the maximum element
        do j=1,n
          a(i,j) = a(i,j)/r
        enddo
        b(i) = b(i)/r
      enddo

! for each column, do the elimination; bail if the element is zero
      do j=1,n1
        l = j + 1
        do i=l,n
          r = -a(i,j)
          if (r.eq.0.0) go to 50
          r = r/a(j,j)
          do k=l,n
            a(i,k) = a(i,k) + r*a(j,k)
          enddo
          b(i) = b(i) + r*b(j)
 50       continue
        enddo
      enddo

! the matrix is now in triangular form; back substitute
      b(n) = b(n)/a(n,n)
      do l=1,n1
        i = n-l
        r = 0.0d0
        imax = i + 1
        do j=imax,n
          jj = i + n + 1 - j
          r = r + a(i,jj)*b(jj)
        enddo
        b(i) = (b(i) - r) / a(i,i)
      enddo
      return
      end
```
The inlist_one_zone_burn* in net/test all set
But klu is not an option (anymore?), as we only have lapack and bcyclic_dble. So which one do we want for large nuclear networks? I presume whatever is not lapack, otherwise we'd keep using lapack as we do for the small nets?
Though I assume there are other options in those inlists that are no longer supported.