SPECFEM / specfem3d

SPECFEM3D_Cartesian simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra (structured & unstructured).
https://specfem.org
GNU General Public License v3.0

Possible critical bug in the xsmooth_sem utility #621

Closed EtienneBachmann closed 7 years ago

EtienneBachmann commented 9 years ago

Hello everybody, I am currently working on translating this routine for the 2D version, and I might have found a bug in this 3D version. I think that the valence of a GLL point is not taken into account in this smoothing process. Looking at the code:

! loop over elements to be smoothed in the current slice
do ispec = 1, nspec_AB

  ! --- only double loop over the elements in the search radius ---
  do ispec2 = 1, nspec_N

    ! calculates horizontal and vertical distance between two element centers
    call get_distance_vec(dist_h,dist_v,cx0(ispec),cy0(ispec),cz0(ispec),&
                          cx(ispec2),cy(ispec2),cz(ispec2))

    ! checks distance between centers of elements
    if (dist_h > sigma_h3 .or. dist_v > sigma_v3) cycle

    ! loop over GLL points of the elements in current slice (ispec)
    do k = 1, NGLLZ
      do j = 1, NGLLY
        do i = 1, NGLLX

          ..............

          ! adds contribution of element ispec2 to smoothed kernel values
          tk(i,j,k,ispec) = tk(i,j,k,ispec) + sum(exp_val(:,:,:) * dat(:,:,:,ispec2))

          ! normalization, integrated values of gaussian smoothing function
          bk(i,j,k,ispec) = bk(i,j,k,ispec) + sum(exp_val(:,:,:))

        enddo
      enddo
    enddo ! i,j,k

  enddo ! ispec2
enddo ! ispec

then :

dat_smooth(i,j,k,ispec) = tk(i,j,k,ispec) / bk(i,j,k,ispec)

We can see that tk(i,j,k,ispec) receives a contribution from each spectral element of the second loop (do ispec2 = 1, nspec_N). Proceeding like this, we do not take into account the fact that some points, located on the border between several spectral elements, are counted twice, three times or more. It is also important to take into account that for such a GLL point, located at (i,j,k,ispec) and (i',j',k',ispec'), the data dat(i,j,k,ispec) and dat(i',j',k',ispec') are exactly the same value, and thus do not represent a different contribution coming from each spectral element. Even though a normalization is done ( dat_smooth(i,j,k,ispec) = tk(i,j,k,ispec) / bk(i,j,k,ispec) ), it does not correct the problem. It is like computing y = (a+2b+c)/4 when you want y = (a+b+c)/3.
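[Editor's note] The double counting can be made concrete with a minimal numerical sketch in Python (hypothetical one-dimensional "elements"; none of these names come from the SPECFEM source): two adjacent elements share their boundary point b, so a per-element accumulation with equal weights counts b twice.

```python
# Hypothetical 1D illustration of the valence issue described above.
# Element 1 holds values [a, b], element 2 holds [b, c]: the boundary
# point b is stored twice, with exactly the same value in both elements.

def smooth_naive(elements, w=1.0):
    """Accumulate contributions element by element, as the loop above does."""
    tk = sum(w * v for elem in elements for v in elem)   # weighted sum
    bk = sum(w for elem in elements for _ in elem)       # sum of weights
    return tk / bk

a, b, c = 1.0, 10.0, 1.0
print(smooth_naive([[a, b], [b, c]]))  # (a + 2b + c) / 4 = 5.5
print((a + b + c) / 3)                 # valence-corrected value: 4.0
```

With equal weights the naive accumulation returns 5.5 rather than the 4.0 a valence-aware average would give, exactly the y = (a+2b+c)/4 versus y = (a+b+c)/3 discrepancy described above.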

I hope my explanation makes sense; don't hesitate to ask me if not. In any case I'd be glad to have another point of view. Thanks a lot,

Etienne

danielpeter commented 9 years ago

hi Etienne,

although point locations can be shared among different elements at the borders, the values dat(i,j,k,ispec) and dat(i',j',k',ispec') can be different. this is the case for all material properties at some internal discontinuities, when the internal surface is honored by the element boundaries (the GLL basis allows mapping piecewise-smooth functions).

the smoothing here does: y = [ (w1 * a + w2 * b) + (w2 * c + w3 * d) ] / (w1 + w2 + w2 + w3)

where (a,b) and (c,d) are e.g. material values in element 1 and element 2 respectively. the weights w1,..w3 are determined by a gaussian function, based on the distance to the reference (GLL) point location. in this example, the points b and c share the same location, thus the weight assigned is the same.

the question with this sort of smoothing is rather whether (1) the weights should be calculated based on point distances only, or (2) the volume associated with a single GLL point should be taken into account as well. in the latter case, the smoothing would take the volume integral over the gaussian and lead to a different result. for now, the code implements option (1), but you're free to experiment with the second option and contribute it here.
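[Editor's note] Option (1) can be sketched as follows in Python: each neighbouring value is weighted by a Gaussian of its horizontal and vertical distance to the reference point. The function names and the exact normalisation here are illustrative assumptions, not taken from the SPECFEM source; only the separate horizontal/vertical widths mirror the dist_h/dist_v structure of the code above.

```python
import math

def gaussian_weight(dist_h, dist_v, sigma_h, sigma_v):
    # separate horizontal and vertical widths, as in the smoothing code
    return math.exp(-(dist_h / sigma_h) ** 2 - (dist_v / sigma_v) ** 2)

def smooth_point(values, dists, sigma_h, sigma_v):
    """Distance-weighted average of neighbouring values (option 1)."""
    w = [gaussian_weight(dh, dv, sigma_h, sigma_v) for dh, dv in dists]
    return sum(wi * x for wi, x in zip(w, values)) / sum(w)

# a value at distance 0 dominates; a value 3 sigma away barely contributes,
# so the result stays close to 1.0:
print(smooth_point([1.0, 5.0], [(0.0, 0.0), (3.0, 0.0)], 1.0, 1.0))
```

Because the weights depend on distance only, two coincident points always receive the same weight, which is what makes the valence question above relevant.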

best wishes, daniel

EtienneBachmann commented 9 years ago

Hi Daniel, Thanks a lot for your answer. You are right, this utility can be used for different purposes, such as smoothing an initial model or computed kernels. Nevertheless, don't you think that the current smoothing: y = [ (w1 * a + w2 * b) + (w2 * c + w3 * d) ] / (w1 + w2 + w2 + w3) should be: y = [ (w1 * a + (w2/2) * b) + ((w2/2) * c + w3 * d) ] / (w1 + w2 + w3)

With this formula, we take into account the valence of a GLL point. Outside of the specfem code, there is no mass matrix to compensate for the "overweighting" of the GLL points situated on the border of an element. Otherwise, I don't understand why the GLL point with the associated weight w2 should matter twice as much as the other GLL points in the final smoothed value y.

komatits commented 9 years ago

Hi Etienne and Daniel, Hi all,

As mentioned by Etienne in another email, another (probably more problematic) issue is the fact that the current smoothing does not seem to extend to non-adjacent MPI partitions, even though in some cases it should (typically when the size of the Gaussian smoothing cap is big enough to extend over several partitions, some of which might be non-adjacent).

Solving this, as Etienne says, will probably need to involve some (costly?) all-to-all MPI communications.

I am not very familiar with that part of the code because here in our group with Sébastien, Vadim and Yi (cc'ed) we do the smoothing in a different way for now (we project to a regular tomographic grid and then implement a classical finite-difference Laplacian stencil for smoothing, see e.g. http://komatitsch.free.fr/preprints/GJI2_Vadim_2015.pdf ).

Best regards, Dimitri.


Dimitri Komatitsch CNRS Research Director (DR CNRS), Laboratory of Mechanics and Acoustics, UPR 7051, Marseille, France http://komatitsch.free.fr

danielpeter commented 9 years ago

Hi Etienne and Dimitri,

for discontinuous functions in general, the smoothing would integrate over piecewise contributions and divide by the total volume. this is what the original routine is trying to do, although it only approximates the total volume.

obviously, it depends on the partitioning whether this case can occur. for the in-house mesher's partitioning, i.e. the french-fries partitioning scheme, the neighbours are likely to cover most of the smoothing area (unless you choose a very large smoothing length compared to the size of a single partition). for cases where we use SCOTCH to partition the mesh, the slices can become more irregular and this might occur.

if this is a problem, we can extend the smoothing routine to look for all slices which lie within a gaussian width of the slice and include them in the smoothing procedure... just let me know if this is what you would need.

best wishes, daniel

EtienneBachmann commented 9 years ago

Hi,

For the valence, I think that the current smoothing already transforms a discontinuous function into a continuous one. All the points (i,j,k,ispec) associated with the same iglob are smoothed using the same coefficient and the same data.

For the MPI partition "bug", the main problem isn't the MPI communication, but the double loop over spectral elements, which becomes really big. I imagine that in a very large simulation this routine can become computationally very expensive. To draw a parallel with the 2D tools, I am working on a version that builds the valence array of the mesh and would run on GPU. In 2D, for quite large meshes, we have Tsmooth >> Tspecfem. Thus, a solution for the 3D case could be to run it on GPU as well, if the computation time is, or becomes, really significant.

komatits commented 9 years ago

Hi Etienne,

It is true that the double loop over all the spectral elements of the mesh (the whole mesh, not only the local MPI slice) will be expensive. What we often do in that case is add something like:

if (distance is too big) then
  cycle
endif

to quickly exclude elements that are too far away. Computing the distance is expensive as well, but as we discussed last year you can remove the sqrt() calculation from the distance comparison in order to speed up the comparison slightly (by 30% or so).
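[Editor's note] The sqrt-avoiding trick can be sketched as follows (a generic illustration, not the SPECFEM code): compare squared distances against a squared cutoff, so the inner loop never calls sqrt.

```python
# Compare squared distances against a squared radius: the cutoff is
# squared once, and no square root is taken per element pair.

def within_radius(dx, dy, dz, radius):
    return dx * dx + dy * dy + dz * dz <= radius * radius

print(within_radius(3.0, 4.0, 0.0, 5.0))  # True  (distance exactly 5)
print(within_radius(3.0, 4.0, 1.0, 5.0))  # False (distance > 5)
```

The comparison is monotone, so squaring both sides never changes the result for non-negative distances.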

Best, Dimitri.


EtienneBachmann commented 9 years ago

Hi Dimitri,

This criterion is already used in both the 2D and 3D versions, to avoid a loop over GLL points. Nevertheless, a double loop over the spectral elements of the whole domain (not only the current MPI slice) remains. In the 2D case, I already removed the sqrt(), but it is not enough: in a test, an adjoint simulation in specfem2D ran in 30 s, while the smoothing took more than 30 min. Since there are significantly more GLL points per spectral element in 3D (leading to a higher efficiency of the distance criterion), the current Fortran version might be sufficient in 3D.

What I suggest for the 3D case is to remove the sqrt() and loop over all MPI slices instead of the neighbours only. If some users complain about the new computation time, a GPU solution will be available, by adapting a routine I'll post in the next weeks for the 2D case. The only remaining point, in my opinion, is the overweighting of high-valence points, which should be corrected or at least offered as an option.

Best Etienne

komatits commented 8 years ago

Hi Etienne @EtienneBachmann ,

I agree that we should improve that. Let me cc Vadim @vmont , Yi and Sébastien, who are currently also working on better ways of doing the smoothing. Let me also cc Daniel @danielpeter .

Thanks, Dimitri.


komatits commented 8 years ago

From Youyi Ruan:

I just went through Etienne and Daniel's discussion about smoothing. I think my question is related to that topic but not the same. When we smooth the kernel, we assume the valence points shared by neighbouring slices/partitions have the same value; however, that might not be true for our kernel. In other words, the points (i,j,k,ispec) associated with the same iglob at the boundary of partitions/slices may not have the same value.

Please correct me if I am wrong; I think the values at these points need to be assembled (averaged) as we did for displacement after the kernel calculation. If what I reported is true, this can be done within specfem3d_globe or in the post-processing, but I prefer the former since it's much easier.

For the code-version and example issue, I remember we discussed it before but have not come up with a good solution.

To Dimitri: please allow me to explain why we stick with a relatively old version rather than the latest one. We think it is better to use a well-benchmarked version with all the modules/libraries fixed, rather than changing versions along the way, to ensure consistency and to save time on benchmarking. The version we are using was last committed on Jul 1, 2016. In regard to the kernel example, I think it is possible for the 2D code but would be too large for the 3D code. In addition, it's hard to run large adjoint simulations on Tiger for the buildbot test. But we do routinely compare the forward and adjoint results whenever we recompile/update our source code/modules. As an alternative, we might be able to generate a kernel example with a one-chunk simulation for the 3D global code.

To Ebru: for an easy first test, may I suggest just visualizing two neighboring slices of any of your raw kernels at high resolution (output the kernel on GLL points) and seeing if you have the same issue? I will look at my kernels again at different depths. Thanks.

komatits commented 8 years ago

From Etienne @EtienneBachmann

Indeed, the kernel output is given at the element level; I guess this helps to be consistent with the model parameter maps, which are also given at the element level as an input to Specfem. The interest of this element-level representation is precisely the ability to have several values of a parameter for the same GLL point, which enables a better representation of discontinuous models. In such a case, having discontinuous kernels also makes sense, and a global-level representation is not relevant.

I did not receive the raw kernel pdf, but I understood the related concern: if some artifacts appear, it is the kernel post-processing that should be adapted, and not the contrary. Maybe the smoothing utility could be extended with an averaging option. Currently, the smoothing utility accepts a discontinuous model as input, but always outputs a continuous model because of the nature of the smoothing.

komatits commented 8 years ago

From Ebru @ebrubozdag

I have finally found a gradient similar to what Youyi is reporting (Youyi, these are also the gradients I provided you), apparently I do not observe it always for every gradient. You can see the sample raw and smoothed kernels at the “surface” and at “~12 km” attached.

Raw kernel at the surface shows a similar problem which seems to disappear after smoothing and by depth. I have not zoomed in or plotted in high resolution, etc..

However, I am not entirely sure what is plotted by Paraview at the "surface", since topography and ellipticity are included in these plots. So rather than showing the entire loaded files, horizontal cross-sections should be taken, I guess. But that still does not fix the problem, since there is ellipticity in the current mesh (and I believe Youyi also has ellipticity). I am not saying that this is because of ellipticity, but one should also use a spherical mesh to plot the models to do everything properly, as we did for the models in the Global Adjoint paper.

komatits commented 8 years ago

From Youyi Ruan:

In regard to Etienne's answer: I think what I reported here are artifacts along the slices' boundaries. I understand the output of the model and kernel is at the element level to preserve the internal discontinuities. However, the boundaries of partitions/slices are not internal discontinuities; they are a completely subjective choice to distribute the computing work to each MPI processor, i.e. they are different if you change NPROC_XI and NPROC_ETA. When the code outputs the kernel, should the values of these points from different MPI processors be synchronized if they differ due to numerical artifacts?

In the current smoothing code, the smoothing is also done at each GLL point, so if the boundary points shared by different slices (MPI processors) are not synchronized/assembled, the artifacts will be preserved to some extent. I guess these "noises" will then add up with an increasing number of event kernels.

To Ebru's concern: I think these points are shared by different MPI processors, i.e. they have identical coordinates, so topography and ellipticity will not be a concern in this case. However, I am not sure how Paraview handles points which have identical coordinates but different values; maybe it takes the first one it finds. Also, cutting the model at a certain depth in Paraview requires interpolation, which also helps cover up this issue.
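[Editor's note] The assemble/average step suggested here can be sketched in Python; `ibool` plays the role of the local-to-global index map, and the function name is a hypothetical illustration rather than anything in the SPECFEM source.

```python
from collections import defaultdict

def average_shared_points(values, ibool):
    """Replace each duplicated global point by the mean of its copies.

    values[p] is the value at local point p; ibool[p] is its global index.
    Points sharing a global index get the average of their values.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for v, g in zip(values, ibool):
        sums[g] += v
        counts[g] += 1
    return [sums[g] / counts[g] for g in ibool]

# two local points map to the same global point but disagree (2.0 vs 4.0):
print(average_shared_points([2.0, 4.0, 7.0], [0, 0, 1]))  # [3.0, 3.0, 7.0]
```

In the real code the duplicated points also live on different MPI processes, so the sums and counts would be exchanged with the assembly (MPI) routines rather than a local dictionary.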

komatits commented 8 years ago

Could it be the classical potential problem that I mentioned a few months ago (see the attached PDF; slice 1 should talk to slice 3)? If you are smoothing with a Gaussian or any other type of window, you may have cases in which the window extends to non-neighboring MPI slices; this is currently not handled in the smoothing code, i.e. the result will be wrong if this happens.

I mention this because the probability of this happening is much higher near MPI slice boundaries. (attached figure: problematic_case_for_mpi_smoothing)

komatits commented 8 years ago

From Youyi Ruan:

I think what we showed are raw kernels without any smoothing, so it is probably not the problem you mentioned. But this problem might be our next concern when the smoothing length becomes too long and extends beyond the range of the 8 neighboring slices that Daniel implemented in the current code.

komatits commented 8 years ago

From Ebru @ebrubozdag

I just wanted to confirm that this is not a new issue, as I observe the same on a raw event kernel from 2010, which should have been computed by v5.0 (indeed, that should be the first version with a mesh that honours the crust).

Also note that, in my observation, the issue is more pronounced in oceans than continents (it may also be concealed by higher density underneath continents, or may be just my perception, or something else...).

komatits commented 7 years ago

Etienne will work on that (or has he solved it already? If so we should close this Git issue)

@EtienneBachmann

EtienneBachmann commented 7 years ago

Hi Dimitri, Regarding my original problem (valence), the problem is solved: instead of using a mean weighted by gaussian coefficients, the 2D implementation uses the real GLL integration rule. For 3D, it is possible to uncomment lines in the source code to enable it. As for the problems reported by Youyi and Ebru, I haven't worked on them so far. But by now, we can expect that the project of using another smoothing technique (Bessel) will solve that problem, since the heavily tested assemble_MPI() routines would be used.

komatits commented 7 years ago

Hi Etienne,

Thanks a lot! I fully agree. Thus let me close the Git issue.

Before closing it, do you think we should uncomment the lines in the 3D code that you mention? (if so please let me know which lines I should uncomment, or just submit a Git pull request with the change, and then I will close the Git issue).

Thanks, Dimitri.


EtienneBachmann commented 7 years ago

I figured out that I added this option with a hard-coded Boolean. To enable Gaussian smoothing with the accurate quadrature rule in 3D, you just need to set the Boolean USE_QUADRATURE_RULE to .true., at line 154 of src/tomography/postprocess_sensitivity_kernels/smooth_sem.F90. This option was already enabled by default when using GPUs.

komatits commented 7 years ago

Hi Etienne,

Thanks a lot! I have set it to .true. by default and have moved it to setup/constants.h for clarity (I then closed the Git issue).

Best regards, Dimitri.


komatits commented 7 years ago

This external smoothing program is not used any more, now that we have developed the integrated inversion framework of Vadim @vmont . Thus closing the issue. cc Etienne @EtienneBachmann

rmodrak commented 5 years ago

Hi Etienne,

I've gotten several questions related to this issue,

Could you describe in a sentence or two what method you currently use for 3D smoothing?

Thanks, Ryan

EtienneBachmann commented 5 years ago

Hi Ryan,

I currently don't use the 3D smoothing, but if I had to I would use the GPU version, as the speed-up with respect to CPUs is above 500x. I took a quick look at the code; it may be incorrect when running with very small smoothing radii. I am also not sure whether I fully tested the compatibility with regular/irregular element meshes, even though it is GPU coded. Some CPU-GPU comparisons on small meshes would be welcome. Is there anything else you'd like to know?

Etienne

bch0w commented 5 years ago

Hi Etienne,

I hope it’s okay to continue with this issue rather than open a new one. I'm working on tomography in New Zealand and have been discussing with Ryan, and others, issues I’ve encountered with the smoothing code, similar to some issues previously discussed here.

I noticed artifacts while smoothing kernels with a recent development version of the code (87a78bd). Tracing back through the history of smooth_sem.f90, I found a version from Jan 2018 (ac99d4f) that did not show these artifacts. It seems that the offending commit came in April 2018 (e8a7dcf), with the change to separate the calculation of regular/irregular elements.

The artifacts follow the boundaries of slices, making element boundaries visible in whatever is smoothed. Having initially seen them while smoothing kernels on Trelis-created meshes (both 85,000- and 785,000-element meshes), we thought this might be related to the kernels themselves, but the artifacts also appear while smoothing the velocity model. A mesh I made with Meshfem3D (40,000 elements) also shows boundary-related artifacts.

I attempted to recompile the latest devel version with the legacy smooth_sem.f90, but that didn't solve it, so it seems that this is not isolated to the smoothing code. We are currently only running with parallel CPUs (NPROC of 144, 80 or 40, so far), so I can't provide any comparisons to GPU runs.

Just wondering if you know what could be the cause of this? I also wonder what you consider “very small smoothing radii”? I have tested with a few different smoothing lengths but the artifacts are always visible. For reference these meshes are roughly 500km each side length. I’ve included some figures to illustrate what I’ve been seeing, as well as some log outputs from xsmooth_sem.

Thanks, Bryant Chow

  1. Smoothing on a Trelis created mesh (85,000 elements), comparing three different commits. The last row shows a recent devel version compiled with the legacy smooth_sem.f90

trelis_smooth_velocity_model_commit_comparisons

  2. Artifacts on a Meshfem3D coarse mesh (40,000 elements).

meshfem_smoothing_artefacts

  3. Artifacts on a higher resolution Trelis mesh (785,000 elements), showing the boundaries of different processor partitions.

trelis_hires_smooth_artefact_partition_bounds

  4. Log outputs from xsmooth_sem

xsmooth_sem_vs_ac99d4f.txt xsmooth_sem_vs_kernel_87a78bd.txt

EtienneBachmann commented 5 years ago

Hi, There is undoubtedly a new bug in the smoothing code. And indeed it is impossible to use the old smoothing routine with the new specfem, as they share a binary database that had to be updated for irregular/regular elements. I never used the CPU smoothing code, but the first test I would do with the new version is to set the flag USE_QUADRATURE_RULE_FOR_SMOOTHING (in setup/constants.h) to .false., recompile smooth_sem, and run. That should solve the problem, while introducing only very slight approximations.

Etienne

bch0w commented 5 years ago

Hi Etienne,

Thanks for your response, that worked: after changing the USE_QUADRATURE_RULE_FOR_SMOOTHING flag to .false., the artifacts no longer appear in the smoothing. I've included some figures here for confirmation. For my own curiosity, do you know why the use of the quadrature rule was introducing these artifacts?

Thanks much, Bryant

trelis_composite meshfem_composite

EtienneBachmann commented 5 years ago

Hi,

Naturally, the quadrature rule is not supposed to introduce artifacts, and there is still a bug to find and fix. I'll take a look at it as soon as I can. As an aside, the difference between the quadrature rule being true or false is the following:

Etienne
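[Editor's note] Etienne's attached explanation is not preserved in the thread. As a hedged reading of the earlier discussion, the flag switches between pure Gaussian distance weights and weights additionally multiplied by each point's integration volume (GLL quadrature weight times jacobian). A minimal Python sketch of that difference, with illustrative names of our own:

```python
def smooth(values, gauss_w, volumes, use_quadrature):
    # with the quadrature rule, each Gaussian weight is scaled by the
    # volume its GLL point represents before the normalised average
    w = [g * (v if use_quadrature else 1.0)
         for g, v in zip(gauss_w, volumes)]
    return sum(wi * x for wi, x in zip(w, values)) / sum(w)

vals = [0.0, 3.0]
gw = [1.0, 1.0]    # equal Gaussian weights (equal distances)
vols = [1.0, 2.0]  # second point represents twice the volume

print(smooth(vals, gw, vols, use_quadrature=False))  # 1.5 (plain mean)
print(smooth(vals, gw, vols, use_quadrature=True))   # 2.0 (volume-weighted)
```

On a uniform mesh both options agree; the difference appears when element volumes vary, where the quadrature rule approximates the true volume integral of the Gaussian.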

liuqinya commented 5 years ago

Hi Bryant and Etienne,

After Carl alerted us about the possible bug (from reading the above communications, it seems to be related to USE_QUADRATURE_RULE_FOR_SMOOTHING = .true., although from Etienne's description it should in principle not cause any problem), I chatted with my graduate student Tianshi, and he seems to have independently spotted a typo at line 789 of smooth_sem.F90 during his research: https://github.com/geodynamics/specfem3d/blob/master/src/tomography/postprocess_sensitivity_kernels/smooth_sem.F90

It should be

    if (ispec_irreg /= 0) jacobianl = jacobian(i,j,k,ispec2)

instead of

    if (ispec_irreg /= 0) jacobianl = jacobian(i,j,k,ispec)

When using the external mesher, this typo gives rise to an "out of range" exception. For the internal mesher, there is no exception, but the jacobian used for the integral is wrong.

Note this little snippet of code is within the if statement of USE_QUADRATURE_RULE_FOR_SMOOTHING

which probably explains why the problem seems to go away when Bryant turns the flag off.

Could you two check if this indeed is the case? And if we all agree, I think one of us should check in the change as soon as possible to avoid it affecting other users.

Qinya

danielpeter commented 5 years ago

good catch! the proposed modification is “almost” right (it needs ispec_irreg instead of ispec2); i’ll commit a fix for that asap. -daniel


danielpeter commented 5 years ago

should be fixed by PR #1350

bch0w commented 5 years ago

thanks @liuqinya for finding that typo and @danielpeter for the quick fix. I just wanted to confirm that smoothing in 75e1785, with USE_QUADRATURE_RULE_FOR_SMOOTHING=.true., seems to be working for both Trelis and Meshfem meshes, cheers! smoothing_75e1785

danielpeter commented 5 years ago

great, thanks for testing and the nice pictures :)

liuqinya commented 5 years ago

Hi Daniel and Bryant,

Thanks for testing it, and wonderful to see that it works well now. The credit really goes to my graduate student Tianshi Liu, but thank you also for bringing this bug to our attention!

Qinya


gaoyajian commented 4 years ago

Hi, dear specfem experts, I am trying to use the smoothing function from specfem3d_globe, but it runs very slowly. Is that normal, or is there a replacement for it? Thanks.