Hey @amartinhuertas, nice pictures :) Have you also looked at the times for the `solve()` step?
No, I didn't, because I don't expect any significant difference in the timings (even with a random ordering; neither of our orderings is random, though). Note that we are using a sparse direct solver. Sparse direct solvers compute a fill-in reducing ordering of the matrix, typically nested dissection (https://en.wikipedia.org/wiki/Nested_dissection), prior to factorization. Most fill-in reducing orderings generate a solution of this NP-hard problem that does not depend on the original ordering. For an iterative solver, there could be more of a difference, especially in the matrix-vector products. You can evaluate that if you are interested.
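As a quick illustration (my own sketch, not code from this repo): UMFPACK, the sparse direct solver behind Julia's `lu` for `SparseMatrixCSC`, computes its own fill-reducing ordering, so the size of the computed factors is essentially independent of how the unknowns were numbered to begin with:

```julia
using LinearAlgebra, SparseArrays, Random

n = 5_000
A = sprand(n, n, 5 / n) + 10I   # random sparse matrix, made diagonally dominant
p = randperm(n)
B = A[p, p]                     # the same matrix under a random renumbering

fillin(F) = nnz(F.L) + nnz(F.U) # nonzeros in the computed triangular factors

@show fillin(lu(A))
@show fillin(lu(B))             # comparable fill-in despite the random ordering
```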
PS: don't forget to get the latest commits that I pushed to `Gridap.jl#DIV_support_for_fe_functions`. To this end, you have to run `update` in Julia's REPL package manager (working on your branch of GridapGeosciences).
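A minimal sketch of that step (assuming the active project is your GridapGeosciences checkout, whose environment tracks `Gridap.jl#DIV_support_for_fe_functions`):

```julia
using Pkg
Pkg.update("Gridap")  # pulls the latest commits of the tracked Gridap.jl branch
```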
OK, apologies for sending you on a wild goose chase here; I wasn't aware of the fill-in reducing ordering used by sparse direct solvers (I have only really worked with Krylov solvers in the past).
Thanks for the reminder, yes, I'm aware I need to pull in your latest DIV commits, I just haven't gotten round to it yet...
Merging #14 (0b8d6a5) into master (f7aadf4) will increase coverage by 3.00%. The diff coverage is 100.00%.
```diff
@@            Coverage Diff             @@
##           master      #14      +/-   ##
==========================================
+ Coverage   43.35%   46.35%   +3.00%
==========================================
  Files          10       10
  Lines         286      302      +16
==========================================
+ Hits          124      140      +16
  Misses        162      162
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| src/CubedSphereDiscreteModels.jl | 99.00% <100.00%> (+0.18%) | :arrow_up: |
Continue to review the full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c60d1c8...0b8d6a5.
> OK, apologies for sending you on a wild goose chase here; I wasn't aware of the fill-in reducing ordering used by sparse direct solvers (I have only really worked with Krylov solvers in the past).
I think the initial ordering was not that bad from a data locality point of view, which might explain why there is almost no difference in the computational timings during assembly. With a random ordering, e.g., I think we would be able to appreciate more of a difference. In any case, I found a bug along the way in Gridap.jl that I have fixed.
Also note that for 2D problems, sparse direct solvers (on single-core computers) are highly competitive. They might be a bit slower than an iterative solver, but in terms of robustness there is no comparison.
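For instance (a toy benchmark of mine, not part of this PR; `cg` assumes the IterativeSolvers.jl package is available), one can compare both kinds of solvers on a standard 2D Laplacian:

```julia
using LinearAlgebra, SparseArrays
using IterativeSolvers  # assumed available; provides `cg`

# 5-point Laplacian on an m×m grid: a typical SPD system from a 2D problem
m = 200
T = spdiagm(-1 => fill(-1.0, m - 1), 0 => fill(2.0, m), 1 => fill(-1.0, m - 1))
A = kron(T, sparse(1.0I, m, m)) + kron(sparse(1.0I, m, m), T)
b = ones(size(A, 1))

@time x_direct = A \ b     # sparse direct solve
@time x_iter   = cg(A, b)  # Krylov solve (conjugate gradients)
@show norm(x_direct - x_iter)
```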
Tests passed at last! Accepting PR ...
> Thanks for the reminder, yes, I'm aware I need to pull in your latest DIV commits, I just haven't gotten round to it yet...
Just in case (sorry if I am repetitive): you have to (1) `git pull` from main in GridapGeosciences, and (2) update Gridap.jl in the package manager. This will pull the latest devs into the Gridap.jl repo that the package manager internally manages. A sketch of both steps follows.
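Putting the two steps together (same caveats as the sketch above; branch and remote names are assumptions, and the git step runs in a shell):

```julia
# (1) in a shell, from the GridapGeosciences checkout:
#       git pull origin main
# (2) then, in Julia:
using Pkg
Pkg.activate(".")     # the GridapGeosciences project environment (assumption)
Pkg.update("Gridap")  # refreshes the internally managed Gridap.jl clone
```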
Related to https://github.com/gridapapps/GridapGeosciences.jl/issues/13#issuecomment-907974537 and https://github.com/gridapapps/GridapGeosciences.jl/issues/13#issuecomment-907997231
@davelee2804 @cbladwell @santiagobadia
FYI, in this PR I am setting up the `CubedSphereDiscreteModel`s such that we have the desired panel-wise, lexicographic-within-each-panel numbering of cells (see below). Along the way, I have discovered what I think is a bug in `Gridap.jl`, and PRed it there (https://github.com/gridap/Gridap.jl/pull/651). For your convenience, the changes in this PR are already available at `DIV_support_for_fe_functions`; thus, if you work with the latter `Gridap.jl` branch, you will get these changes as well.

For the record, I have measured the computational times of the assembly stage for the Darcy problem on the cubed sphere using either ordering (I did not measure the matrix-vector product; if you like, you can do it). Results below. There you will see that the impact of the ordering on the assembly stage is almost negligible. This may be because the data locality of the original ordering is not actually that bad. In any case, I think it is reasonable to have the new ordering, to avoid extra noise in the performance discussions.