FreeFem / FreeFem-sources

FreeFEM source code
https://freefem.org/

Problem with display, save and generation of matrix #170

Closed pmollo closed 2 years ago

pmollo commented 3 years ago

Hi. First of all, there are two small bugs with matrix display and save. When you use cout or ofstream, matrix coefficients are printed with three extra digits. These extra digits are random and can cause rounding errors. I attach a program that shows this problem. ExampleDigitsProblem.edp.txt

Another bug concerns the matrices generated by varf: these matrices are filled with lots of explicitly stored zeros. This, of course, slows down all sparse operations (matrix product, solve, load, save, etc.), generates small errors, and uses memory unnecessarily. In addition, using a vectorial fespace generates even more zeros. I attach an example of this problem. ExampleMatrix.edp.txt In large-scale problems, I have matrices where more than 80% of the stored coefficients are zero. Moreover, when we use Mat Apetsc(Affpp) to set up a PETSc matrix, these zero coefficients aren't removed.
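
To make this concrete, here is a minimal sketch of the kind of assembly I mean (hypothetical; the actual test case is in the attached file):

// Hypothetical minimal reproduction; the real test is in ExampleMatrix.edp.txt.
mesh Th = square(5, 5);                        // deliberately small mesh
fespace Vh(Th, P2);
varf vLap(u, v) = int2d(Th)(dx(u)*dx(v) + dy(u)*dy(v))
                + on(1, 2, 3, 4, u = 0);       // Dirichlet on the whole boundary
matrix A = vLap(Vh, Vh);
cout << "stored coefficients: " << A.nbcoef << endl; // counts explicitly stored zeros too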

I made a "cleaning" program with Octave: it is slow, but it fixes the zero problem. I think it would be convenient to have a cleaning routine somewhere. Thanks for your help.

prj- commented 3 years ago

These extra digits are random and can cause rounding errors

What rounding error? These values are below the double precision machine epsilon.

This, of course, slows down all sparse operations

By how much? Can you quantify this?
Also, of course, that's not a bug. It's because your mesh is tiny, so Dirichlet boundary conditions are imposed almost everywhere, and almost all off-diagonal coefficients are set to zero. Just increase the number of degrees of freedom and your rows won't be filled with zeros anymore.

I made a "cleaning" program with octave

There is one already available in FreeFEM: A.thresholding(1.0e-12);
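
For instance (a minimal sketch; vLap and Vh stand for whatever varf and fespace you assembled with):

matrix A = vLap(Vh, Vh);
cout << "before: " << A.nbcoef << " stored coefficients" << endl;
A.thresholding(1.0e-12);  // drops stored coefficients whose magnitude is below the tolerance
cout << "after : " << A.nbcoef << " stored coefficients" << endl;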

pmollo commented 3 years ago

There is one already available in FreeFEM: A.thresholding(1.0e-12);

Thanks, I didn't know about this command; it is very convenient.

These extra digits are random and can cause rounding errors

What rounding error? These values are below the double precision machine epsilon.

OK, it's good to know that it's just a display problem.

This, of course, slows down all sparse operations

By how much? Can you quantify this? Also, of course, that's not a bug. It's because your mesh is tiny, so Dirichlet boundary conditions are imposed almost everywhere, and almost all off-diagonal coefficients are set to zero. Just increase the number of degrees of freedom and your rows won't be filled with zeros anymore.

I added thresholding(1e-30) to the previous program (on every matrix) and a timer: the solve goes ten times faster with the thresholding, and the error is exactly zero, as I expected.

ExampleMatrix.edp.txt
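
(The timing pattern I used is roughly this sketch; vA, Yh, u, and b are placeholders for the names in the attached file:)

real t0 = clock();
matrix A = vA(Yh, Yh);           // placeholder varf/fespace names
A.thresholding(1.0e-30);         // remove the explicit zeros first
set(A, solver = sparsesolver);   // then factorize
u[] = A^-1 * b;
cout << clock() - t0 << "s" << endl;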

prj- commented 3 years ago

the solve goes ten times faster with the thresholding, and the error is exactly zero, as I expected.

I don't know what solver you are using, but it's pretty bad then. Here is what I get with PARDISO.

$ FreeFem++ ExampleMatrix.edp.txt -Dthre=1 -v 0
0.221569s
nnz [P2]x[P2] space :60802
nnz [P2,P2]   space :60802
Erreur : 7.771561172376096e-15
$ FreeFem++ ExampleMatrix.edp.txt -v 0
0.547078s
nnz [P2]x[P2] space :141202
nnz [P2,P2]   space :282404
Erreur : 1.958433415438776e-13

pmollo commented 3 years ago

Do you still see a difference in time over several executions? I tried with sparsesolver (which is UMFPACK?):

$ xff ExampleMatrix.edp -Dthre=1 -v 0 
0.02778700000000001s
nnz [P2]x[P2] space :60802
nnz [P2,P2]   space :60802
Erreur : 2.225073858507201e-308
$ xff ExampleMatrix.edp -v 0
0.8765799999999999s
nnz [P2]x[P2] space :141202
nnz [P2,P2]   space :282404
Erreur : 6.52811138479592e-14

and with MUMPS:

$ xff ExampleMatrix.edp -v 0 -Dthre=1
init MUMPS_SEQ: MPI_Init
0.176366s
nnz [P2]x[P2] space :60802
nnz [P2,P2]   space :60802
Erreur : 2.225073858507201e-308
close  MUMPS_SEQ: MPI_Finalize
$ xff ExampleMatrix.edp -v 0
init MUMPS_SEQ: MPI_Init
1.741237s
nnz [P2]x[P2] space :141202
nnz [P2,P2]   space :282404
Erreur : 8.171241461241152e-14
close  MUMPS_SEQ: MPI_Finalize

I also tried with CG, Cholesky, and LU, but in these cases it's only 2 to 3 times faster (I switch solvers as shown below). Unfortunately, I don't know how to use PARDISO, but I am interested. Thanks for your help.
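
(For the record, I switch solvers like this; a minimal sketch with placeholder names:)

// Sketch: choosing the linear solver on an already assembled matrix A.
set(A, solver = CG);            // also tried Cholesky and LU, besides sparsesolver
u[] = A^-1 * b;                 // u and b are placeholders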

frederichecht commented 3 years ago

Dear user,

PARDISO comes with the Intel MKL library, and generally with the ifort compiler.

Best Regards,

Frédéric Hecht.


prj- commented 3 years ago

You can download it for free there. Then, you'll need to tell either FreeFEM or PETSc to use it during configuration. Anyway, if you care about performance, it is best to switch to parallel computing for such problems.

pmollo commented 3 years ago

Thanks for the information about PARDISO, I will try this as soon as possible.

Anyway, if you care about performance, it is best to switch to parallel computing for such problems.

You're right, I will try to. Is there something similar to A.thresholding(..) for PETSc matrices? Or maybe it's unnecessary?

prj- commented 3 years ago

You can just apply A.thresholding(..) before passing A to a PETSc Mat.

pmollo commented 3 years ago

You can just apply A.thresholding(..) before passing A to a PETSc Mat.

Is it possible to do that with distributed matrices? With something like this?

mesh Th = ...
buildDmesh(Th)
matrix A = vA(Yh, Yh);
A.thresholding(..)
Mat Apetsc;
createMat(Th, Apetsc, Pk)
Apetsc = A;

I'm sorry, I'm a beginner with the FreeFEM-PETSc framework.

prj- commented 3 years ago

Yes, that's what I meant initially. No need to be sorry for asking honest questions :)
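
For completeness, the whole sequence could look roughly like this (a sketch, assuming a 2D scalar Laplace problem; vA, Yh, and the P1 choice are placeholders):

load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"
macro def(i)i// EOM
macro init(i)i// EOM
func Pk = P1;                              // placeholder finite element

mesh Th = square(40, 40);
buildDmesh(Th)
fespace Yh(Th, Pk);
varf vA(u, v) = int2d(Th)(dx(u)*dx(v) + dy(u)*dy(v)) + on(1, 2, 3, 4, u = 0);
matrix A = vA(Yh, Yh);
A.thresholding(1.0e-12);                   // clean before handing the matrix to PETSc
Mat Apetsc;
createMat(Th, Apetsc, Pk)
Apetsc = A;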

gdolle commented 3 years ago

Hi @Pmollo, @prj-,

I have just read the thread. I guess the first "extra digits rounding problem" would be better reformulated as "the FF I/O .precision() method does not work for the matrix type". (It comes from these lines https://github.com/FreeFem/FreeFem-sources/blob/develop/src/femlib/HashMatrix.hpp#L472-L474, which set the output precision to at least 20 digits.)
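
(A quick way to see it from the FreeFEM side, sketched; the file name is arbitrary:)

// Sketch: .precision() is honored for scalars but not for matrices.
real[int, int] B = [[1./3., 0.], [0., 1./3.]];
matrix A = B;              // sparse matrix built from a full array
{
    ofstream f("A.txt");
    f.precision(8);        // no effect on the matrix: the lines above force >= 20 digits
    f << A;
}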

PS: maybe it could be split into 2 issues ;)