Closed: heikef closed this issue 5 years ago
What are the operating system, Python, CMake, and compiler versions on which it fails? Which test(s) fail?
Tests no. 5 and 7. Operating systems: Mac, and Ubuntu executed within Windows.
How do they fail? And what is the compiler version?
I am using Python 3 (Anaconda). In the test directory the calculation has already finished, but in the test run window it takes forever until the next test is started. The acid.vti and jmod.vti files are identical; only jvec.vti differs. However, the difference is very small: zeros to a precision of E-17 or E-18.
```
> 0.927876E-18
35841c35841
< 0.171839E-17
---
> 0.155364E-17
36682c36682
< 0.348675E-17
---
> 0.339492E-17
41728c41728
< 0.285767E-17
---
> 0.274839E-17
42569c42569
< 0.171694E-17
---
> 0.155370E-17
43410c43410
< 0.964185E-18
---
> 0.927207E-18
```
This is only numerics.
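To see why "only numerics" can still fail a check with rel_tolerance=1.0e-8: the differing values agree to about 1e-18 absolutely (they are numerically zero), but their relative differences are a few percent. A minimal sketch, using the values from the diff above:

```python
# Pairs of (reference, actual) values taken from the jvec.vti diff above.
pairs = [
    (0.171839e-17, 0.155364e-17),
    (0.348675e-17, 0.339492e-17),
    (0.285767e-17, 0.274839e-17),
    (0.171694e-17, 0.155370e-17),
    (0.964185e-18, 0.927207e-18),
]

for ref, val in pairs:
    abs_diff = abs(ref - val)
    rel_diff = abs_diff / abs(ref)
    print(f"abs diff = {abs_diff:.2e}, rel diff = {rel_diff:.2e}")
    # Absolutely the numbers agree to ~1e-18, i.e. they are numerically zero,
    # but relatively they differ by a few percent -- far beyond 1.0e-8.
    assert abs_diff < 1.0e-14
    assert rel_diff > 1.0e-8
```

So any purely relative comparison will flag these near-zero grid points, however tight the agreement is in absolute terms.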
The calculation itself finished after 34.12 sec, while the test output reports "***Failed 551.91 sec". How can this be? The discrepancy is huge.
Which part of the code reports "***Failed 551.91 sec"? Alternatively please post fuller output so that I can see where this is reported.
make test leads to this output:

```
make test
Running tests...
Test project /Users/heike/source/github-gimic/fork/gimic/build
      Start  1: benzene/integration-gauss
 1/22 Test  #1: benzene/integration-gauss ......... Passed 3.79 sec
      Start  2: benzene/integration-lobatto
 2/22 Test  #2: benzene/integration-lobatto ....... Passed 3.82 sec
      Start  3: benzene/vectors
 3/22 Test  #3: benzene/vectors ................... Passed 1.87 sec
      Start  4: benzene/2d
 4/22 Test  #4: benzene/2d ........................ Passed 1.73 sec
      Start  5: benzene/3d
 5/22 Test  #5: benzene/3d ........................***Failed 551.91 sec
      Start  6: benzene/keyword-magnet
 6/22 Test  #6: benzene/keyword-magnet ............ Passed 3.84 sec
      Start  7: benzene/3d-keyword-magnet
 7/22 Test  #7: benzene/3d-keyword-magnet .........***Failed 579.76 sec
      Start  8: benzene/2d-keyword-magnet
 8/22 Test  #8: benzene/2d-keyword-magnet ......... Passed 2.04 sec
      Start  9: benzene/keyword-spacing
 9/22 Test  #9: benzene/keyword-spacing ........... Passed 13.19 sec
      Start 10: benzene/keyword-rotation
10/22 Test #10: benzene/keyword-rotation .......... Passed 2.12 sec
      Start 11: benzene/keyword-rotation_origin
11/22 Test #11: benzene/keyword-rotation_origin ... Passed 2.39 sec
      Start 12: benzene/keyword-radius
12/22 Test #12: benzene/keyword-radius ............ Passed 2.32 sec
      Start 13: benzene/int-grid-bond-even
13/22 Test #13: benzene/int-grid-bond-even ........ Passed 0.52 sec
      Start 14: benzene/int-cdens
14/22 Test #14: benzene/int-cdens ................. Passed 2.32 sec
      Start 15: benzene/diamag-off
15/22 Test #15: benzene/diamag-off ................ Passed 2.05 sec
      Start 16: benzene/paramag-off
16/22 Test #16: benzene/paramag-off ............... Passed 2.14 sec
      Start 17: benzene/giao-test
17/22 Test #17: benzene/giao-test ................. Passed 1.34 sec
      Start 18: c4h4/integration
18/22 Test #18: c4h4/integration .................. Passed 1.03 sec
      Start 19: c4h4/read-grid
19/22 Test #19: c4h4/read-grid .................... Passed 8.97 sec
      Start 20: open-shell/3d
20/22 Test #20: open-shell/3d ..................... Passed 132.98 sec
      Start 21: open-shell/integration
21/22 Test #21: open-shell/integration ............ Passed 9.63 sec
      Start 22: benzene/skip-jmod-integration
22/22 Test #22: benzene/skip-jmod-integration ..... Passed 2.05 sec

91% tests passed, 2 tests failed out of 22

Total Test time (real) = 1332.02 sec

The following tests FAILED:
      5 - benzene/3d (Failed)
      7 - benzene/3d-keyword-magnet (Failed)
Errors while running CTest
make: *** [test] Error 8
```
From the output I would expect that this is how long the individual tests took. If they took significantly less, it's strange.
```
****************************************************************
*** ***
*** GIMIC 2.1.4 (90822aa) ***
*** Written by Jonas Juselius ***
*** ***
*** This software is copyright (c) 2003-2011 by ***
*** Jonas Juselius, University of Tromso. ***
*** ***
*** You are free to distribute this software under the ***
*** terms of the GNU General Public License ***
*** ***
*** A Pretty Advanced 'Hello World!' Program ***
****************************************************************
Thu Jul 4 15:24:37 2019
TITLE:
INFO: Detected TURBOMOLE input
Number of atoms = 12
Normalizing basis
Total number of primitive GTO's 354
Total number of contracted GTO's 252
*** Calculating screening coefficients
INFO: Screening threshold: 0.1000E-07
INFO: Reordering densities [TURBOMOLE]
Grid mode = base
Number of grid points <v1,v2>: 30 30 30
Total number of grid points : 27000
*** Grid plot in grid.xyz
INFO: Closed-shell calculation
Calculating current density
*****************************************
INFO: Estimated CPU time for single core calculation: 34.51 sec ( 0.0 h )
magnetic field
0.0000000000000000 0.0000000000000000 1.0000000000000000
*** Deallocated grid data
INFO: Deallocated basis set and atom data
----------------------------------------------------------------------
wall time: 34.12sec ( 0.0 h )
user: 33.90sec ( 0.0 h )
sys: 0.21sec ( 0.0 h )
----------------------------------------------------------------------
Thu Jul 4 15:25:11 2019
Hello World! (tm)
done.
```
This is F-GIMIC.
Indeed. This is VERY strange.
Could you check whether you see similar issues on your local machine?
I would vote for ignoring this. It's important though that we fix the numerical tolerance.
Waiting 10 min for a test to finish is very inconvenient. What do you suggest for fixing the numerical tolerance? Changing f = [get_filter(rel_tolerance=1.0e-8)] to 1.0e-4?
OK, then I misunderstood: I thought the timing printout was wrong, not that the test actually took that long. In that case we should not ignore it. And I assume the automated test ran the job differently from how you ran it.
What do you suggest for fixing the numerical tolerance? f = [get_filter(rel_tolerance=1.0e-8)] to 1.0E-4 ?
Can you please link to the file that needs to be adjusted? Then I can look at the numbers.
Sure. https://github.com/qmcurrents/gimic/blob/master/test/benzene/3d/test
However, I am not so sure if we really need to edit this one.
Concerning the test: I opened two windows, executing "make test" in one; in the other I went to the build/test directory where the test script ran the actual calculation. There I saw that the calculation had finished, but the test runner did not notice.
I think you want to rather do this: https://runtest.readthedocs.io/en/latest/creating/filter_options.html#how-to-ignore-very-small-or-very-large-numbers
With this option we can ignore very small numbers. I don't think we should change rel_tolerance here.
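The idea behind such a threshold can be illustrated with a toy comparator (a hypothetical sketch, not runtest's actual implementation; see the linked documentation for the real option): numbers whose magnitude falls below the threshold are treated as zero and excluded from the relative comparison.

```python
def compare(ref_vals, out_vals, rel_tolerance=1.0e-8, skip_below=0.0):
    """Toy runtest-style numeric filter (illustration only)."""
    failures = []
    for ref, out in zip(ref_vals, out_vals):
        # Values below the skip threshold are numerical noise: ignore them.
        if abs(ref) < skip_below and abs(out) < skip_below:
            continue
        # Otherwise require |ref - out| <= rel_tolerance * |ref|.
        if abs(ref - out) > rel_tolerance * abs(ref):
            failures.append((ref, out))
    return failures

ref = [1.234567, 0.171839e-17]
out = [1.234567, 0.155364e-17]

assert compare(ref, out) != []                      # the tiny "zeros" fail
assert compare(ref, out, skip_below=1.0e-14) == []  # ignored -> test passes
```

With a threshold like 1.0e-14, the 1e-17-level noise in jvec.vti no longer trips the 1.0e-8 relative tolerance, while real deviations are still caught.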
For the different timing, please open a new issue since it's another problem.
I agree. The program has problems comparing two "zero" numbers or "super small" numbers.
Thank you. I will see if I can fix it.
Let me know if you need help with this.
I need a co-tester. Do you run into the same problems?
No, I do not see these problems, but I am confident that I can solve them. I will post a patch here which you can try locally; if it works, you can submit a pull request with my upcoming change.
Try this:
```diff
diff --git a/test/benzene/3d-keyword-magnet/test b/test/benzene/3d-keyword-magnet/test
index 0c68ddf..636254e 100755
--- a/test/benzene/3d-keyword-magnet/test
+++ b/test/benzene/3d-keyword-magnet/test
@@ -12,7 +12,7 @@ assert version_info.major == 2
 options = cli()
 # we check entire files
-f = [get_filter(rel_tolerance=1.0e-8)]
+f = [get_filter(rel_tolerance=1.0e-8, skip_below=1.0e-14)]
 ierr = run(options,
            configure,
diff --git a/test/benzene/3d/test b/test/benzene/3d/test
index 0c68ddf..636254e 100755
--- a/test/benzene/3d/test
+++ b/test/benzene/3d/test
@@ -12,7 +12,7 @@ assert version_info.major == 2
 options = cli()
 # we check entire files
-f = [get_filter(rel_tolerance=1.0e-8)]
+f = [get_filter(rel_tolerance=1.0e-8, skip_below=1.0e-14)]
 ierr = run(options,
            configure,
```
This works, thank you. I will file a pull request soon.
For some reason these tests fail after updating my GIMIC version. I got suspicious and did a completely fresh clone, but the problem remains. Could it be that we need to set a tolerance delta x for the checks, because the numerics may not be identical across machines and operating systems? On another machine I had the same problem, so I think it is something more serious and not related to my local setup. Could someone please test and tell me whether they see the same problems? Thank you! It is strange that all tests passed on the TravisCI, though...