OrderN / CONQUEST-release

Full public release of the large-scale and linear-scaling DFT code CONQUEST
http://www.order-n.org/
MIT License

Added initial tests #184

Closed davidbowler closed 11 months ago

davidbowler commented 12 months ago

A simple eight-atom bulk Si test (one atom moved off its lattice site to ensure non-zero forces), run for both diagonalisation and linear scaling.
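A minimal sketch of what "one atom moved off site" means for such a test geometry. The lattice constant, displacement size, and variable names below are illustrative assumptions, not the actual test input:

```python
# Hypothetical sketch: conventional 8-atom diamond-cubic Si cell with one
# atom nudged off its lattice site so that forces are non-zero.
A = 5.43   # Si conventional lattice constant in Angstrom (illustrative)
DX = 0.05  # fractional displacement of the first atom (illustrative)

# Fractional coordinates of the 8 atoms in the conventional diamond cell
frac = [
    (0.00, 0.00, 0.00), (0.50, 0.50, 0.00),
    (0.50, 0.00, 0.50), (0.00, 0.50, 0.50),
    (0.25, 0.25, 0.25), (0.75, 0.75, 0.25),
    (0.75, 0.25, 0.75), (0.25, 0.75, 0.75),
]

# Displace only the first atom along x; all others stay on their sites
positions = [((x + DX) if i == 0 else x, y, z)
             for i, (x, y, z) in enumerate(frac)]
```

A perfect crystal would give zero forces by symmetry, so the displaced atom is what makes the force comparison in the test meaningful.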

davidbowler commented 11 months ago

I think that this basic initial framework is pretty much ready to be merged. I would propose that we develop further tests in their own issue/branch/pull request environment.

I wonder if we should add a little detail to the testsuite README to describe how to add new tests (essentially add lines to the python code and workflow Makefile, I think). Once we've done this, it should be OK to merge and close.

tkoskela commented 11 months ago

Looks good to me, thanks for fixing the broken things in the workflow!

I added variable number of decimals for each item we test. We can adjust these if needed. Is 4 decimals for the stress enough? That feels like a fairly low precision.
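A minimal sketch of per-quantity decimal tolerances of the kind described above; the quantity names, the `PRECISION` table, and the helper are illustrative assumptions, not the actual test-suite API:

```python
# Hypothetical sketch: compare computed values against references with a
# per-quantity number of decimals. 4 decimals for stress is the value
# under discussion and may need tightening.
import math

PRECISION = {
    "total_energy": 8,
    "force": 6,
    "stress": 4,  # possibly too loose, per the review comment
}

def values_match(name, computed, reference):
    """True if computed agrees with reference to the configured
    number of decimals for this quantity."""
    decimals = PRECISION[name]
    return math.isclose(computed, reference, abs_tol=0.5 * 10**-decimals)
```

With this shape, tightening a tolerance is a one-line change to the table rather than an edit in every test.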

Good point about the README, I will modify it. I might need you to add some basic description about what the tests do.

davidbowler commented 11 months ago

How easy would it be to customise the tests? For instance, I will want to introduce tests which are specific to one part of the code that isn't always run (so we'd maybe need a different keyword to be added for that test).

tkoskela commented 11 months ago

What do you want to customise? I think the current structure, where every test has its own input in a separate directory, is good, because the inputs can be customized for each test. If we need to customise the build, we can run anything we want in the steps on GitHub Actions, but making things automatically work elsewhere becomes a bit challenging.

I think it might be best to merge this PR soon and add more tests in separate PRs.

davidbowler commented 11 months ago

I want to be able to search for a special keyword in some tests but not others (e.g. "Total polarisation" in one case only).
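One way this could work is to let each test declare which extra keywords to look for in the CONQUEST output. The test names, the `EXTRA_KEYWORDS` table, and the parsing helper below are hypothetical, sketched only to illustrate the idea:

```python
# Hypothetical sketch: per-test keyword lists, so "Total polarisation"
# is only checked in the tests where it appears.
import re

EXTRA_KEYWORDS = {
    "test_001_bulk_si": [],                          # no extra keywords
    "test_polarisation": ["Total polarisation"],     # checked here only
}

def extract_value(output_text, keyword):
    """Return the first number following the keyword in the output,
    or None if the keyword is absent."""
    match = re.search(
        rf"{re.escape(keyword)}\s*[:=]?\s*(-?\d+\.?\d*)", output_text)
    return float(match.group(1)) if match else None
```

Tests that don't list a keyword simply never look for it, so no special casing is needed in the shared comparison code.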

tkoskela commented 11 months ago

Ohh, I see, I'm sure we can do that. Yes, we can skip undesired parameter combinations or mark combinations as expected to fail: https://stackoverflow.com/questions/47914934/py-test-skip-depending-on-parameters
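The linked answer uses `pytest.param` marks inside `parametrize`; a small sketch of that pattern (the parameter names and reasons are illustrative, not from the actual suite):

```python
# Hypothetical sketch: skip or xfail individual parameter combinations
# with pytest.param marks, per the linked Stack Overflow answer.
import pytest

@pytest.mark.parametrize(
    "method",
    [
        "diagonalisation",
        pytest.param(
            "linear_scaling",
            marks=pytest.mark.xfail(reason="illustrative known failure")),
        pytest.param(
            "not_built",
            marks=pytest.mark.skip(reason="feature not built in this config")),
    ],
)
def test_method_runs(method):
    # placeholder body; the real test would run CONQUEST and compare output
    assert method in ("diagonalisation", "linear_scaling")
```

Skipped combinations never run, while xfail-ed ones run but don't break the suite if they fail, which suits tests for parts of the code that aren't always compiled in.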