Closed mondracek closed 5 months ago
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 46.73%. Comparing base (`063cc9a`) to head (`c1bbb7a`). Report is 2 commits behind head on main.
:umbrella: View full report in Codecov by Sentry.
Will close #243
@mondracek You forgot to add any reviewers for the PR on the right side. Maybe you want to add Aurelio?
Thanks for the reminder @NikoOinonen. I thought that without explicitly adding reviewers, it's like "anyone who cares may review". Or does it work so that only those selected as reviewers can unblock merging with their review? Anyway, I added @aureliojgc and also you, in case Aurelio does not have time to do it. I do not think much review is needed; checking that the examples work was the very point of this issue, and that is what I did. Perhaps you may check whether you like the way I have put the input data (`hartree*cube`) up on Zenodo. I did not ZIP it into an archive, so I have not made a new entry in `ppafm/data.py` either. Perhaps I should have done this. Also, I have not uploaded the output anywhere; now I see it may be useful to put the output on Zenodo too, for reference. Should I do it?
@mondracek
> I thought without explicitly adding reviewers, it's like "anyone who cares may review". Or does it work the way that only those selected as reviewers can unblock merging by their review?
If I recall correctly, anyone who is part of the organization can approve any PR. However, reviewing duties should usually be divided according to relevant expertise and time availability. Most people are probably not going to go out of their way to review every PR that was not assigned to them, so adding reviewers will get it done faster.
> Perhaps you may check if you like the way I have put the input data (`hartree*cube`) up to Zenodo. I did not ZIP it to an archive
It's probably preferable to compress it, since that can significantly reduce the file size, which is nice when working on slow connections. But these files are not that huge, so maybe it's not that big of a deal.
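As a sketch of the compression step, something like the following could bundle the Hartree `.cube` files into a single archive before uploading to Zenodo (the filenames and directory below are illustrative, not the actual dataset names; `zip` would work just as well as `tar`):

```shell
# Create a demo directory with stand-in .cube files (illustrative names).
mkdir -p cube_demo
printf 'CUBE dummy data\n' > cube_demo/hartree_a.cube
printf 'CUBE dummy data\n' > cube_demo/hartree_b.cube

# Bundle all .cube files into one compressed archive for upload.
tar -czf hartree_cubes.tar.gz -C cube_demo .

# Show the resulting archive size.
ls -lh hartree_cubes.tar.gz
```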
> Also, I have not uploaded the output anywhere; now I see it may be useful to put the output to Zenodo too, for reference. Should I do it?
I haven't done this for any of the other datasets, but maybe it can be useful to make sure that the output remains the same over time.
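One lightweight way to check that the output remains the same over time, without even uploading it anywhere, would be to record checksums of the generated files and compare against them on later runs. A minimal sketch (the file name `df_demo.xsf` is a stand-in, not an actual ppafm output name):

```python
import hashlib
from pathlib import Path

def file_sha256(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a generated output file (real ppafm output names differ).
Path("df_demo.xsf").write_text("dummy output data\n")

# Record the reference digest once, then compare on later runs to detect
# any change in the regenerated output.
reference = file_sha256("df_demo.xsf")
assert file_sha256("df_demo.xsf") == reference
```

Storing the reference digests in the repository would catch silent changes in the example output without needing to host the full output files.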
Sorry for the changes after approval. I have modified `params.ini` for the `CH3Br_KPFM` example by changing `klat` to 0.2, which is more realistic than 0.5 for a Xe tip and coincides with the value that we used in our older works, and by adjusting `scanMin`, as KPFM is typically done with a larger tip-sample separation, outside the high-resolution AFM regime.

However, I have found a more serious issue: the KPFM results seem to differ a lot depending on whether one uses FHI-AIMS or VASP to calculate the Hartree potential. I will make a new issue about that if needed, after I learn a bit more about what is going on and generate some examples of the problem. But I prefer to postpone merging this KPFM example until the matter is clarified.
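For reference, the `klat` change described above corresponds to a `params.ini` line like the following (a minimal fragment; the actual file contains many other parameters, and the adjusted `scanMin` value is not stated here):

```
klat 0.2
```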
@mondracek - is there a reason why this has not been squashed and merged? Do you want one more review?
@ondrejkrejci, the reason this PR was kept open is that I wanted to clarify the conflict between the FHI-AIMS-based and VASP-based results before merging the example. I am now pretty sure the problem was with VASP. As the example added to the `examples/` folder by this PR uses the `*.cube` files from FHI-AIMS, it should be all right and I am going to merge it now.
Just for the record: the problem with VASP seems to be that one should not use the RMM-DIIS algorithm for energy minimization (`IALGO=48` or `ALGO=Fast`) when calculating the electron density and Hartree potential of free-standing molecules in an external homogeneous electric field, especially not if only one electronic self-consistency cycle is to be performed with fixed atom positions, without geometry optimization. Instead, the blocked-Davidson iteration scheme (`IALGO=38`, `ALGO=Normal`) should be used. The latter (blocked-Davidson) algorithm has been the default in newer versions of VASP (since VASP 4.6) anyway, so it's not a big deal. The problem was that I was used to explicitly specifying `IALGO=48` in my `INCAR` because that algorithm tends to be faster, as its other name (`ALGO=Fast`) suggests. With `IALGO=48`, dipole compensation along the z axis switched on (`LDIPOL=.TRUE.`, `IDIPOL=3`), a non-zero external field (`EFIELD=-0.1`, typically), and no relaxation of atom positions (`IBRION=-1`), the induced dipole turns out to be unstable; you can get almost anything depending on who knows what details of the initial conditions. The problem is not documented anywhere as far as I know, so I have just learnt about the necessity of using the default electron-minimization algorithm the hard way.
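Putting the tags mentioned above together, a minimal `INCAR` sketch of the working setup might look like this (only the listed tags are taken from the discussion; everything else is left at VASP defaults):

```
ALGO   = Normal    # blocked-Davidson (IALGO=38); avoid ALGO=Fast / IALGO=48 here
IBRION = -1        # no relaxation of atom positions
LDIPOL = .TRUE.    # dipole corrections switched on
IDIPOL = 3         # dipole compensation along the z axis
EFIELD = -0.1      # external homogeneous field in eV/Angstrom (typical value used)
```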
The bad news is that I have also used `IALGO=48` for the simulation of KPFM of the FFPB molecule in our paper. The good news is that I was lucky: it did not cause any substantial problem for that particular molecule (in contrast to this CH3Br). I have checked that the LCPD images obtained with `ALGO=Normal` are virtually the same as those obtained with `IALGO=48` in the case of FFPB.
fixes #243

Added `examples/CH3Br_KPFM` and successfully tested both `examples/CH3Br_KPFM` and `examples/FFPB_KPFM`.