AndrewReynoso opened 2 months ago
The Green's function is only printed in the fciqmc_stats file, in columns 22 onwards. The popsfile.h5 contains only information on the (perturbed) wavefunction at the very last step, whereas the time-evolved Green's function needs to be printed at every tau step. The fciqmc_stats file should be written automatically once the real-time evolution is selected. Could you please attach your input file here?
At this stage I assume that you first run kmneci without the real-time block to obtain the ground state and only then use that as input for the real-time evolution.
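Since the Green's function is reported in columns 22 onwards of fciqmc_stats, a minimal post-processing sketch could look like the following. This assumes a whitespace-separated table with comment/header lines starting with `#`; the filename, column offset, and function name are taken from (or invented for) this discussion, not from NECI's own tooling:

```python
# Extract the Green's function columns (column 22 onwards, 1-based)
# from fciqmc_stats-style whitespace-separated lines.
def greens_function_columns(lines, first_col=22):
    """Return a list of rows, each holding the float values from
    column `first_col` (1-based) onwards; comment and blank lines
    are skipped."""
    rows = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split()
        rows.append([float(x) for x in fields[first_col - 1:]])
    return rows

# Tiny synthetic example: one row with 23 columns, so the values in
# columns 22 and 23 survive.
sample = ["# header line", " ".join(str(float(c)) for c in range(23))]
print(greens_function_columns(sample))  # -> [[21.0, 22.0]]
```

In practice one would read the lines with `open("fciqmc_stats")` and pass them to this function.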
I am able to output the popsfile and even the tauContour file, but for some reason the fciqmc_stats file is not written automatically, even though I can achieve this with kmneci alone. Here are two input scripts: the first is the file I pass to kmneci to generate the popsfile for the ground state, and the second is the file I use to evolve that popsfile with the realtime block (again with kmneci).
system hubbard k-space
electrons 4
nonuniformrandexcits pchb delocalised
lattice square 2 2
U 0.1
B 1
# open-bc open
endsys
calc
nmcyc 10000
# for reproducibility
seed 8
totalWalkers 50000
diagShift 0
tau-values \
start user-defined 0.002 \
min 0.001 \
max 0.003
# use the initiator method
truncinitiator
addtoinitiator 3
methods
method vertex fcimc
endmethods
endcalc
logging
hdf5-pops
popsfile
endlog
And the second:
system hubbard k-space
electrons 4
nonuniformrandexcits pchb delocalised
lattice square 2 2
U 0.1
B 1
# open-bc open
endsys
calc
readpops
nmcyc 1000
# for reproducibility
seed 8
totalWalkers 50000
diagShift 0
tau-values \
start from-popsfile
#tau-search \
#algorithm conventional \
#stop-condition no-change 500
# use the initiator method
truncinitiator
addtoinitiator 3
methods
method vertex fcimc
endmethods
endcalc
realtime
greater 11 11
noshift
rotate-time 0
dynamic-rotation
stabilize-walkers
log-trajectory
#rt-pops
endrealtime
logging
hdf5-pops
popsfile
endlog
end
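The two-step workflow above (a ground-state kmneci run that writes popsfile.h5, followed by a real-time run that reads it back via readpops) could be driven by a small script along these lines. The binary name kmneci, the MPI launcher, the process count, and the input filenames are assumptions based on this thread, not fixed by NECI itself:

```python
# Sketch of the two-step workflow: ground state first, then
# real-time evolution reading the ground-state popsfile.
import subprocess

def kmneci_cmd(inp, nproc=4, launcher="mpirun", binary="kmneci"):
    """Build the command line for one kmneci run."""
    return [launcher, "-np", str(nproc), binary, inp]

def run_workflow(dry_run=True):
    # Step 1: ground-state run (writes popsfile.h5 via the logging block).
    # Step 2: real-time run (readpops + "tau-values start from-popsfile").
    cmds = [kmneci_cmd("groundstate.inp"), kmneci_cmd("realtime.inp")]
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds
```

With `dry_run=True` the function only returns the command lines, which is convenient for checking the invocation before submitting it to the cluster.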
Thank you very much for your input. I was not able to reproduce the issue you describe. In my tests the fciqmc_stats file was correctly present after the real-time execution, with the Green's function columns included. I am using OpenMPI 3.1.6 and GCC 7.5 with glibc 2.26. Which compiler are you currently using?
Thank you for the response. Your comments nudged me to take a closer look at the compilers I had used during the build process on the cluster I am using; after recompiling, the issue has been resolved.
It turns out this was not completely resolved. Using some example realtime .inp files from your GitHub repository I managed to produce a fciqmc_stats file for the first time, but with my own .inp files listed above the fciqmc_stats file is still not written. I am using hpcx-mpi 4.1.5 as recommended on my cluster (the OSCAR computing cluster at Brown University). Open MPI 4.1.2 is available on the cluster but is not recommended, and I do not believe I will be able to use Open MPI 3.1.6 there. To your knowledge, is there an issue with later versions of Open MPI? I can try recompiling with the more recent version.
I just tested your input with:
In all cases the fciqmc_stats file is printed without issues. Which compiler are you using? Which compiler does hpcx-mpi wrap?
I believe the compiler is gcc. This is the output of ompi_info:
Package: Open MPI psaluja@login009 Distribution
Open MPI: 4.1.5rc2
Open MPI repo revision: v4.1.5rc1-17-gdb10576f40
Open MPI release date: Unreleased developer copy
Open RTE: 4.1.5rc2
Open RTE repo revision: v4.1.5rc1-17-gdb10576f40
Open RTE release date: Unreleased developer copy
OPAL: 4.1.5rc2
OPAL repo revision: v4.1.5rc1-17-gdb10576f40
OPAL release date: Unreleased developer copy
MPI API: 3.1.0
Ident string: 4.1.5rc2
Prefix: /oscar/runtime/software/hpcx-mpi/4.1.5rc2/hpcx-ompi/
Configured architecture: x86_64-pc-linux-gnu
Configure host: login009
Configured by: psaluja
Configured on: Fri Nov 3 17:08:16 UTC 2023
Configure host: login009
Configure command line: '--prefix=/oscar/runtime/software/hpcx-mpi/4.1.5rc2/src/hpcx-v2.16-gcc-mlnx_ofed-redhat9-cuda12-gdrcopy2-nccl2.18-x86_64/hpcx-ompi'
'--with-hcoll=/oscar/runtime/software/hpcx-mpi/4.1.5rc2/src/hpcx-v2.16-gcc-mlnx_ofed-redhat9-cuda12-gdrcopy2-nccl2.18-x86_64/hcoll'
'--with-ucx=/oscar/runtime/software/hpcx-mpi/4.1.5rc2/src/hpcx-v2.16-gcc-mlnx_ofed-redhat9-cuda12-gdrcopy2-nccl2.18-x86_64/ucx'
'--with-platform=contrib/platform/mellanox/optimized'
'--with-pmi=/usr' '--with-pmi-libdir=/lib64'
'--with-pmix=/usr' '--with-pmix-libdir=/lib64'
'--with-libevent' '--with-singularity'
'--with-slurm' '--with-hwloc'
Built by: psaluja
Built on: Fri Nov 3 05:16:49 PM UTC 2023
Built host: login009
C bindings: yes
C++ bindings: no
Fort mpif.h: yes (all)
Fort use mpi: yes (full: ignore TKR)
Fort use mpi size: deprecated-ompi-info-value
Fort use mpi_f08: yes
Fort mpi_f08 compliance: The mpi_f08 module is available, but due to
limitations in the gfortran compiler and/or Open
MPI, does not support the following: array
subsections, direct passthru (where possible) to
underlying Open MPI's C functionality
Fort mpi_f08 subarrays: no
Java bindings: no
Wrapper compiler rpath: runpath
C compiler: gcc
C compiler absolute: /usr/bin/gcc
C compiler family name: GNU
C compiler version: 11.3.1
C++ compiler: g++
C++ compiler absolute: /usr/bin/g++
Fort compiler: gfortran
Fort compiler abs: /usr/bin/gfortran
Fort ignore TKR: yes (!GCC$ ATTRIBUTES NO_ARG_CHECK ::)
Fort 08 assumed shape: yes
Fort optional args: yes
Fort INTERFACE: yes
Fort ISO_FORTRAN_ENV: yes
Fort STORAGE_SIZE: yes
Fort BIND(C) (all): yes
Fort ISO_C_BINDING: yes
Fort SUBROUTINE BIND(C): yes
Fort TYPE,BIND(C): yes
Fort T,BIND(C,name="a"): yes
Fort PRIVATE: yes
Fort PROTECTED: yes
Fort ABSTRACT: yes
Fort ASYNCHRONOUS: yes
Fort PROCEDURE: yes
Fort USE...ONLY: yes
Fort C_FUNLOC: yes
Fort f08 using wrappers: yes
Fort MPI_SIZEOF: yes
C profiling: yes
C++ profiling: no
Fort mpif.h profiling: yes
Fort use mpi profiling: yes
Fort use mpi_f08 prof: yes
C++ exceptions: no
Thread support: posix (MPI_THREAD_MULTIPLE: yes, OPAL support: yes,
OMPI progress: no, ORTE progress: yes, Event lib:
yes)
Sparse Groups: no
Internal debug support: no
MPI interface warnings: yes
MPI parameter check: never
Memory profiling support: no
Memory debugging support: no
dl support: yes
Heterogeneous support: no
mpirun default --prefix: yes
MPI_WTIME support: native
Symbol vis. support: yes
Host topology support: yes
IPv6 support: no
MPI1 compatibility: no
MPI extensions: affinity, cuda, pcollreq
FT Checkpoint support: no (checkpoint thread: no)
C/R Enabled Debugging: no
MPI_MAX_PROCESSOR_NAME: 256
MPI_MAX_ERROR_STRING: 256
MPI_MAX_OBJECT_NAME: 64
MPI_MAX_INFO_KEY: 36
MPI_MAX_INFO_VAL: 256
MPI_MAX_PORT_NAME: 1024
MPI_MAX_DATAREP_STRING: 128
Thanks for the info. I tested a configuration as close to yours as I could and everything was fine. I think I may have figured out the origin of the issue: are you perhaps running the Green's function calculation in the same directory as the ground-state one? For the ground-state run, neci checks at the beginning of the execution whether fciqmc_stats already exists and, if so, backs it up to fciqmc_stats.1 and saves the new output under a fresh fciqmc_stats file.
The real-time execution behaves differently. The program creates a new fciqmc_stats if none exists, but appends the output if fciqmc_stats already exists, so the new output ends up at the bottom of the file. Can you run the real-time code without a pre-existing fciqmc_stats file and check again? Or run from another directory? Alternatively, check whether the Green's function data has been appended to your previous fciqmc_stats.
I can run kmneci with the realtime block, but the only file that is output is popsfile.n.h5 (n an integer). Since I need the time evolution of the Green's function, I am confused as to whether I need an additional line/command in the .inp file to obtain the fciqmc_stats file and the initiator_states file, or whether I am meant to use popsfile.n.h5 in some other way. Any help would be appreciated.