Closed: bjpalmer closed this issue 7 months ago.
I got the Kundur two-area test to run using Python. It crashes after running the powerflow calculation. I tried running the same case using the conventional C++ code and it also crashes after the powerflow calculation, although it may be for a different reason. Can anyone tell me which version of the dynamic simulation code I should be using (dsf.x or dsf2.x)?
@bjpalmer can you send us the error messages? If you use dsf.x or dsf2.x to run the case, you may remove the extra raw and dyr files and keep just the one you are running.
I forgot to mention, I eliminated all the raw files except for the Benchmark files. This is what I get when I run dsf.x:
Bus Number Phase Angle Voltage Magnitude
1 0.000000 1.030000
2 -9.766951 1.010000
3 -27.084535 1.030000
4 -37.274247 1.010000
5 -6.462884 1.006450
6 -16.549156 0.978119
7 -24.959148 0.960998
8 -38.833091 0.948586
9 -52.434145 0.971364
10 -44.019435 0.983462
11 -33.710662 1.008258
Monitoring generators:
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash.
[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Signal received
[0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.16.3, Jan 05, 2022
[0]PETSC ERROR: dsf.x on a linux-openmpi-gnu-cxx-complex-opt-so named constance03.pnl.gov by d3g293 Fri Dec 8 10:04:16 2023
[0]PETSC ERROR: Configure options PETSC_ARCH=linux-openmpi-gnu-cxx-complex-opt-so --with-scalar-type=complex --download-superlu_dist --download-superlu --download-parmetis --download-metis --download-suitesparse --download-f2cblaslapack --with-mumps=0 --with-clanguage=c++ --with-fortran=0 --with-fortran-kernels=0 --with-shared-libraries=1 --with-cxx-dialect=C++11 --with-x=0 --with-mpiexec=mpiexec --with-debugging=0
[0]PETSC ERROR: #1 User provided function() at unknown file:0
[0]PETSC ERROR: Run with -malloc_debug to check if memory corruption is causing the crash.
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI COMMUNICATOR 5 DUP FROM 3
with errorcode 59.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
I'm running on a single process using the two-sided runtime (with shared libraries).
Bruce: The code that reads the dyr file from the xml file, readGenerators in dsf_app_module.cpp, is a bit convoluted and screwed up IMHO. There are two ways in which a user can specify the dyr (and raw) file in the xml file, and neither case is being handled correctly; this should be fixed. There is also ambiguity in the xml tags used for the dyr file: <generatorParameters> vs. <generatorParams> (see the sketch below). There should be consistency in this code AND the xml input files.
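To make the tag ambiguity concrete, here is a minimal sketch of the kind of inconsistency in question. Only the <generatorParameters> and <generatorParams> tag names come from this thread; the enclosing block, the <networkConfiguration> tag, and the dyr file name are illustrative placeholders, not the actual input.xml schema.

```xml
<!-- Sketch only: two competing tag names for the dyr file.
     Apart from generatorParameters/generatorParams, the tags and the dyr
     file name below are placeholders, not the real input.xml schema. -->

<!-- Variant A: some input files use this tag name for the dyr file -->
<Dynamic_simulation>
  <networkConfiguration> Benchmark_twoarea_v33.raw </networkConfiguration>
  <generatorParameters> two_area.dyr </generatorParameters>
</Dynamic_simulation>

<!-- Variant B: others use this tag name for the same file, so a reader
     keyed to only one of the two names silently finds nothing -->
<Dynamic_simulation>
  <networkConfiguration> Benchmark_twoarea_v33.raw </networkConfiguration>
  <generatorParams> two_area.dyr </generatorParams>
</Dynamic_simulation>
```

If readGenerators only handles one of these names, the dyr data can silently fail to load for inputs that use the other one.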
To fix this correctly, we need to modify the applications/datasets/input folder so that it uses a consistent format for the xml tag.

Btw, what input.xml file are you using?
If @bjpalmer or @yliu250 can provide me with the problem data set, I will debug the Python interface, or find that the blame lies elsewhere.
I've already modified the input file to run just one raw file (the Benchmark_twoarea_v33.raw file) and the associated .dyr file. This is the file @yliu250 sent me, and it is the file generating the output above.
@yliu250: Yuan, can you please try this branch again and let Bill and Bruce know if there are any issues.
The Python interface does not seem to be the problem here.
The export34 module does not appear to work with the Python interface.