SPECFEM / specfem3d_globe

SPECFEM3D_GLOBE simulates global and regional (continental-scale) seismic wave propagation.
GNU General Public License v3.0

remove static compilation #502

Open komatits opened 8 years ago

komatits commented 8 years ago

From Daniel @danielpeter :

Shall we remove the static compilation part in the globe version? i got some feedback from users and also spoke with the cluster system administrators here. they would prefer having a version which you compile once and then use the binaries for all global/regional simulation setups. especially the cluster people would prefer having a version which they could compile in an optimized way on their system and then their users can just rely on that system available binary, rather than re-compiling their own version.

this would imply that we move from static to dynamic array allocations. this is already done in the 3D_Cartesian version. it might lead to some performance hit. if i remember correctly for the Cartesian version, tests i did showed about a 5% to 10% slow down. we would have to re-evaluate for the globe version, but that should be acceptable.

what is your take on that?

komatits commented 8 years ago

From Matthieu @mpbl :

Sounds like a great idea. I heard the same complaints from a number of users.

komatits commented 8 years ago

Yes, sounds like a good plan; this has become obsolete / not really needed any more, and the flexibility we would gain is worth the 5% to 10% slowdown (also keeping in mind that the GPU version will be unaffected).

Of course if we do this it probably also means that we should reconsider the issue of merging 3D_GLOBE and 3D_Cartesian into a single code, since (keeping the 3D_GLOBE mesher as is) there will be almost no reason left to have two solvers.

(almost all the complexity of 3D_GLOBE is in the mesher, in the solver there are only a few things about rotation and gravity, easy to cut and paste in 3D_Cartesian).

Dimitri.

komatits commented 8 years ago

From Matthieu @mpbl :

A couple of other thoughts:

• Maybe it would be easier, if instead of merging, there is a “core” library that can be used by both Cartesian and Globe

• The current par_file is kind of obfuscated and in a strange format. It would be a lot more user friendly to have something like YAML. There are excellent and simple-to-use parsers (e.g. Boost Property Tree).

komatits commented 8 years ago

I agree with the second point, I have added it to the Git issue. A long time ago with Jeroen we had a student who had developed a Tcl-Tk interface to the Par_file, but for some reason we never switched to it. As you say it could be time to switch to something more modern than ASCII + a text editor (at least as an option). In such a case, the script should also crosscheck different options to see if they are mutually exclusive etc, to avoid having simulations that wait in batch and then stop immediately with a stop statement in read_check_parameter_file() or similar.
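Such a crosscheck script could stay very small. A minimal sketch in Python, with purely illustrative rules (the actual constraints live in read_check_parameter_file() and would have to be mirrored from there; the two rules below are hypothetical examples):

```python
# Sketch of a pre-submission Par_file crosscheck. The two rules below are
# hypothetical illustrations, NOT the actual constraints enforced by
# read_check_parameter_file().

def crosscheck(params):
    """Return a list of human-readable conflicts found in params."""
    errors = []
    # illustrative rule: a dependent flag that needs its parent enabled
    if params.get("MOVIE_COARSE") and not params.get("MOVIE_SURFACE"):
        errors.append("MOVIE_COARSE requires MOVIE_SURFACE")
    # illustrative rule: a value restricted to a fixed set
    if params.get("NCHUNKS") not in (1, 2, 3, 6):
        errors.append("NCHUNKS must be 1, 2, 3 or 6")
    return errors

# run before submitting to the batch queue, not after hours waiting in it
for msg in crosscheck({"MOVIE_COARSE": True, "MOVIE_SURFACE": False, "NCHUNKS": 1}):
    print("Par_file error:", msg)
```

Run as-is, this reports the single MOVIE_COARSE conflict; the point is that the job never needs to reach the cluster to fail.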

I am not so sure about the first point. I agree it would work, but I think it could be more work to maintain two codes + a shared kernel than merging them; since only the mesher really contains different stuff, the solver is almost the same (it takes an existing decomposed mesh and loops on it basically).

Best wishes, Dimitri.

komatits commented 8 years ago

From Matthieu @mpbl :

You can look at https://github.com/mpbl/specfem_mockup for the second point. It is a piece of code I use to avoid running the real specfem when testing various things. It has a json par_file; probably not the most user-friendly format. Parsing is actually quite easy: https://github.com/mpbl/specfem_mockup/blob/master/src/specfem_mockup.cpp#L44

komatits commented 8 years ago

From Daniel @danielpeter :

yes, i totally agree that it would be great to have a simpler simulation input.

regarding the input format however, i think the important part must be the look-and-feel for users. the way you input simulation parameters must be as easy as possible. (it’s less a question of how easy it is for the developer to write a code which reads in these parameters. so i don’t think yaml- or json-format make it much simpler for users to change the input parameter file, unless you use a special editor)

well, if we look at some popular scientific codes (meaning they are usually pretty big users of cluster time):

uses ASCII-input file:

title           = PFV_DNA
; Run parameters
integrator      = md            ; leap-frog integrator
nsteps          = 10000 ; 2 * 500000 = 1000 ps, 2 ns
dt              = 0.002         ; 1 fs
init_step = 0.00
; Output control
nstxout         = 20000         ; save coordinates every 2 ps
nstvout         = 20000         ; save velocities every 2 ps
nstxtcout       = 1000          ; xtc compressed trajectory output every 2 ps
..

uses ASCII-configuration file:

# protocol params
numsteps        1000

# initial config
coordinates     alanin.pdb
temperature     300K
seed            12345

# output params
outputname      /tmp/alanin
binaryoutput    no
..

uses ASCII-input file:

&GLOBAL                  ! section to select the kind of calculation
   RUN_TYPE ENERGY       ! select type of calculation. In this case: ENERGY (=Single point calculation)
&END GLOBAL
&FORCE_EVAL              ! section with parameters and system description
  METHOD FIST            ! Molecular Mechanics method
  &MM                    ! specification of MM parameters 
    &FORCEFIELD          ! parameters needed to describe the potential 
    &SPLINE
..
&control
    pseudo_dir = ’./’,
    outdir=’./tmp/’,
    prefix=’be0001’
    tprnfor = .true.
/ 
&system
ibrav=4, celldm(1)=4.247, celldm(3)=16.0, nat=12, ntyp=1, nbnd=20, occupations=’smearing’, smearing=’marzari-vanderbilt’, degauss=0.05 ecutwfc=22.0
/
&electrons
/
..

well, unless we want to build a GUI, the simple ascii-parfile-format doesn’t look so bad to me...

komatits commented 8 years ago

cc'ing Vadim @vmont for information.

komatits commented 8 years ago

I agree that it is important to keep using ASCII format (only) as input, for portability reasons, and also keeping in mind that many users use automatic tools and scripts to generate Par_files automatically when running a large number of simulations, thus such scripts would stop working.

However creating a script (in Python or similar?) that would make it easier to edit Par_files based on a GUI and that would perform crosschecks could / would help.

Here is the old one in Tcl-Tk from 2000 I was talking about above: specfemGUI9_10_final_rory.tcl.txt

komatits commented 8 years ago

From Hom Nath @homnath :

That is a great idea. To add to Daniel's list, for our image segmentation and reconstruction package (recently started) we use the following file format:

#pre information
preinfo: method='sem', nproc=1, ngllx=3, nglly=3, ngllz=1, nenod=4, ngnod=4, &
inp_path='./input', out_path='./output/'

#mesh information
mesh: xfile='image2d_coord_x', yfile='image2d_coord_y',
zfile='image2d_coord_z', &
confile='image2d_connectivity', idfile='image2d_material_id'

#boundary conditions
bc: uxfile='image2d_ssbcux', uyfile='image2d_ssbcuy',
uzfile='image2d_ssbcuz'

#initial stress
#stress0: type=1, z0=10, s0=0

#traction
#traction: trfile='image2d_trfile'

#material list
material: matfile='lennaBW.vti', type=1, ispart=0

#image parameters
image: eps=0.0001, gam=10.0, eta=10.0

#control parameters
control: cg_tol=1e-5, cg_maxiter=2000, nl_tol=1e-4, nl_maxiter=50,
ntstep=5, dt=1, ninc=1

#save data
save: disp=1, edge=1

This is similar to the YAML format that Matthieu pointed out. In this format some tags and variables may be optional.

In my case, where we have several physics such as glacial rebound, normal modes, cloth simulation, postseismic relaxation, and so on, Matthieu's suggestion of a common library seems to be a great idea!

Best regards, Hom Nath

komatits commented 8 years ago

I actually don’t think that YAML is more complicated than ASCII. It makes it a lot easier to create scripts to generate or edit parameter files, as it is natively supported in Python / Ruby / … and naturally maps to dictionaries. For the checking part, you get at least syntax checking for free.
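For example (a sketch assuming the third-party PyYAML package and a sectioned YAML Par_file like the draft Daniel posts further down this thread), editing a parameter from a script reduces to a dictionary update:

```python
# Sketch: a YAML Par_file maps straight onto nested dictionaries.
# Assumes the third-party PyYAML package; section/parameter names follow
# the draft layout discussed in this thread.
import yaml

par_text = """
mesh:
  NEX_XI: 64
  NEX_ETA: 64
  NPROC_XI: 2
  NPROC_ETA: 2
"""

cfg = yaml.safe_load(par_text)    # parse: three lines including the import
cfg["mesh"]["NEX_XI"] = 128       # edit: a plain dictionary update
cfg["mesh"]["NEX_ETA"] = 128
print(yaml.safe_dump(cfg, default_flow_style=False), end="")
```

The syntax check comes for free here: yaml.safe_load raises an error on malformed input.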

I agree that not breaking users’ existing script is important, but what currently happens when you add a new parameter to the Par_file?

Best Regards, Matthieu @mpbl

komatits commented 8 years ago

If you are considering Python, have you thought about whether you could use Jupyter notebooks in some fashion?

Lorraine Hwang

komatits commented 8 years ago

To add to the feedback already received, I agree with Dimitri. Unless someone is willing to assume full responsibility for this aspect of the code and invest the time that would be required to improve parameter file parsing, make it consistent across the 2D, 3D and 3D_GLOBE versions, and maintain these routines going forward, the file format is probably better left alone.

Best regards, Ryan @rmodrak

komatits commented 8 years ago

One aspect that could be improved more easily, I think, is the choice of parameters themselves. For example, in SPECFEM2D there used to be three different parameters to control the format of the model when there could have been just one, making the code harder to maintain and things more complicated for users. This problem has since been fixed, but similar problems remain. Such issues arise when new features get added in piecemeal fashion. Probably the lead developers should feel free to ask users to submit pull requests in such cases and withhold approval until any issues have been corrected. Isn't this what makes the pull-request function of GitHub so powerful and widely used? Just one related thing to consider.

Thanks, Ryan @rmodrak

komatits commented 8 years ago

I have little to do with SPECFEM directly but I quite strongly agree with Matthieu.

Reasons for this from the top of my head:

(1) Nobody creates a parameter file from scratch. I assume everyone will just take an existing file from the examples and modify the parameters. At that point it IMHO no longer matters if it's ASCII or some other format.

(2) You could create a schema to validate the input files - these also allow for more complex constraints - more importantly users could validate the parameter files BEFORE they run SPECFEM and be sure that the validation is close to identical to the one SPECFEM would perform internally.

(3) Syntax errors can also be caught immediately.

(4) I've written a couple of auto-generators for SPECFEM input files. Having a schema would vastly simplify that as it would be very easy to test the generation without even having to run SPECFEM. This by extension also makes it easier to script and integrate SPECFEM.

(5) Auto-generating scripts have to be updated in any case whenever SPECFEM adds a new parameter. I don't think it's a big issue to just change the output format at that point.

(6) A schema could be used to autogenerate some web or TK GUI that automatically is always valid and up to date.

Regarding the format: I guess the only sensible choices are JSON or YAML. JSON has better schema definitions but YAML officially allows for comments which are kind of crucial (in JSON it depends a bit on the parser).
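A stdlib-only sketch of points (2) and (3): even a tiny hand-rolled schema lets users catch bad values before a job reaches the queue (a real setup would use a schema language such as JSON Schema; the validator and its three covered parameters are only illustrations, with the allowed values taken from this thread's Par_file excerpts):

```python
# Hand-rolled validation of a parameter dictionary against a minimal
# schema; a real implementation would use a schema language such as
# JSON Schema. Only three parameters are covered, for illustration.

SCHEMA = {
    "SIMULATION_TYPE": {"type": int, "allowed": (1, 2, 3)},
    "NCHUNKS":         {"type": int, "allowed": (1, 2, 3, 6)},
    "MODEL":           {"type": str},
}

def validate(params):
    """Return a list of schema violations (empty list means valid)."""
    errors = []
    for name, rule in SCHEMA.items():
        if name not in params:
            errors.append("missing parameter: %s" % name)
        elif not isinstance(params[name], rule["type"]):
            errors.append("%s: expected %s" % (name, rule["type"].__name__))
        elif "allowed" in rule and params[name] not in rule["allowed"]:
            errors.append("%s: %r not in %r" % (name, params[name], rule["allowed"]))
    return errors

print(validate({"SIMULATION_TYPE": 1, "NCHUNKS": 4, "MODEL": "1D_isotropic_prem"}))
# → ['NCHUNKS: 4 not in (1, 2, 3, 6)']
```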

Cheers!

Lion Krischer

komatits commented 8 years ago

Hi Lion,

Makes sense! Thanks for your feedback.

Best wishes, Dimitri.

komatits commented 8 years ago

To Ryan @rmodrak :

Yes, if we do it for one code we need to do it for the two others as well.

(another reason to merge 3D_GLOBE into 3D ;-)

komatits commented 8 years ago

hi Matthieu, hi all,

right, YAML might be a good way to continue and structure the Par_files.

to see a short example for the current globe parameter file, it would then become something like:

#-----------------------------------------------------------
#
# YAML - Simulation input parameters
#
#-----------------------------------------------------------

simulation:
  # forward or adjoint simulation
  SIMULATION_TYPE:    1     # set to 1 for forward simulations, 2 for adjoint simulations for sources, and 3 for kernel simulations
  NOISE_TOMOGRAPHY:   0     # flag of noise tomography, three steps (1,2,3). If earthquake simulation, set it to 0.
  SAVE_FORWARD:       No    # save last frame of forward simulation or not

  # record length in minutes
  RECORD_LENGTH_IN_MINUTES: 2.5d0

mesh:
  # number of chunks (1,2,3 or 6)
  NCHUNKS:            1

  # number of elements at the surface along the two sides of the first chunk
  # (must be multiple of 16 and 8 * multiple of NPROC below)
  NEX_XI:   64
  NEX_ETA:  64

  # number of MPI processors along the two sides of the first chunk
  NPROC_XI:           2
  NPROC_ETA:          2

regional:
  # angular width of the first chunk (not used if full sphere with six chunks)
  ANGULAR_WIDTH_XI_IN_DEGREES:    20.d0   # angular size of a chunk
  ANGULAR_WIDTH_ETA_IN_DEGREES:   20.d0
  CENTER_LATITUDE_IN_DEGREES:     40.d0
  CENTER_LONGITUDE_IN_DEGREES:    25.d0
  GAMMA_ROTATION_AZIMUTH:         0.d0

earth_model:
  # e.g. s20rts_1Dcrust, s362ani_1Dcrust, etc.
  MODEL: '1D_isotropic_prem'

physics:
  # parameters describing the Earth model
  OCEANS:       Yes
  ELLIPTICITY:  Yes
  TOPOGRAPHY:   Yes
  GRAVITY:      Yes
  ROTATION:     Yes
  ATTENUATION:  Yes

etc. unfortunately, the etcetera is getting longer and longer… something Ryan also noticed. and i agree, if we address this issue with the input format, we might also have to address the growth of input parameters in recent years. at the moment, for example, the globe input parameters are:

  SIMULATION_TYPE
  NOISE_TOMOGRAPHY
  SAVE_FORWARD
  NCHUNKS
  ANGULAR_WIDTH_XI_IN_DEGREES
  ANGULAR_WIDTH_ETA_IN_DEGREES
  CENTER_LATITUDE_IN_DEGREES
  CENTER_LONGITUDE_IN_DEGREES
  GAMMA_ROTATION_AZIMUTH
  NEX_XI
  NEX_ETA
  NPROC_XI
  NPROC_ETA
  MODEL
  OCEANS
  ELLIPTICITY
  TOPOGRAPHY
  GRAVITY
  ROTATION
  ATTENUATION
  ABSORBING_CONDITIONS
  RECORD_LENGTH_IN_MINUTES
  PARTIAL_PHYS_DISPERSION_ONLY
  UNDO_ATTENUATION
  MEMORY_INSTALLED_PER_CORE_IN_GB
  PERCENT_OF_MEM_TO_USE_PER_CORE
  EXACT_MASS_MATRIX_FOR_ROTATION
  USE_LDDRK
  INCREASE_CFL_FOR_LDDRK
  RATIO_BY_WHICH_TO_INCREASE_IT
  MOVIE_SURFACE
  MOVIE_VOLUME
  MOVIE_COARSE
  NTSTEP_BETWEEN_FRAMES
  HDUR_MOVIE
  MOVIE_VOLUME_TYPE
  MOVIE_TOP_KM
  MOVIE_BOTTOM_KM
  MOVIE_WEST_DEG
  MOVIE_EAST_DEG
  MOVIE_NORTH_DEG
  MOVIE_SOUTH_DEG
  MOVIE_START
  MOVIE_STOP
  SAVE_MESH_FILES
  NUMBER_OF_RUNS
  NUMBER_OF_THIS_RUN
  LOCAL_PATH
  LOCAL_TMP_PATH
  NTSTEP_BETWEEN_OUTPUT_INFO
  NTSTEP_BETWEEN_OUTPUT_SEISMOS
  NTSTEP_BETWEEN_READ_ADJSRC
  OUTPUT_SEISMOS_ASCII_TEXT
  OUTPUT_SEISMOS_SAC_ALPHANUM
  OUTPUT_SEISMOS_SAC_BINARY
  OUTPUT_SEISMOS_ASDF
  ROTATE_SEISMOGRAMS_RT
  WRITE_SEISMOGRAMS_BY_MASTER
  SAVE_ALL_SEISMOS_IN_ONE_FILE
  USE_BINARY_FOR_LARGE_FILE
  RECEIVERS_CAN_BE_BURIED
  PRINT_SOURCE_TIME_FUNCTION
  READ_ADJSRC_ASDF
  ANISOTROPIC_KL
  SAVE_TRANSVERSE_KL_ONLY
  APPROXIMATE_HESS_KL
  USE_FULL_TISO_MANTLE
  SAVE_SOURCE_MASK
  SAVE_REGULAR_KL
  NUMBER_OF_SIMULTANEOUS_RUNS
  BROADCAST_SAME_MESH_AND_MODEL
  USE_FAILSAFE_MECHANISM
  GPU_MODE
  GPU_RUNTIME
  GPU_PLATFORM
  GPU_DEVICE
  ADIOS_ENABLED
  ADIOS_FOR_FORWARD_ARRAYS
  ADIOS_FOR_MPI_ARRAYS
  ADIOS_FOR_ARRAYS_SOLVER
  ADIOS_FOR_SOLVER_MESHFILES
  ADIOS_FOR_AVS_DX
  ADIOS_FOR_KERNELS
  ADIOS_FOR_MODELS
  ADIOS_FOR_UNDO_ATTENUATION

we got 85 parameters. :(

with YAML at least, we could start structuring them further. also, we could setup a set of default values for a globe, cartesian and 2D version (other codes like BigDFT just use that approach). then the values defined in the Par_file would overwrite those. probably for most user simulations, the Par_file could be kept shorter that way.
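The defaults-plus-overrides idea can be sketched in a few lines of Python (section and parameter names follow the YAML draft above; the merge helper itself is hypothetical):

```python
# Sketch of the BigDFT-style approach described above: ship a complete
# set of defaults, and let a (short) user Par_file override only what
# differs. Section/parameter names follow the YAML draft; the helper
# itself is hypothetical.

DEFAULTS = {
    "simulation": {"SIMULATION_TYPE": 1, "NOISE_TOMOGRAPHY": 0, "SAVE_FORWARD": False},
    "mesh": {"NCHUNKS": 1, "NEX_XI": 64, "NEX_ETA": 64, "NPROC_XI": 2, "NPROC_ETA": 2},
}

def merge(defaults, overrides):
    """Return defaults updated section by section with user overrides."""
    merged = {}
    for section, values in defaults.items():
        merged[section] = dict(values)
        merged[section].update(overrides.get(section, {}))
    return merged

# a user Par_file that only changes the mesh resolution
user_par = {"mesh": {"NEX_XI": 128, "NEX_ETA": 128}}
cfg = merge(DEFAULTS, user_par)
print(cfg["mesh"]["NEX_XI"], cfg["simulation"]["SIMULATION_TYPE"])  # prints: 128 1
```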

best wishes, daniel @danielpeter

ps. and yes, merging as much as possible would help maintaining the code. it is however ugly in github to put the common source code into a submodule and have that submodule as a standalone (CIG?) repository. changes in the sub-repository would affect all other three main repos (2D, 3D, 3D_globe) and could break any of them. main point here is that pull requests to the sub-repo should trigger buildbot and travis in the main-repos. however, i don’t see that happen in github.

pps. and no, i haven’t created a Jupyter notebook example yet, but plan to do something for my next fall semester teaching… :)

komatits commented 8 years ago

We currently use Fortran namelists in Par_file_faults, the SPECFEM3D input file for dynamic rupture simulations. We have been also happy using them in our 2D code SEM2DPACK. Here are some advantages:

It's a Fortran standard.

The compiler does the parsing for you, so it's very light for the programmer.

The syntax is user-friendly. It allows for comments, shorthand notations, default values. You can define default values for individual parameters and for whole namelist blocks. This often results in smaller and less cluttered input files, especially in codes with multiple applications and options. It also makes it easy to introduce new input parameters while keeping compatibility with older input files.

There are python parsers for Fortran namelists (f90nml), if you need to automate processes.

Jean Paul (Pablo) Ampuero.

komatits commented 8 years ago

it looks like we actually have two threads here, and the initial one about the static compilation seems quickly decided. so i’ll go and check if we can make it dynamic. for the second discussion thread about input files and formats, it looks like we can do more debating, but all your points are well taken :)

the last email from Pablo mentions Fortran namelists and that makes much sense especially when you have a somewhat formatted input, like the fault parameters at all fault node locations. i think Percy will acknowledge that for more realistic simulations, you end up writing your own script which produces your Par_file_faults. so it moves the heavy user task of editing the input file directly to modifying your generating script. Also Lion points toward that direction, that as a user you’re not editing the Par_file directly anymore, but use higher-level script/tools to generate them. okay, lots of possibilities and we can discuss also which generating script/tool is the most user friendly…(and probably soon we’ve got a webpage to do that where we can use voice recognition to just say it…)

anyway, to make it easier for higher-level tools/frameworks and generating scripts (and future GUIs), a structured input parameter file based on YAML would be my favourite at the moment. we can make that change for a next upcoming version and move away from the old ascii-format by providing some simple conversion script for older par_files

to that end, i use the EXAMPLES/process_DATA_Par_files_to_update_their_parameters_from_a_master.py already to convert old Par_file formats to the new format when someone is adding a parameter. that python script reads in all parameters into an ordered dictionary already. so it should be fairly easy to output a YAML version from that as well.
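Such a conversion script mainly has to parse the old `PARAMETER = value` lines; a hypothetical sketch (independent of the existing EXAMPLES script, whose internals are not shown here):

```python
# Hypothetical sketch of an old-Par_file-to-YAML converter: parse the
# "NAME = value  # comment" ASCII lines into a dict (insertion-ordered
# in Python 3.7+), then emit plain "NAME: value" YAML lines.
# Section grouping and type conversion are left out.
import re

old_par = """\
# record length in minutes
RECORD_LENGTH_IN_MINUTES        = 2.5d0
NCHUNKS                         = 1
MODEL                           = 1D_isotropic_prem
"""

def parse_old_par(text):
    params = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()     # drop comments
        m = re.match(r"(\w+)\s*=\s*(.+)", line)
        if m:
            params[m.group(1)] = m.group(2).strip()
    return params

for name, value in parse_old_par(old_par).items():
    print("%s: %s" % (name, value))
```

On the three-line sample above this prints `RECORD_LENGTH_IN_MINUTES: 2.5d0`, `NCHUNKS: 1` and `MODEL: 1D_isotropic_prem`.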

best wishes, daniel @danielpeter

komatits commented 8 years ago

In my experience, the strength of Fortran namelists is not in "somewhat formatted inputs", but the opposite: highly unformatted, flexible, adaptable inputs. If an input is very structured (e.g. a velocity model or friction parameters on a grid) the main input file could look like:

  &MATERIAL tag="sediment A", read_from = "my_model_A.txt" , interpolate="linear"/

and then the code would read the values from a separate structured file. For a homogeneous material the input would be:

  &MATERIAL tag="sediment B", cp=2e3, cs=500, rho=2e3, Q=30 /

For a medium with a combination of homogeneous and heterogeneous properties it would be:

  &MATERIAL tag="sediment C", Q=20, rho=2e3, read_from="my_model_C.txt", cp_column=4, cs_column=5,
interpolate="none" /

Cheers,

Jean Paul (Pablo) Ampuero

komatits commented 8 years ago

Jean-Paul, I am a bit confused. The approach of using Fortran definitions requires recompiling your code every time you change your parameters, am I right? It seems to be something you can do for rarely changing parameters (e.g. setup/constants.h) but quite cumbersome for parameters specific to a given simulation.

Matthieu @mpbl

komatits commented 8 years ago

Let us go with more modern formats such as YAML I guess.

komatits commented 8 years ago

not necessarily, we already read in the Par_file_faults without the need to recompile the code. it’s just reading in the configuration file with fortran-code and allocating arrays when needed.

and Pablo is right, namelists give you quite some options to modify things. my point however is, namelists use structuring elements: /, &, .. to make it machine-readable, but less user readable.

a line like:

&MATERIAL tag="sediment A", read_from = "my_model_A.txt" , interpolate="linear"/

in YAML would be:

MATERIAL:
    tag:              'sediment A'
    read_from:   'my_model_A.txt'
    interpolate:   linear

so i would prefer YAML, which uses less of these “technical” elements and makes it more “human-readable” (their slogan).

for a namelist example, i was looking at the tpv5/Par_file_faults example which uses:

..
&BEGIN_FAULT /
&STRESS_TENSOR Sigma=0e0,0e0,0e0,0e0,0e0,0e0/
&INIT_STRESS S1=70.0e6, n1=3, S2=0.0e0,S3=-120.0e6 /
&DIST2D shapeval='square', val = 78.0e6, xc = -7500.0e0, yc =0e0, zc=  -7500.0e0, l=3000.0e0 /
&DIST2D shapeval='square', val = 81.6e6, xc =       0e0, yc =0e0, zc=  -7500.0e0, l=3000.0e0 /
&DIST2D shapeval='square', val = 62.0e6, xc =  7500.0e0, yc =0e0, zc=  -7500.0e0, l=3000.0e0 /
..

i don’t think as a user you want to edit many of these lines. it’s however very short and concise. that’s why it makes more sense for such a fault configuration. the main Par_file however needs fewer of these structures, and more single options/flag choices.

it’s however less a question of how to read it in with fortran code, and more how to make it easier for overlying scripts to automate the process of creating the configuration files. the motivation for the YAML-structured input format would be to facilitate overlying scripting/frameworks, as mentioned by Matthieu, Lion and Hom Nath. since YAML can be easily used in, for example, python with PyYAML (http://pyyaml.org/wiki/PyYAML), reading a Par_file into a dictionary would look like:

#!/usr/bin/env python

import yaml

with open("Par_file.yml", 'r') as ymlfile:
    cfg = yaml.safe_load(ymlfile)

for section in cfg:
    print("section name:", section, " -- number of properties:", len(cfg[section]))
    print(cfg[section])
    print("")

which would readily output you:

$ ./read_Par_file_yaml.py
section name: earth_model  -- number of properties: 1
{'MODEL': '1D_isotropic_prem'}

section name: regional  -- number of properties: 5
{'CENTER_LONGITUDE_IN_DEGREES': '25.d0', 'ANGULAR_WIDTH_ETA_IN_DEGREES': '20.d0', 'GAMMA_ROTATION_AZIMUTH': '0.d0', 'CENTER_LATITUDE_IN_DEGREES': '40.d0', 'ANGULAR_WIDTH_XI_IN_DEGREES': '20.d0'}
..

anyway, the output here is not the goal; it’s the reading into a python dictionary, which takes three lines (including the import), and the possible modifications and outputs, which are the easy part. in my python script EXAMPLES/process_DATA_Par_files_to_update_their_parameters_from_a_master.py it takes about 200 lines to read in the ascii format. (well, not quite a fair comparison, since that script deals with some more options, but it goes in that direction)

a while ago, there was a pyrized version of SPECFEM3D_GLOBE input files:

# example parameter file for the Pyrized version of Specfem 3D Globe
######################################################################
[Specfem3DGlobe]                                  ; general parameters

# model (isotropic_prem, transversely_isotropic_prem,
# iaspei, s20rts, Brian_Savage, Min_Chen)
model = isotropic_prem

######################################################################
[Specfem3DGlobe.mesher]                            ; mesher parameters

# number of chunks (1,2,3 or 6)
nchunks = 6
..

this is, i think, in a so-called INI format, which could be read in with Python's configparser:

#!/usr/bin/env python
import configparser
import io

# Load the configuration file
with open("Par_file.cfg") as f:
    sample_config = f.read()
config = configparser.RawConfigParser(allow_no_value=True)
config.read_file(io.StringIO(sample_config))
..

not quite sure though why this was developed and then died out again at CIG… maybe Dimitri you know?

best wishes, daniel @danielpeter

jpampuero commented 8 years ago

A Fortran namelist file can also be read into a python dictionary with two lines of code (http://f90nml.readthedocs.io/en/latest/):

import f90nml
nml = f90nml.read('Par_file.nml')

I agree with Daniel that a YAML input file has a leaner look and feel. If you also care about how light the Fortran code looks, maybe namelists have an advantage. In my example the code would be:

namelist /MATERIAL/ tag, cp, cs, rho, Q, read_from, cp_col, cs_col, rho_col, Q_col, interpolate 

!set default values
tag = ""
cp = 0d0
...
read_from = ""
cp_col=4
...
interpolate="none"

read(iin,NML=MATERIAL)

! check and process inputs 

Is there a recommended Fortran parser library for YAML?

danielpeter commented 8 years ago

no, YAML (http://yaml.org) has no fortran support listed.

you would probably have to use a C-function wrapper like from LibYAML: http://pyyaml.org/wiki/LibYAML

or modify/use the YAML-parser fortran-module developed within BigDFT: http://bigdft.org/devel-doc/d7/d3b/group__FLIB.html

o Fortran ... Where Art Thou now :) https://www.youtube.com/watch?v=OdYGnAFaeHU

komatits commented 8 years ago

This thread is in fact two different issues; we should split it.