wawiesel opened this issue 6 days ago
Here are my reproducibility steps for the w17x17 assembly model (UOX).
git clone https://github.com/wawiesel/olm.git olm2
cd olm2
ls
Sourcing the development script installs requirements into a virtual environment in the venv directory, downloads data files for testing, and runs all tests. This step may take a while.
source dev.sh
cd collection/w17x17
more config.olm.json
The configuration file controls everything, so you should understand what each part does. It is described in the manual: https://scale-olm.readthedocs.io/en/stable/config-file.html.
Here is the unaltered configuration file:
{
  "model": {
    "name": "w17x17",
    "description": "Library Westinghouse 17x17 design",
    "sources": {},
    "revision": ["1.0"],
    "notes": []
  },
  "generate": {
    "_type": "scale.olm.generate.root:jt_expander",
    "template": "model.jt.inp",
    "comp": {
      "_type": "scale.olm.generate.comp:uo2_simple",
      "density": 10.4
    },
    "static": {
      "_type": "scale.olm.generate.static:pass_through",
      "addnux": 4,
      "xslib": "xn252"
    },
    "states": {
      "_type": "scale.olm.generate.states:full_hypercube",
      "coolant_density": [0.723],
      "enrichment": [0.5, 1.5, 2, 3, 4, 5, 6, 7, 8, 8.5],
      "ppm_boron": [630],
      "specific_power": [40]
    },
    "time": {
      "_type": "scale.olm.generate.time:constpower_burndata",
      "gwd_burnups": [
        0.0, 0.04, 1.04, 3.0, 5.0, 7.5, 10.5, 13.5, 16.5, 19.5,
        22.5, 25.5, 28.5, 31.5, 34.5, 37.5, 40.5, 43.5, 46.5, 49.5,
        52.5, 55.5, 58.5, 61.5, 64.5, 67.5, 70.5, 73.5, 76.5, 79.5, 82.5
      ]
    }
  },
  "run": {
    "_type": "scale.olm.run:makefile",
    "dry_run": false
  },
  "assemble": {
    "_type": "scale.olm.assemble:arpdata_txt",
    "fuel_type": "UOX",
    "dim_map": {"mod_dens": "coolant_density", "enrichment": "enrichment"},
    "keep_every": 1
  },
  "check": {
    "_type": "scale.olm.check:sequencer",
    "sequence": [
      {
        "_type": "scale.olm.check:LowOrderConsistency",
        "name": "loc",
        "template": "model/origami/system-uox.jt.inp",
        "target_q1": 0.70,
        "target_q2": 0.95,
        "eps0": 1e-12,
        "epsa": 1e-6,
        "epsr": 1e-3,
        "nuclide_compare": ["0092235", "0094239", "0094240", "0094241", "0094242"]
      }
    ]
  },
  "report": {
    "_type": "scale.olm.report:rst2pdf",
    "template": "report.jt.rst"
  }
}
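The full_hypercube states block takes the Cartesian product of the state-variable grids, so the number of permutations is simply the product of the grid sizes. A quick sanity check with plain shell arithmetic (not an OLM command):

```shell
# Permutations = product of the grid sizes in the config above:
# 1 coolant density x 10 enrichments x 1 boron x 1 specific power
echo $(( 1 * 10 * 1 * 1 ))
# -> 10
```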
This configuration requests 10 different enrichments, which will become 10 different TRITON input files. You can observe these by running just the generate stage of the reactor library creation:
olm create --generate config.olm.json
You should see something like this:
In this case, there are 10 "permutation" directories, one for each point on the grid of statepoints. The other state variables (moderator density and boron) have only one point on their grids, so we get just 10 inputs, one per enrichment. If we had 2 moderator densities, we would get 10x2=20 different permutation inputs. At this point, each permutation should have just a data file data.olm.json and an input file model_<hash>.inp, where <hash> is the last 6 characters of the full hash. Hashes are used so that we don't rerun the same model twice.
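To illustrate the naming convention with the permutation hash from this walkthrough (plain shell, not an OLM command), the input file suffix is just the tail of the permutation directory name:

```shell
# The permutation directory name is the full 32-character hash;
# the input file name uses its last 6 characters.
h=bf1b1f260d7397cf974dac7e0ddb9ac8
printf 'model_%s.inp\n' "$(printf '%s' "$h" | tail -c 6)"
# -> model_db9ac8.inp
```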
ls _work/perms/bf1b1f260d7397cf974dac7e0ddb9ac8
data.olm.json model_db9ac8.inp
The data file data.olm.json has all the data needed for this permutation:
{
  "static": {
    "addnux": 4,
    "xslib": "xn252"
  },
  "comp": {
    "density": 10.4,
    "uo2": {
      "iso": {
        "u235": 0.5,
        "u238": 99.5,
        "u234": 1e-20,
        "u236": 1e-20
      }
    },
  ...
You can see that this is the 0.5 wt% case from the "u235": 0.5 entry.
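As a sanity check on the composition block (plain awk arithmetic, not an OLM command), the isotopic weight fractions sum to 100 wt%, with u234 and u236 seeded at a negligible 1e-20:

```shell
# u235 + u238 + u234 + u236 weight fractions from data.olm.json
awk 'BEGIN { print 0.5 + 99.5 + 1e-20 + 1e-20 }'
# -> 100
```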
Note that OLM is designed so that a single command, olm create config.olm.json, performs all the stages. However, for local testing and debugging it is nice to be able to run individual stages. Each of these calculations can take about 2 hours. They are parallelizable with, for example, -j10 to run all at once. For testing purposes, though, let's reduce the number of statepoints AND the number of burnups we are requesting, so we have some results in minutes.
Let's modify this data in config.olm.json to have only two enrichments, 4 and 8.5, and 3 burnups:
"states": {
"_type": "scale.olm.generate.states:full_hypercube",
"coolant_density": [
0.723
],
"enrichment": [
4,
8.5
],
"ppm_boron": [
630
],
"specific_power": [
40
]
},
"time": {
"_type": "scale.olm.generate.time:constpower_burndata",
"gwd_burnups": [
0.0,
3.0,
5.0
]
}
Now you'll need to run olm create --generate config.olm.json again to update the files in the _work directory. If you don't, there's no way for OLM to know that you reduced the grid. You can run both the generate and run stages with
olm create --generate --run -j4 config.olm.json
This should start only two inputs despite being given 4 cores (-j4), because there are only two enrichments to run now.
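The count of two again follows from the full hypercube, as the product of the reduced grid sizes (plain shell arithmetic):

```shell
# 1 coolant density x 2 enrichments x 1 boron x 1 specific power
echo $(( 1 * 2 * 1 * 1 ))
# -> 2
```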
While it's running, you can inspect the _work directory. The tree command is great for an overview:
tree _work
_work
├── env.olm.json
├── generate.olm.json
└── perms
    ├── 8408e06c7d27b4eee0e966c2ec7f657d
    │   ├── data.olm.json
    │   ├── model_7f657d.inp
    │   ├── model_7f657d.msg
    │   ├── model_7f657d.out
    │   └── model_7f657d.tempdir.path
    ├── 8408feea26683c3d54b386b55adc5ff7
    │   ├── data.olm.json
    │   ├── model_dc5ff7.inp
    │   ├── model_dc5ff7.msg
    │   ├── model_dc5ff7.out
    │   └── model_dc5ff7.tempdir.path
    └── Makefile

4 directories, 13 files
Wherever you see a *.tempdir.path file, it means a calculation is running. You can use tree -f to see full paths. Choose one of the directories with a tempdir.path file and use tail -f to print the last lines and "follow" new lines as they are output:
tail -f _work/perms/8408e06c7d27b4eee0e966c2ec7f657d/model_7f657d.msg
You may see something like this:
=====================================================================================
Outer iteration sweep begins.
Outer Eigen- Eigenvalue Max Flux Max Flux Max Fuel Max Fuel Inners
It. # value Delta Delta Location(r,g) Delta Location(r,g) Cnvrged
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1 1.40025 3.881E-02 1.736E-01 ( 6, 1) 2.901E-01 ( 2691, 0) F
2 1.40029 2.863E-05 1.205E-01 ( 6, 1) 1.170E-02 ( 4504, 0) F
3 1.40029 1.335E-07 1.682E-03 ( 3, 1) 1.020E-03 ( 4333, 0) F
If you watch long enough, you'll see the depletion summaries:
ORIGEN Substep Convergence Summary
Step: 2
Begin-of-Step Time: 75 days
End-of-Step Time: 100 days
We asked for 3 steps, so this indicates we're almost done. When a calculation is done, the tempdir.path file will be deleted.
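The step times in the ORIGEN summary follow directly from constant-power depletion: time equals burnup divided by specific power. A quick check of the 75-day begin-of-step time above (plain awk, not OLM output):

```shell
# 3.0 GWd/MTIHM burnup at a specific power of 40 MW/MTIHM,
# i.e. 0.040 GWd/MTIHM accumulated per day:
awk 'BEGIN { printf "%.0f days\n", 3.0 / 0.040 }'
# -> 75 days
```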
Let's use this space to methodically enumerate and tackle issues running the collection models.