Closed jiweiqi closed 4 years ago
The CTI file is generated from the corresponding gri30.inp file. In our repository, the .inp file has the date GRI-Mech Version 3.0 3/12/99, whereas the online version is GRI-Mech Version 3.0 7/30/99. So it is not surprising that there is some difference. The question remains what, if anything, should be done about this. Do these reactions change results in some cases?
The effect is indeed application-dependent. Since gri30 is frequently used in various demonstrations, in my opinion it would be better to keep it consistent with the latest version.
Did you just happen to notice this, or was the difference important for an application you were using? I'm asking because if we silently change the mechanism (even with an announcement in the release notes) and it changes people's results, we should be aware of that and inform everyone.
I just happened to notice this. I am doing some uncertainty quantification work, in which I have to manually modify the .cti files for other codes. While verifying the pipeline, I noticed this difference.
For now, I do not know of a situation where this could make a difference.
I guess running some tests on ignition delay time and laminar flame speed calculations would reveal any differences.
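Before running full ignition-delay or flame-speed tests, it is worth noting what the change does at the rate-constant level: a doubled pre-exponential factor scales the rate constant uniformly at every temperature, because A enters the modified Arrhenius expression multiplicatively. A minimal sketch (the coefficients below are illustrative placeholders, not the actual reaction 296/297 values):

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius(A, b, Ea, T):
    """Modified Arrhenius rate constant k = A * T**b * exp(-Ea / (R*T))."""
    return A * T**b * math.exp(-Ea / (R * T))

# Hypothetical coefficients -- NOT the real gri30 reaction 296/297 values.
A, b, Ea = 1.0e13, 0.0, 8.0e4

for T in (1000.0, 1500.0, 2000.0):
    k_old = arrhenius(A, b, Ea, T)
    k_new = arrhenius(2.0 * A, b, Ea, T)  # pre-factor doubled, as in the shipped file
    assert math.isclose(k_new / k_old, 2.0)  # ratio is exactly 2 at any T
```

So the rate of that single reaction changes by a flat factor of two; how much that moves global observables like ignition delay still depends on the sensitivity of the mechanism to this reaction, which is what the tests would measure.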
I agree that Cantera should be consistent with the latest online version. Regarding potentially different results, I am not sure how relevant that is, as I do not believe that GRI 3.0 is necessarily state of the art any longer (rather, it is a historic benchmark).
One thing that I wanted to do as part of #653 (updating CODATA and element weights) was run a benchmark suite of common calculations (flame speeds, ignition delays, maybe some echem calculations) to see what differences were generated and by how much for these common global parameters. I never got around to it, mostly because 1) I wasn't sure which metrics would be useful and 2) I'd have to write scripts to run all of that. Nonetheless, I think something similar would be useful in this case...
... run a benchmark suite of common calculations (flame speeds, ignition delays, maybe some echem calculations) to see what differences were generated and by how much for these common global parameters.
I think it's a reasonable suggestion, but imho the important thing is to have unit tests in place that prevent major surprises that are difficult to troubleshoot. That said, I believe this deserves a separate issue :wink:
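Such a guard could be as simple as a test that pins a few global observables to stored reference values and fails loudly when a data-file change moves them. A sketch of the idea, assuming hypothetical reference numbers and a hypothetical `compute_ignition_delay` callable wrapping whatever solver is used:

```python
import math

# Hypothetical reference ignition delays (seconds) keyed by initial
# temperature (K), recorded with the currently shipped mechanism.
REFERENCE_IGNITION_DELAYS = {1200.0: 2.1e-3, 1500.0: 1.8e-4}

def check_against_reference(compute_ignition_delay, rel_tol=1e-3):
    """Fail if any computed ignition delay drifts from its stored reference."""
    for T, tau_ref in REFERENCE_IGNITION_DELAYS.items():
        tau = compute_ignition_delay(T)
        assert math.isclose(tau, tau_ref, rel_tol=rel_tol), (
            f"ignition delay at {T} K moved: {tau} vs reference {tau_ref}"
        )

# A stand-in solver that reproduces the references passes the check.
check_against_reference(lambda T: REFERENCE_IGNITION_DELAYS[T])
```

The tolerance would need tuning per observable, but the point is that a mechanism update then shows up as an explicit, easy-to-troubleshoot test failure rather than a silent change in users' results.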
Hi,
It seems that the gri30 mechanism shipped by Cantera differs from the official one at gri30 in reactions 296/297. The pre-exponential factor A has been multiplied by two in the Cantera version. Any idea why?
https://github.com/Cantera/cantera/blob/265a1860cc6d7e2f7dbdbe2128aac585d85c9f7d/data/inputs/gri30.cti#L2016-L2021