Closed zasddsgg closed 6 months ago
Hello, could I trouble you to take some time to look at this problem? I have broken down my questions about uncertainty analysis, as I am still not clear about how to use uncertainty analysis in BioSTEAM. Thanks for your help again.
> `1e-6` (or any other tiny number) at the start, then see if adding it will solve the problem if you run into errors.

Thank you for your answer. May I ask you the following questions? Thanks for your help. Wish you a good day.
a) What I want to ask is: where do the codes for HXN (such as `HXN = bst.HeatExchangerNetwork('HXN', T_min_app=5.)`), TEA (`TEA.solve_price(stream)`), LCA (`system.get_net_impact(key=GWP)`), and `system.simulate()` belong relative to the uncertainty analysis and global sensitivity analysis code: before it or after it?
b) If the number of Monte Carlo simulations is 1000, should HXN be executed each time?
c) "Control output" refers to the question of how input parameters, such as loading, inoculum ratio, residence time, and acidulation time, affect yield. For "But you need to set up the algorithms to calculate the outputs before the analysis.", could you give an example?
d) It’s `M301.enzyme_loading = 20` in https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py#L339, while it’s 0.02 in https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/cellulosic/systems/fermentation/saccharification.py#L46. Shouldn't the order of magnitude of the upper and lower limits of a parameter correspond to its baseline? For `T201.sulfuric_acid_loading_per_dry_mass = loading / 1000` (https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py#L316), dividing by 1000 doesn't correspond to the order of magnitude in the base scenario.
e) For question 9, can I replace the original code in https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py#L238-L240 with the following code? Is the following code correct?

```python
U101 = SSCF.U101
D = baseline_uniform(2205, 0.1)
@param(name='Feedstock flow rate', element=feedstock, kind='coupled',
       units='dry-ton/day', baseline=2205, distribution=D)
def set_feedstock_flow_rate(rate):
    feedstock.mass = rate
```
f) For question 10, can I replace the original code in https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py#L251-L258 with the following code? Is the following code correct?

```python
D = baseline_triangle(1, 0.25)
@param(name='TCI ratio', element='TEA', kind='isolated', units='% of baseline',
       baseline=1, distribution=D)
def set_TCI_ratio(new_ratio):
    for unit in lactic_sys.units:
        if hasattr(unit, 'cost_items'):
            for item in unit.cost_items:
                unit.cost_items[item].cost *= new_ratio
```

Besides, does the TCI ratio of 1 need to be set in the baseline process? If so, how do I set it up?
g) For question 11, does your reply mean there is no need to add the code in https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py#L224-L225?
h) For the top questions 7, 8 and 12, could I trouble you to look at them?
a. That HXN code creates the HXN unit; you don't need it in the uncertainty/sensitivity analyses. Those two lines of TEA and LCA code are included as metrics in the model, and the simulate line is embedded in the model evaluation function.
b. Again, whether you want to include heat integration is your choice. If you don't want heat integration, don't include HXN in your system. If you want heat integration, include HXN in your system: when the model you create includes parameters that affect mass/energy flows, the HXN will be resimulated along with the rest of the system.
c. See here.
d. Those modules use different units. The lactic module uses its own pretreatment, fermentation, etc. units. E.g., the lactic module divides the enzyme loading by 1000 in its unit:
https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/1178c33f87d247f2d6fb1852d6f8465ce9a6513f/biorefineries/lactic/_units.py#L128
e. You can.
f. No, because you would change the default value in each simulation. E.g., say you start with a default cost ratio of 1, your first sample is 0.9, and your second is 0.8: your first simulation uses a cost ratio of 1 * 0.9 = 0.9, but your second simulation uses a cost ratio of 0.9 * 0.8 = 0.72, while you want it to be 0.8.
g. Yes, you don't need that code.
h. I replied to 7 and 8. Biogenic CO2 doesn't count; only natural gas and ethanol are from fossil sources. For 12, simulation of the system and calculation of the metrics (i.e., what `simulate_get_MPSP` does) is usually included in the model evaluation function. I wrote those functions because I also want to calculate MPSP in the baseline analysis, outside of the uncertainty/sensitivity analyses.
Thanks for your answer.
a) Does the lactic acid system take HXN into account? Is HXN considered in the uncertainty and sensitivity analyses of the lactic acid model?
b) If I want to consider HXN in the system, all I need to do is create an HXN unit in the system via `HXN = bst.HeatExchangerNetwork('HXN', T_min_app=5.)`, right? There is no need to add HXN-related code in the uncertainty and sensitivity analyses, and if the system is resimulated during those analyses, HXN will be run, right?
So the logic is that I only need to create a system first (including HXN, i.e., the code `HXN = bst.HeatExchangerNetwork('HXN', T_min_app=5.)`), and then I don't need to add the system simulation (i.e., `lactic_sys.simulate()`), TEA, or LCA code inside the system I created. Instead of running the baseline scenario in the system code, I add the uncertainty and sensitivity analysis code (including the code for system simulation, i.e., `lactic_sys.simulate()`, TEA, and LCA) after the system I created. After running the uncertainty and sensitivity analyses, the results of the baseline scenario (including its TEA and LCA), as well as those of the other sampled scenarios, are included in the analysis results, and I can look at the baseline results there. Is my understanding correct?
c) I don't need to add the code in https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py#L45 and https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/analyses/1_uncertainties.py#L38, right?
d) For “refers to how do input parameters, such as loading, inoculum ratio, residence time, acidulation time affect yield”: so instead of controlling the yield output, I get the output through the operation of the system itself, right?
e) But when, for example, loading, inoculum ratio, residence time, or acidulation time change, that seems to have an impact on the back-end yield. Is it assumed that when these parameters change, the back-end yield and product distribution remain unchanged?
f) For “f” above: when I run a Monte Carlo simulation, is each new simulation based on the parameters of the previous simulation (for example, the eighth simulation based on the parameters of the seventh), e.g., for `TCI_ratio`?
g) Where did the ethanol stream come from? I looked at the flow chart, and there was no ethanol in the inlet of the boiler. In addition, there seem to be many other chemicals in the boiler inlet stream; are the CO2 emissions from their combustion not taken into account (i.e., the GWP of the boiler emission stream)? It doesn't look like it's all biogenic carbon. Why does `get_other_materials_GWP` (https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py#L181-L182) include the burning of other materials? It seems to only include the production of other materials. The code I use is as follows:
```python
from biorefineries import lactic as la
import biosteam as bst

la.load(kind='SSCF', print_results=True)
la.lactic_sys.diagram(format='png')
a = la.BT.ins[0]
a.show()  # display the boiler inlet stream's flows
```
a. Yes and yes.
b. Yes.
c. Correct, you don't need to add it.
d. Yes.
e. Yes.
f. Yes.
g. Ethanol is used for esterification. If you think certain emissions are missing, you can add them.
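To summarize the workflow confirmed above: the system (HXN included) is built once, and each Monte Carlo sample then sets the parameter values, resimulates the whole system, and records the metrics (TEA/LCA). Below is a rough, pure-Python sketch of what such an evaluation loop does conceptually; the function names and toy relationships are illustrative only, not the BioSTEAM API:

```python
import random

def simulate_system(params):
    """Stand-in for lactic_sys.simulate() plus TEA/LCA metric calculation;
    a real run would resimulate every unit, HXN included."""
    # Toy relationships for illustration only.
    mpsp = 1.5 - 0.01 * params['enzyme_loading']
    gwp = 5.0 + 0.02 * params['enzyme_loading']
    return {'MPSP': mpsp, 'GWP': gwp}

def evaluate(samples):
    """What a model-evaluation call does conceptually: one full system
    simulation and one metric calculation per sample."""
    table = []
    for params in samples:
        metrics = simulate_system(params)   # simulate() runs inside the loop
        table.append({**params, **metrics}) # inputs and outputs recorded per row
    return table

random.seed(0)
samples = [{'enzyme_loading': random.uniform(10, 30)} for _ in range(1000)]
table = evaluate(samples)  # 1000 simulations; HXN is rerun in each (conceptually)
```

The key point mirrored here is that nothing is precomputed per sample: outputs come from rerunning the system itself, which is why the baseline scenario is just one more evaluated point.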
Last, for e: it is assumed that when loading, inoculum ratio, residence time, or acidulation time change, the back-end yield and product distribution remain unchanged, right?
Hello, I still have some questions after studying the materials about uncertainty analysis in BioSTEAM. May I ask you the following questions? Thanks for your help. Wish you a good day.
1. Do I need to perform HXN when performing global sensitivity analysis and uncertainty analysis? For example, do I need to execute HXN every time when running 1000 Monte Carlo simulations?
2. Do I need to set the output corresponding to the input parameters in advance when doing uncertainty analysis? For example, in Table S5 of the paper “Sustainable Lactic Acid Production from Lignocellulosic Biomass”, do I need to control the output corresponding to the input parameters, or do I only need to change the input parameters (i.e., the output is not controlled manually)?
3. For `M301.enzyme_loading = loading` (https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py#L343), why not `M301.enzyme_loading = loading / 1000`? It’s `M301.enzyme_loading = 0.02` in https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/cellulosic/systems/fermentation/saccharification.py. Why is it divided by 1,000 in `T201.sulfuric_acid_loading_per_dry_mass = loading / 1000` (https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py#L316)?
4. Are the baseline values in the uniform and triangular distributions the values of the parameters in the original process (before performing uncertainty analysis and global sensitivity analysis)?
5. Is the order of magnitude of a variable the same as that of the corresponding variable in the original process? For example, does `shape.Triangle(5, 10, 15)` for `R301.CSL_loading` indicate that the `CSL_loading` of the original process is 10, not 0.001 (https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py#L358-L363)?
6. For “1e-6 is to avoid generating tiny negative flow (e.g., 1e-14)” (https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py#L366), when do I add `1e-6`? Why isn’t `1e-6` added to the remaining units or streams, such as `R301.CSL_loading`? For `D = shape.Triangle(0.75, 0.9, 0.948-1e-6)` (https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py#L352), why is 1e-6 also used there? When should 1e-6 be added? For `min(acetic_yield, 1-1e-6-R301_X[0]-R301_X[2])` (https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py#L393), why is `-1e-6` also used?
7. For `get_onsite_GWP` in https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/__init__.py#L176-L177, why only natural gas and ethanol? It seems that the combustion of the remaining chemicals also releases CO2. Where does the ethanol come from (there is no ethanol in the boiler feed stream)? Why not consider the CO2 emission from the boiler outlet (emission) stream?
8. For `get_other_materials_GWP` (https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py#L181-L182), why does it include the burning of other materials?
9. Why use `cached_flow_rate` in https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py#L238-L240? Why don’t we just use `def set_feedstock_flow_rate(rate): feedstock.mass = rate`? Why use `feedstock.mass *= rate / U101._cached_flow_rate` followed by `U101._cached_flow_rate = rate`? It’s the same feedstock stream, so why isn’t `cached` used in the tutorial (https://biosteam.readthedocs.io/en/latest/tutorial/Uncertainty_and_sensitivity.html#:~:text=lb%20%3D%20feedstock.F_mass%20*%200.9%0Aub%20%3D%20feedstock.F_mass%20*%201.1%0A%40model.parameter(element%3Dfeedstock%2C%20kind%3D%27coupled%27%2C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20distribution%3Dshape.Uniform(lb%2C%20ub))%0Adef%20set_crushing_capacity(capacity)%3A%0A%20%20%20%20feedstock.F_mass%20%3D%20capacity)? When should `cached` be used in the setter function?
10. For `def set_TCI_ratio(new_ratio):` in https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py#L251-L258, why don’t we just use `def set_TCI_ratio(new_ratio): for unit in lactic_sys.units: if hasattr(unit, 'cost_items'): for item in unit.cost_items: unit.cost_items[item].cost *= new_ratio`? Why use the code in https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py#L251-L258?
11. For “A fake parameter serving as a ‘blank’ in sensitivity analysis to capture fluctuations due to converging errors” in https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py#L224-L225, should I add it to my own process? Just the same code? Do I need to change the code, or add any more code?
12. For `lactic_sys.simulate() # need this to initialize some settings` in https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py#L45, do I also need to do this in my own process, that is, run the process first and then perform uncertainty analysis and global sensitivity analysis? For `simulate_get_MPSP()` in https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/analyses/1_uncertainties.py#L44, do I need to simulate MPSP first and then carry out uncertainty analysis? Shouldn’t LCA also be run before performing uncertainty and global sensitivity analysis?
13. `reactors_cost_coefficients.n` is defined in https://biosteam.readthedocs.io/en/latest/tutorial/Uncertainty_and_sensitivity.html#Monte-Carlo:~:text=mid%20%3D%20reactors_cost_coefficients.n, but when the model is called, a base cost of 100000 is input (https://biosteam.readthedocs.io/en/latest/tutorial/Uncertainty_and_sensitivity.html#Monte-Carlo:~:text=model(%5B8%2C%20100000%2C%200.040%2C%200.85%2C%20feedstock.F_mass%5D)). The inputs 100000 and 0.04 (https://biosteam.readthedocs.io/en/latest/tutorial/Uncertainty_and_sensitivity.html#Monte-Carlo:~:text=model(%5B8%2C%20100000%2C%200.040%2C%200.85%2C%20feedstock.F_mass%5D)%20%23%20Returns%20metrics%20(IRR%20and%20utility%20cost)) exceed the lb and ub set earlier in https://biosteam.readthedocs.io/en/latest/tutorial/Uncertainty_and_sensitivity.html#Monte-Carlo:~:text=df_dct%20%3D%20model.get_distribution_summary()%0Adf_dct%5B%27Uniform%27%5D; is that OK?
14. It’s `tau_saccharification = 60` in https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/cellulosic/units.py, but the baseline of `tau_saccharification` in the lactic acid models.py is 24 (https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py#L346)?
15. For esterification conversion, `D = baseline_triangle(1, 0.1)`; for hydrolysis conversion, `D = baseline_triangle(0.8, 0.1)`; for boiler efficiency, `D = baseline_uniform(0.8, 0.1)` (https://github.com/BioSTEAMDevelopmentGroup/Bioindustrial-Park/blob/master/biorefineries/lactic/models.py), but they don’t match Table S5 in the article “Sustainable Lactic Acid Production from Lignocellulosic Biomass”?
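Regarding the “blank” parameter asked about above: the idea, sketched below in plain Python with made-up numbers (not the actual model code), is a setter that deliberately changes nothing. Because it cannot influence the outputs, any sensitivity the analysis attributes to it estimates the noise floor from convergence error, against which the apparent importance of real parameters can be judged:

```python
import random

def set_fake_parameter(value):
    """A 'blank' setter: deliberately has no effect on the system."""
    pass

def pearson(x, y):
    """Plain Pearson correlation, used here as a simple sensitivity index."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(1)
fake_samples = [random.uniform(0, 1) for _ in range(200)]
# Simulated outputs: a fixed MPSP plus tiny convergence noise per run.
outputs = [1.5 + random.gauss(0, 1e-6) for _ in range(200)]

r_blank = pearson(fake_samples, outputs)
# |r_blank| is small: a real parameter whose correlation is comparable to
# the blank's is indistinguishable from solver noise.
```

In a BioSTEAM model this would just be a parameter whose setter body is `pass`, given some distribution, as in the lactic models.py lines cited in question 11.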