odow / SDDP.jl

A JuMP extension for Stochastic Dual Dynamic Programming
https://sddp.dev

Question Regarding State-Dependent Parameter Updates in SDDP.jl #733

Closed mcwaga closed 6 months ago

mcwaga commented 7 months ago

Hello SDDP.jl Community,

I am currently exploring SDDP.jl for a somewhat complex stochastic dynamic programming problem, in particular an infinite-horizon model. My model requires updating a parameter K according to different rules depending on the current economic state, with the specifics as follows:

During a recession, the update rule is K' = exp(a1) + K^(b1). In a boom period, it changes to K' = exp(a2) + K^(b2).

A critical aspect of my approach is that if I have the full path of nodes (representing economic states) visited up to a given point, I can update K at any future stage using only this path and the initial value of K. This path-dependent structure means that knowing the sequence of states (boom or recession) is sufficient for projecting K into the future, without needing to know the previous stage's K.
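To make the path-dependence concrete, the update can be replayed by folding the rule over the visited path. This is a plain-Julia illustration only, not SDDP.jl code, and the coefficient values a1, b1, a2, b2 are placeholders:

```julia
# Hypothetical illustration: K' depends only on the current economic state
# and the current K, so the whole trajectory of K can be reconstructed from
# the initial K and the path of visited states.
update(K, state; a1 = 0.1, b1 = 0.9, a2 = 0.2, b2 = 0.95) =
    state == :recession ? exp(a1) + K^b1 : exp(a2) + K^b2

# Replay a path of economic states starting from the initial capital K0.
project(K0, path) = foldl(update, path; init = K0)

project(1.0, [:boom, :recession, :boom])
```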

I'm reaching out to ask if SDDP.jl supports or can accommodate such state-dependent parameter update mechanisms, particularly where the choice of update rule requires knowledge of the path taken through the scenario tree. I'm not entirely sure if this is feasible or if I'm approaching the problem correctly within the context of SDDP.jl.

Thank you very much for your time and for supporting the SDDP.jl project. I'm looking forward to any insights you may have.

It is worth mentioning that another aspect of my model is the inclusion of other random factors, such as employment status, which can either be 0 (unemployed) or 1 (employed), adding another layer of stochasticity to the problem.

Best,

Mateus

odow commented 7 months ago

Can you model the boom/recession process by a Markov chain?

If so, build a Markovian policy graph (with cycle if infinite horizon): https://sddp.dev/stable/tutorial/markov_uncertainty/
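A minimal sketch of such a cyclic (infinite-horizon) Markovian graph, where the transition probabilities, the 0.95 discount factor on the cycle edges, the objective, and the HiGHS solver are all placeholder assumptions:

```julia
using SDDP, HiGHS

# Two Markov nodes (:boom, :recession) connected in a cycle; edge weights
# are (hypothetical) transition probabilities times a 0.95 discount factor.
graph = SDDP.Graph(
    :root_node,
    [:boom, :recession],
    [
        (:root_node => :boom, 0.5),
        (:root_node => :recession, 0.5),
        (:boom => :boom, 0.95 * 0.8),
        (:boom => :recession, 0.95 * 0.2),
        (:recession => :boom, 0.95 * 0.3),
        (:recession => :recession, 0.95 * 0.7),
    ],
)

model = SDDP.PolicyGraph(
    graph;
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, node
    # `node` is :boom or :recession, so the constraints can depend on the
    # current economic state. Note that K' = exp(a) + K^b is nonconvex;
    # SDDP.jl requires convex subproblems, so that update rule would need
    # a convex reformulation or approximation.
    @variable(sp, K >= 0, SDDP.State, initial_value = 1.0)
    @stageobjective(sp, K.out)  # placeholder objective
end
```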

See also: https://onlinelibrary.wiley.com/doi/10.1002/net.21932

mcwaga commented 6 months ago

Sorry for the delay in responding. The Markov chain approach does not seem to work for me, since there is the boom/recession process plus the employed/unemployed process. Also, since I am trying to solve the Krusell-Smith problem (https://www.journals.uchicago.edu/doi/abs/10.1086/250034) with SDDP, I would like to stay as close as possible to their method, which includes the log update...

odow commented 6 months ago

I don't have access to that paper, unfortunately.

since there is the boom/recession process plus the employed/unemployed process.

You can have both a Markovian process for boom/recession and a stagewise-independent process for employment.

But hard to say without a proper formulation of the problem.
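A rough sketch of combining the two: the Markov nodes carry the boom/recession state, and `SDDP.parameterize` draws the employment shock within each node. All probabilities, the wage of 1.0, and the solver choice are hypothetical placeholders:

```julia
using SDDP, HiGHS

# Cyclic boom/recession graph with (hypothetical) transition probabilities
# times a 0.9 discount factor on the cycle edges.
graph = SDDP.Graph(
    :root_node,
    [:boom, :recession],
    [
        (:root_node => :boom, 0.5),
        (:root_node => :recession, 0.5),
        (:boom => :boom, 0.9 * 0.8),
        (:boom => :recession, 0.9 * 0.2),
        (:recession => :boom, 0.9 * 0.3),
        (:recession => :recession, 0.9 * 0.7),
    ],
)

model = SDDP.PolicyGraph(
    graph;
    sense = :Max,
    upper_bound = 100.0,
    optimizer = HiGHS.Optimizer,
) do sp, node
    @variable(sp, K >= 0, SDDP.State, initial_value = 1.0)
    @variable(sp, income)
    # Stagewise-independent employment shock: e = 0 (unemployed) or
    # e = 1 (employed). The probabilities may even depend on the current
    # Markov node, since `node` is known when the subproblem is built.
    P = node == :boom ? [0.1, 0.9] : [0.3, 0.7]
    SDDP.parameterize(sp, [0, 1], P) do e
        JuMP.fix(income, e * 1.0)  # hypothetical wage of 1.0 when employed
    end
    @stageobjective(sp, income)  # placeholder objective
end
```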