matthieugomez / EconPDEs.jl

Solve non-linear HJB equations.

Solve Kolmogorov Forward Equation together with an HJB #8

Closed MaximilianJHuber closed 5 years ago

MaximilianJHuber commented 5 years ago

Since your implementation calculates finite differences in a fully-implicit fashion, I cannot just solve an eigenvalue problem on the generator matrix to find the stationary distribution over the state space. My thinking was that I should solve for the stationary distribution simultaneously with the HJB, since the policy function and its derivatives may appear in the KFE.

However, I have not been successful in constructing a simple extension to AchdouHanLasryLionsMollModel. I am unsure whether this can be done with EconPDEs at all, because my hand-written FD schemes for KFEs have always needed some sort of normalization in every iteration.

using EconPDEs, Distributions

struct AchdouHanLasryLionsMollModel
    # income process parameters
    κy::Float64 
    ybar::Float64
    σy::Float64

    r::Float64

    # utility parameters
    ρ::Float64  
    γ::Float64

    amin::Float64
    amax::Float64 
end

function AchdouHanLasryLionsMollModel(;κy = 0.1, ybar = 1.0, σy = 0.07, r = 0.03, ρ = 0.05, γ = 2.0, amin = 0.0, amax = 500.0)
    AchdouHanLasryLionsMollModel(κy, ybar, σy, r, ρ, γ, amin, amax)
end

function initialize_state(m::AchdouHanLasryLionsMollModel; yn = 5, an = 50)
    κy = m.κy ; ybar = m.ybar ; σy = m.σy  ; ρ = m.ρ ; γ = m.γ ; amin = m.amin ; amax = m.amax

    distribution = Gamma(2 * κy * ybar / σy^2, σy^2 / (2 * κy))
    ymin = quantile(distribution, 0.001)
    ymax = quantile(distribution, 0.999)
    ys = collect(range(ymin, stop = ymax, length = yn))
    as = collect(range(amin, stop = amax, length = an))
    OrderedDict(:y => ys, :a => as)
end

function initialize_y(m::AchdouHanLasryLionsMollModel, state)

    distribution = Normal((minimum(state[:y]) + maximum(state[:y]))/2, (minimum(state[:y]) + maximum(state[:y]))/8)
    g_prep = [pdf(distribution, y) * 1 / (√a+1) for y in state[:y], a in state[:a]]

    OrderedDict(:v => [(y + m.r * a)^(1-m.γ)/(1-m.γ)/m.ρ for y in state[:y], a in state[:a]],
        :g => g_prep/sum(g_prep))
end

function (m::AchdouHanLasryLionsMollModel)(state, value)
    κy = m.κy ; σy = m.σy ; ybar = m.ybar ; r = m.r ; ρ = m.ρ ; γ = m.γ ; amin = m.amin ; amax = m.amax
    y, a = state.y, state.a

    ymin = minimum(state[:y]); ymax = maximum(state[:y])

    v, vy, va, vyy, vya, vaa = value.v, value.vy, value.va, value.vyy, value.vya, value.vaa
    g, gy, ga, gyy, gya, gaa = value.g, value.gy, value.ga, value.gyy, value.gya, value.gaa
    μy = κy * (ybar - y)
    va = max(va, eps())

    # There is no second derivative at 0 so just specify first order derivative
    c = va^(-1 / γ)
    μa = y + r * a - c

    if (a ≈ amin) && (μa <= 0.0)
        va = (y + r * amin)^(-γ)
        c = y + r * amin
        μa = 0.0
    end

    # this branch is unnecessary when individuals dissave at the top (default)
    if (a ≈ amax) && (μa >= 0.0)
        va = (((ρ - r) / γ + r) * a)^(-γ)
        c = ((ρ - r) / γ + r) * a
        μa = y + (r - ρ) / γ * a
    end

    #     - ∂μa/∂a * g                              - μa*∂g/∂a -∂μy/∂y*g -μy*∂g/∂y + 1/2 *σy^2 *∂^2g/∂y^2
    gt = - (r - (-1 / γ) * va^(-1 / γ - 1) * vaa) * g - μa * ga + κy * g - μy * gy + 1/2 * σy^2 * gyy
    vt = c^(1 - γ) / (1 - γ) + va * μa + vy * μy + 0.5 * vyy * σy^2 - ρ * v
    return (vt, gt), (μy, μa), (v = v, c = c, va = va, vy = vy, y = y, a = a, μa = μa, vaa = vaa)
end

m = AchdouHanLasryLionsMollModel()
state = initialize_state(m)
y0 = initialize_y(m, state)
y, result, distance = pdesolve(m, state, y0, iterations = 20)

Basically, I added the distribution g to the solution object along with its law of motion. I am not sure about the boundary condition on the a axis, though I tried several versions. The solution iteration does not converge, and the result is odd in any case.

MaximilianJHuber commented 5 years ago

On second thought, I could use ForwardDiff to calculate the differential operator and then proceed with the eigenvalue problem, as non-fully-implicit methods do.

matthieugomez commented 5 years ago

I think you should use EconPDEs to solve the problem first. Once you have the drift/volatility of the asset, you can use linear operators to solve for the stationary distribution.

I have a new package, InfinitesimalGenerators.jl, that does just that; however, it only works with one-dimensional Markov processes for now.
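To make the suggestion concrete, here is a minimal sketch of the linear-operator approach (this is not the InfinitesimalGenerators.jl API; the grid, `generator` helper, and the Ornstein–Uhlenbeck example are illustrative assumptions): discretize the generator of dx = μ(x)dt + σ(x)dW with an upwind finite-difference scheme, then recover the stationary density as the null vector of the adjoint.

```julia
using LinearAlgebra

# Hypothetical helper: upwind FD generator of a 1D diffusion on a uniform
# grid, with reflecting boundaries (rows sum to zero by construction).
function generator(xs, μ, σ)
    n = length(xs)
    Δ = xs[2] - xs[1]                     # uniform grid assumed
    A = zeros(n, n)
    for i in 1:n
        m, s2 = μ(xs[i]), σ(xs[i])^2
        # upwind drift: forward difference if μ > 0, backward if μ < 0
        if m > 0 && i < n
            A[i, i] -= m / Δ; A[i, i+1] += m / Δ
        elseif m < 0 && i > 1
            A[i, i] += m / Δ; A[i, i-1] -= m / Δ
        end
        # central second difference, clamped (reflected) at the boundaries
        lo, hi = max(i - 1, 1), min(i + 1, n)
        A[i, lo] += s2 / (2Δ^2)
        A[i, hi] += s2 / (2Δ^2)
        A[i, i]  -= s2 / Δ^2
    end
    A
end

xs = collect(range(-3.0, 3.0, length = 201))
A = generator(xs, x -> -x, x -> 1.0)    # Ornstein–Uhlenbeck as a test case
g = nullspace(A')[:, 1]                 # stationary density solves A'g = 0
g ./= sum(g)                            # normalize to a probability mass
```

Because the rows of `A` sum to zero, `A'` is singular and its null vector is the discrete stationary distribution; no iteration or normalization loop is needed.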

MaximilianJHuber commented 5 years ago

Cool, thanks for your work!

I know most econ papers solve both the HJB and the stationary distribution by this non-fully-implicit method, in which the derivatives of the drift and volatility with respect to the state variables are ignored, which EconPDEs does not do. Do you think there is any use for a similar method for calculating the stationary distribution, i.e. auto-differentiating through the EconPDEs hjb! function to (almost) get a generator that is aware of those derivatives?

jlperla commented 5 years ago

@MaximilianJHuber Depending on how you are doing it, you can jointly solve the HJB and KFE together adding in the normalization of the distribution to the system of equations you are solving.
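A minimal sketch of that normalization idea (the `stationary` helper and the two-state example are illustrative assumptions, not part of any package): the system A'g = 0 is singular, so replace one redundant equation with the constraint that g sums to one and solve the resulting square system directly.

```julia
using LinearAlgebra

# Solve A'g = 0 subject to sum(g) = 1 by overwriting one equation
# of the singular system with the normalization constraint.
function stationary(A)
    n = size(A, 1)
    B = Matrix(A')          # adjoint of the generator
    B[end, :] .= 1.0        # replace the last equation by 1'g = 1
    b = zeros(n); b[end] = 1.0
    B \ b
end

# two-state Markov chain whose generator rows sum to zero
g = stationary([-1.0 1.0; 2.0 -2.0])   # ≈ [2/3, 1/3]
```

In a joint HJB–KFE solve, the same trick works inside the nonlinear system: swap one KFE residual for the normalization equation so the combined system stays square and non-degenerate.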

matthieugomez commented 5 years ago

@MaximilianJHuber As you say, EconPDEs solves non-linear PDEs of the form 0 = f(V) + f(∂V) + f(∂V)∂∂V, whereas Moll et al. solve quasi-linear PDEs of the form 0 = f(V) + m * ∂V + s * ∂∂V.

In any case, the stationary distribution is always given by a linear PDE, 0 = -∂(μg) + 0.5 * ∂∂(σ^2 g). So the tools in EconPDEs are not particularly helpful for this kind of PDE; it is much better to use the linear methods from Moll et al.

Now, in non-stationary models (i.e. models with a time dimension t) it can be useful to alternate between solving the HJB and the KFE (I think that is what @jlperla is referring to). But for now this package focuses on stationary models.

MaximilianJHuber commented 5 years ago

@jlperla thanks for your input! I will try to alter the helper! function and make the gs sum to one.

@matthieugomez I think I understand your point: the approximate generator from the quasi-linear methods is exactly the correct generator for the KBE, and hence its transpose is exactly the correct generator for the KFE. This means my idea of auto-differentiating hjb! is not a good one.