econ-ark / DARKolo

Fusion projects combining Dolo/DolARK and Econ-ARK

BufferStock: is unemployment part of the model? #1

Open sbenthall opened 4 years ago

sbenthall commented 4 years ago

@llorracc I'm confused about the parameterization of the buffer stock model here.

The original paper, and the REMARK, both explicitly model unemployment shocks.

This chimera notebook says "The unemployment (zero-income event) shocks are turned off". This is consistent with the equations given, but not consistent with the parameters given in the first table or in the HARK model code, which still have the UnempPrb and IncUnemp variables.

There is also a note that "An explicit liquidity constraint is added ($c_t \leq m_t$); that is, the consumer is prohibited from borrowing". But I don't believe this constraint is written into the equations above this line.

My reading is that the "For the purposes of this notebook..." section is meant to override what comes prior to it and was perhaps added without editing other code brought in from other sources. But I wanted to double check that I understood the intended model before making edits accordingly and bringing in simulation code.
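
To make the two candidate readings concrete, here is a minimal sketch in HARK's IndShockConsumerType parameter vocabulary (UnempPrb, IncUnemp, and BoroCnstArt are the standard parameter names; the numeric values are placeholders, and the chimera's actual dictionary may differ):

```python
from HARK.ConsumptionSaving.ConsIndShockModel import IndShockConsumerType

# Reading 1: the paper/REMARK model, with an explicit zero-income
# (unemployment) event. The probability here is a placeholder.
paper_params = {
    "UnempPrb": 0.005,  # probability of the zero-income event
    "IncUnemp": 0.0,    # transitory income received in that event
}

# Reading 2: the notebook's text, with the unemployment shock turned
# off and an explicit liquidity constraint (c_t <= m_t) added instead.
notebook_params = {
    "UnempPrb": 0.0,     # zero-income event turned off
    "BoroCnstArt": 0.0,  # artificial borrowing constraint at zero assets
}

# Recent HARK versions fill any unspecified parameters with defaults:
agent = IndShockConsumerType(**notebook_params)
agent.solve()
```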

llorracc commented 4 years ago

Good catch.

The inconsistency reflects the fact that one of the things that was not possible in dolo, and which turned out to take a very substantial amount of work in the creation of dolARK, was the mixing of different kinds of statistical distributions. Dolo could handle lognormal, or it could handle Poisson, but it could not handle a mixture of Poisson and lognormal. At the time I created the BufferStockTheory notebook you are looking at, the comparator to my code was just dolo, so I had to adapt the model from its original description to something that could be solved by the existing (dolo) tools -- which I did by abandoning the small risk of unemployment shocks. (This is an illustration of the tradeoff between general-purpose solution methods, like those in dolo, and the flexibility you have when you hand-craft a solution.)

The downside of restricting the distribution of the shocks, mathematically, is that it means there is some amount of income that people are perfectly certain to receive (whatever is the minimum discrete draw in each period), and the logic of the model says that you should be able to borrow against whatever is your guaranteed minimum income. But that's kind of an uncomfortable place to be, because nobody really has an absolute guarantee of a minimum income (unless they are retired and receiving Social Security benefits). So, the standard thing to do is to couple the existence of a minimum possible level of income with the assumption of a borrowing constraint, so that even if formally the model says that you're guaranteed an income of, say, $20,000 a year, you can't actually borrow, say, $200,000 against that when the interest rate is 10 percent. Implicitly the argument is that the model does not allow for the possibility of default, but default is obviously possible in reality. (And it turns out to be amazingly messy to handle mathematically.)
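
(Spelling out the arithmetic behind those example numbers: with a guaranteed perpetual income flow $y$ and interest rate $r$, the natural borrowing limit is the perpetuity value $y/r = 20{,}000/0.10 = 200{,}000$.)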

One of the things that @albop and his team have done since I made the notebook you are looking at is to figure out how to allow mixtures of distributions. It hadn't really registered with me that this means the notebook can now implement an exact correspondence with the original problem in my BufferStockTheory paper. (BTW, a subtext of the intro to that paper is a complaint about the extent to which the existing theoretical literature had imposed conditions, like explicit liquidity constraints, that were mathematically convenient in order to use off-the-shelf theorems. It's not that I doubt the existence of liquidity constraints; but my experience is that people stop thinking carefully about the problem as soon as constraints are introduced, because they mistakenly think that constraints explain everything. My point is that the logic by which low levels of wealth are associated with a high marginal propensity to consume is a general proposition, not just one that results from the imposition of liquidity constraints.)

sbenthall commented 4 years ago

Thanks for this.

So as far as my task goes, I'll bring the simulation code from the REMARK into this notebook assuming that the model is the same as in the REMARK. Please let me know if that is incorrect. https://github.com/econ-ark/HARK/issues/414

I'm a little confused about what you're saying with respect to minimum discrete draws. My understanding is that the lognormal shocks are multipliers ranging from zero to positive infinity, and so make the income at any period arbitrarily close to 0 with some probability. By "discrete draws", do you mean the lowest value in the discretized approximation of the lognormal function? I'm surprised that that would be an issue. (Indeed, I thought the possibility of income being decimated by a permanent shock was a quite clever way of pulling the possibility of unemployment in without a discretized state.) Maybe I'm misunderstanding something.

I'll risk muddying the topic by mentioning one other question that's come up for me. It regards one of the simplifying assumptions made in your lecture (not sure if it's also in the paper): http://www.econ2.jhu.edu/people/ccarroll/public/LectureNotes/Consumption/TractableBufferStock/#x1-4003r13

I wonder if this assumption, or a similar one, might be better motivated than the lecture notes admit. A big topic in my research orbit is the effect of "automation" on work; I believe the economist's answer is that "automation" is really an increase in labor productivity for the few, resulting in less demand for labor, resulting in unemployment for the rest. Assuming a static demand for productive labor, maybe having income for the employed grow with the chance of unemployment makes sense, since the chance of unemployment would then be tied to labor productivity due to technology improvements. I wonder if this is in the literature (maybe it's already in your paper--I haven't been able to read it all yet).

llorracc commented 4 years ago

My understanding is that the lognormal shocks are multipliers ranging from zero to positive infinity, and so make the income at any period arbitrarily close to 0 with some probability. By "discrete draws", do you mean the lowest value in the discretized approximation of the lognormal function?

Exactly. Any discrete approximation will have a lowest point. Even if the lowest point is, say, 20 percent of annual income, that means that, with an interest rate of zero, the minimum possible discounted value of income is

0.2 + 0.2*0.2 + 0.2*0.2*0.2 + ...

which, with an infinite stream of such realizations, is the geometric series 0.2/(1 - 0.2) = 0.25; so the model says that people can borrow up to a quarter of a year's income. That is a huge difference (in practical terms, comparing to empirical data) from being able to borrow nothing.
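
(A quick plain-Python check of that geometric sum:)

```python
# Worst-case income path: 0.2, 0.2*0.2, 0.2*0.2*0.2, ...
# Its sum is the geometric series 0.2/(1 - 0.2) = 0.25.
pdv_min = sum(0.2 ** n for n in range(1, 60))
print(round(pdv_min, 6))  # 0.25
```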

So, this is exactly a case where almost any discrete approximation to the continuous lognormal will be problematic.
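
To see the "lowest point" concretely, here is a minimal discretization sketch in plain numpy/scipy (HARK and dolo have their own, more careful routines; this is only to show that any finite approximation leaves the minimum draw strictly above zero):

```python
import numpy as np
from scipy import stats

# Sketch: an N-point equiprobable approximation to a mean-one lognormal
# shock, taking the quantile at the midpoint of each equal-probability
# bin. (Illustrative only; not the HARK or dolo discretization code.)
def lognormal_equiprobable(sigma, n_points):
    dist = stats.lognorm(s=sigma, scale=np.exp(-sigma**2 / 2))  # E[shock] = 1
    probs = (np.arange(n_points) + 0.5) / n_points
    return dist.ppf(probs)

nodes = lognormal_equiprobable(sigma=0.1, n_points=7)
print(nodes.min())  # ~0.86: the "guaranteed" minimum draw, nowhere near 0
```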

On your other point:

"I'll risk muddying the topic by mentioning one other question that's come up for me. It regards on of the simplifying assumptions made in your lecture (not sure if it's also in the paper):"

(the paper's assumption is much milder: your unemployment spell is expected to last only one period -- say, a year -- not your entire life)

http://www.econ2.jhu.edu/people/ccarroll/public/LectureNotes/Consumption/TractableBufferStock/#x1-4003r13

I wonder if this assumption, or a similar one, might be better motivated than the lecture notes admit. A big topic in my research orbit is the effect of "automation" on work; I believe the economist's answer is that "automation" is really an increase in labor productivity for the few, resulting in less demand for labor, resulting in unemployment for the rest. Assuming a static demand for productive labor, maybe having income for the employed grow with the chance of unemployment makes sense, since the chance of unemployment would then be tied to labor productivity due to technology improvements. I wonder if this is in the literature (maybe it's already in your paper--I haven't been able to read it all yet).

Many economists are skeptical of the "robots will take our jobs" narrative, since similar narratives have occurred repeatedly in the past. ("Horses will take our jobs," said the infantry of the Iron Age; "the waterwheel will take our jobs," said medieval millworkers; everyone knows about the Luddites (who, actually, were right); and so on through the steam engine, the railroad, the interstate highway, the telephone, the newspaper, and ...). In the end, the outcome was just that people ended up in different jobs -- like research software engineer (though the transition periods have admittedly been difficult sometimes, and for some people).

In any case, there's no disputing the proposition that the risk differs for people at different ages, with different education levels, etc. It's not a computational challenge, now, to solve models that are much more closely calibrated to granular microeconomic data, so for expositional purposes it is wiser not to fight on the realism of the assumption, and instead to say "basically, the qualitative insights from the more realistic models are similar, but much harder to teach because the computational solutions are much more of a black box."


sbenthall commented 4 years ago

So, this is exactly a case where almost any discrete approximation to the continuous lognormal will be problematic.

Thanks for explaining that.

It's not a computational challenge, now, to solve models that are much more closely calibrated to granular microeconomic data

the more realistic models are similar, but much harder to teach because the computational solutions are much more of a black box.

Are you referring to machine learning techniques here? Could you be more specific about which techniques are being used for these 'realistic models'?

expositional purposes

I believe I am following you. I understand and appreciate the need for teaching, exposition, and explanation of the methods encoded in this software project.

llorracc commented 4 years ago

Are you referring to machine learning techniques here? Could you be more specific about which techniques are being used for these 'realistic models'?

No, I'm talking about the standard solution methods in our toolkit (and everyone else's).

The specific result that is available in the tractable model is the analytical formula for the target level of wealth: You can see, explicitly, how it depends on risk aversion, the growth rate, the interest rate, etc.

For computational solutions, you have to just plug in some numbers and get a number out, without any ability to see, for example, the logic for why there is a nonlinear relationship between target wealth and the time preference rate.
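
As a sketch of what "plug in some numbers, get a number out" means here (cFunc is a hypothetical name for a numerically solved consumption function, and the law of motion below approximates the expected shock terms by one):

```python
from scipy.optimize import brentq

# At the target m*, market resources are expected to reproduce
# themselves: E[m_{t+1} | m_t = m*] = m*. With return factor R and
# growth factor Gamma, and expected shocks approximated by one:
def expected_m_next(m, cFunc, R=1.04, Gamma=1.01):
    a = m - cFunc(m)              # end-of-period assets
    return (R / Gamma) * a + 1.0  # expected resources next period

def find_target_wealth(cFunc):
    # A number comes out, but none of the structure -- how the target
    # depends on preferences -- is visible. Hence "black box".
    return brentq(lambda m: expected_m_next(m, cFunc) - m, 1e-6, 100.0)

# Toy consumption rule, purely for illustration:
print(find_target_wealth(lambda m: 0.5 * m))  # ~2.06
```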


sbenthall commented 4 years ago

Ok. I just realized this discussion relates to my next step, which is bringing in simulation material from the BufferStock REMARK into the BufferStock DARKolo chimera.

My understanding now is that there are two different kinds of solutions HARK might offer for a given model:

  • Closed-form solutions. Mathematically proven from the model assumptions. Better for exposition and teaching because the relationships between the variables are explicit.

  • Computational solutions. Generated computationally from the model specification. May involve stochastic and approximate algorithms. Can solve for more realistic or complex models (for which a closed-form solution is undiscovered or impossible). Less good for exposition.

Coming from my background, I would call "computational solutions" either "numerical solutions" or "simulations". I want to confirm these meanings.

If so, then I'm wondering what part of the BufferStockTheory REMARK is simulation. I see what looks like a lot of closed-form solutions and plots based on them. Am I missing something?

In contrast, it looks like the dolo implementation of the BufferStockTheory model is running a simulation in order to find a computational solution: the time_iteration method runs for some time, outputting information about the decreasing error of its search.
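
(For reference, the call pattern I mean looks roughly like this; import paths vary across dolo versions, and the yaml filename is hypothetical:)

```python
from dolo import yaml_import
from dolo.algos.time_iteration import time_iteration

# Load the model specification and solve for the decision rule by
# iterating on the optimality conditions; verbose=True prints the
# shrinking error at each iteration.
model = yaml_import("bufferstock.yaml")  # hypothetical filename
dr = time_iteration(model, verbose=True)
```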

I don't mean to belabor these points, but my sense is that getting synced on these terms is critical for work in computational methods.

llorracc commented 4 years ago


My understanding now is that there are two different kinds of solutions HARK might offer for a given model:

  • Closed-form solutions. Mathematically proven from the model assumptions. Better for exposition and teaching because the relationships between the variables are explicit.

Right. Like a consumption function

$C = \left(r - \rho^{-1}(r-\theta)\right)O$

where $O$ is overall wealth, $r$ is the interest rate, $\rho^{-1} > 0$ is a preference parameter, and $\theta$ is the time preference rate. So you can see transparently that, for example, if $r > \theta$ then a person with a higher $\rho^{-1}$ will consume less, because $-\rho^{-1}(r-\theta)$ is a larger negative number when $\rho^{-1}$ is larger.
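
(A quick numeric illustration of that comparative static, with made-up numbers:)

```python
# C = (r - rho_inv*(r - theta)) * O; with r > theta, a larger rho_inv
# makes the marginal propensity to consume smaller.
def consumption(O, r, theta, rho_inv):
    return (r - rho_inv * (r - theta)) * O

print(consumption(O=10.0, r=0.05, theta=0.02, rho_inv=0.5))  # 0.35
print(consumption(O=10.0, r=0.05, theta=0.02, rho_inv=1.5))  # 0.05 (lower)
```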

  • Computational solutions. Generated computationally from the model specification. May involve stochastic and approximate algorithms. Can solve for more realistic or complex models (for which a closed-form solution is undiscovered or impossible). Less good for exposition.

Coming from my background, I would call "computational solutions" either "numerical solutions" or "simulations". I want to confirm these meanings.

Yes, all of the above is correct. But there are very few cases where there are closed-form solutions when uncertainty is present. (The "TractableBufferStock" model is almost the only example, and even there the closed-form solution is only for the target level of wealth, not, for example, for consumption as a function of wealth.)

If so, then I'm wondering what part of the BufferStockTheory REMARK is simulation. I see what looks like a lot of closed-form solutions and plots based on them. Am I missing something?

Most of what's in BufferStockTheory is mathematical proofs of propositions like "a target level of wealth will exist if a certain condition on parameter values holds true." But the actual numerical value of the target is something that can be obtained only by simulation/computational solution. The tools for obtaining the computational solution have been around for a long time, but the underlying theory for when it will work or fail is the main contribution of the BufferStockTheory paper.

In contrast, it looks like the dolo implementation of the BufferStockTheory model is running a simulation in order to find a computational solution: the time_iteration method runs for some time, outputting information about the decreasing error of its search.

I don't mean to belabor these points, but my sense is that getting synced on these terms is critical for work in computational methods.

We can discuss more this PM. You're on the right track.

sbenthall commented 4 years ago

Ok, so if the DARKolo is exploring a model that is different from the one in the paper (i.e., it doesn't have unemployment shocks), I'd like to edit the DARKolo so that its expository text more closely matches the model, to avoid confusion.

Unless I hear otherwise, I'll add this to my task list.