econ-ark / DARKolo

fusion projects combining Dolo/DolARK and Econ-ARK

BufferStock chimera improvements #5

Closed sbenthall closed 4 years ago

sbenthall commented 4 years ago

Improvements to the BufferStock DARKolo:

This completes the BufferStock section of this issue: https://github.com/econ-ark/HARK/issues/414

sbenthall commented 4 years ago

I have some questions about how the dolo model gets the transition equation from the model assumptions, specifically about the meaning of the exp(tran) element, which I gather is meant to be $\theta_t$.

albop commented 4 years ago

What are the questions ?

sbenthall commented 4 years ago

The "Upper and Lower Limits of the Marginal Propensity to Consume" part of the BufferStock REMARK depends on some Jupyter extensions to render the LaTeX.

I've been operating under the directive that we are to be building these DARKolo chimeras for Jupyter Lab.

I don't yet know how to do the LaTeX rendering in Jupyter Lab. It seems to have a different set of extensions and different method of extension management. It may be that this functionality is not yet supported.

sbenthall commented 4 years ago

@albop My understanding is that the transition equation is derived from the conditions on the Bellman form of the model.

But I don't understand why exp(tran), which is e^1 (Euler's number e), can be substituted for $\theta_t$.

albop commented 4 years ago

Because the value given in the calibration for tran and perm is actually used only when computing the residual of the equations (but that is nonsensical and should be fixed...). When the model is solved, tran and perm are both normally distributed with zero mean, as specified in the exogenous block. So if tran is normal, exp(tran) is lognormal, distributed around 1 (well, 1 is not exactly the mean, but that's a second-order issue here).
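A quick numerical check of this point (a standalone sketch with an illustrative shock size, not the model's actual calibration):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_tran = 0.1  # illustrative std. dev. for the transitory shock (assumption)
tran = rng.normal(0.0, sigma_tran, size=1_000_000)  # zero-mean normal, as in the exogenous block
theta = np.exp(tran)  # lognormal transitory shock, clustered around 1

# The exact mean of exp(tran) is exp(sigma^2 / 2), slightly above 1 --
# the "second-order issue" mentioned above.
print(theta.mean())               # ≈ 1.005
print(np.exp(sigma_tran**2 / 2))  # ≈ 1.0050
```

For a small sigma the gap from 1 is on the order of sigma^2/2, which is why treating exp(tran) as "distributed around 1" is harmless here.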

sbenthall commented 4 years ago

@albop Is there a place in the dolo documentation where I could have looked to find this out for myself?

but that is nonsensical and should be fixed...

Do I understand correctly from this that the design here is still in flux?

When the model is solved, tran and perm are both normally distributed with zero mean, as specified in the exogenous block.

Is this determined by an implicit relationship between \sigma_tran and tran? While this is very concise, I would find it much more intuitive if the relationship between tran and \sigma_tran were explicit in the markup.

albop commented 4 years ago

Not really. Here the unexpected behavior comes from the fact that the values in the calibration for all exogenous shocks are ignored for all practical purposes. The improvement here would be to automatically use values from the exogenous process when checking the steady state, rather than the values in the calibration section, which are not necessarily in sync. It's not so clear-cut, though, as some exogenous processes don't have a clear default value. As for sigma_tran, it is just an independent scalar defined in the calibration section. Like all such values, it can be used to define the exogenous process.

llorracc commented 4 years ago

I didn't think about the fact that the MPC figure in BST requires a full native install of LaTeX on the host machine. That is unavoidable for the BST REMARK, because it is actually the source from which the figures for the paper are generated, and the figure needs to have the same notation as the paper. (That notation includes the \underline command, which is part of the AMS set of packages that are not natively supported by matplotlib.)

For the chimera, you should just change the labels to avoid use of \underline and anything else not supported by out-of-the-box matplotlib. The other offender I can think of is my beloved $\Thorn$ character; you can replace it with $\Phi$.


sbenthall commented 4 years ago

@llorracc "Upper and Lower Limits of the Marginal Propensity to Consume" from the REMARK is now in the DARKolo chimera in master. I had to make some changes as you say to get the LaTeX rendering to work.

Also, the graph is not showing the same results as in the REMARK, though I tried to do nothing substantive to it, just replace baseEx_inf with model_HARK. You might be able to tell quicker than I what might be going wrong.

llorracc commented 4 years ago

Seb,

Looking at your changes, I realized that I had forgotten to push to remote some changes that I made with Pablo et al last week. He suggested that while we are working on chimeras, we keep the HARK and dolo files separate from each other, and only merge them together when we are finished, so that it is easy for different people to work on HARK vs dolo stuff without interfering.

I've incorporated your changes into this scheme and pushed the results. Also made some edits -- the lower bound of the figure as you computed it was right, it didn't match the BufferStockTheory REMARK figure because the model here is different in having a liquidity constraint.

But, there is one key thing remaining for you to do: As with the plot of the consumption function, I would like a plot of the comparison of the MPC between the HARK model and dolo. You may need to ask Pablo's help in figuring out how to compute the MPC in dolo.

You should also look into improving the figure, which has some wiggles at the bottom that undoubtedly reflect the choices of gridpoints. Solving the model with a denser grid would almost certainly take care of that.


sbenthall commented 4 years ago

Also made some edits -- the lower bound of the figure as you computed it was right, it didn't match the BufferStockTheory REMARK figure because the model here is different in having a liquidity constraint.

Did you mean the upper bound? I see the $\overline{\kappa}$ line is missing from the MPC figure.

llorracc commented 4 years ago

Upper bound (on the vertical axis), as the horizontal-axis variable approaches its lower bound.


sbenthall commented 4 years ago

You should also look into improving the figure, which has some wiggles at the bottom that undoubtedly reflect the choices of gridpoints. Solving the model with a denser grid would almost certainly take care of that.

I see the wiggles, near x = 1, y = 0.4. They are still there even when the solver is run with:

model_HARK.aXtraCount = 5000

Is there another way to change the grid density? Do you have any other theories on what might be causing the wiggles?

llorracc commented 4 years ago

It needs more points at the bottom. I think the extra points go at the top. You’d need to look at the code.


sbenthall commented 4 years ago

It looks like the aXtraGrid is defined as any points above the minimum value, so increasing the grid density with aXtraCount should still smooth out those wiggles.

Looking into the code that sets up the grid: https://github.com/econ-ark/HARK/blob/89f24b6c70ea7fd37579195bfa07ee67ac290427/HARK/ConsumptionSaving/ConsIndShockModel.py#L2616

These are the parameters for the grid:

Parameters
----------
aXtraMin : float
    Minimum value for the a-grid
aXtraMax : float
    Maximum value for the a-grid
aXtraCount : int
    Size of the a-grid
aXtraExtra : [float]
    Extra values for the a-grid
exp_nest : int
    Level of nesting for the exponentially spaced grid

However, there's also a point where the aXtraCount gets reset not from the input parameters, but from an internally computed vector.

https://github.com/econ-ark/HARK/blob/89f24b6c70ea7fd37579195bfa07ee67ac290427/HARK/ConsumptionSaving/ConsIndShockModel.py#L1475

Is it possible that there is a bug in the underlying code? It looks like the use of aXtraCount for the grid density has been recently determined to be confusing, resulting in some code changes.

https://github.com/econ-ark/HARK/commit/d4af60e3be402028fa4b136dc85dfdb214eed362

If this is unexpected behavior, I can try to isolate the issue and ticket it up in the HARK repository to make it easier for @mnwhite to debug it. Or maybe he'd know what I've done wrong just by looking.

sbenthall commented 4 years ago

@albop For computing the MPC on the dolo model: I've been poking around. I see a couple of options:

- Run tabulate with a high step count and compute the approximate derivative from there. The step count will need to be high because the x-axis of this diagram only runs from 0 to 10 or so.

- Try using one of the derivative computation utilities in dolo:

https://github.com/EconForge/dolo/blob/4bad185602fd068cc049b18ab310268807eb6f04/dolo/algos/serial_operations.py#L5

I'm not sure which you'd prefer as best use of the tools.

llorracc commented 4 years ago

I definitely prefer the derivative computation, as a way to demo that.

sbenthall commented 4 years ago

"Taking a derivative of a function" is a good example of the kind of thing that is provided in a more robust and generic way by lower-level scientific computing libraries.

This is the scipy generic numerical implementation: https://docs.scipy.org/doc/scipy/reference/generated/scipy.misc.derivative.html

This is about the sympy symbolic computation of the derivative: https://dev.to/erikwhiting88/calculate-derivative-functions-in-python-h58

Imagining a future where HARK and dolo are merged, we might ask "Which implementation of the derivative would we use? HARK or dolo?" I hope the answer is "Neither. It's better to use the more generic library function that's supported by a wider community".

albop commented 4 years ago

@sbenthall Let me reply first about the easiest way to compute the MPC; I'll follow up afterwards on how that could be improved in an ideal world.

There are different possible definitions of the marginal propensity to consume, but they all depend crucially on the derivative of the consumption function c(m). The solution to the bufferstock model is a decision rule dr(s), where s is a vector which contains only cash on hand. Take a vector s0=np.array([0.2]) (for instance) and a small epsilon=1e-7, and compute (dr(s0+epsilon)-dr(s0))/epsilon. The output is a vector with one element.

This also works with multiple points at once: then s must be a matrix where each line holds a different value for the vector of states. You can use the same formula with s instead of s0 and get a matrix as a result. You could also import the SerialDifferentiableFunction function from dolo.numeric.something and do SerialDifferentiableFunction(dr)(s,diff=True), which would perform essentially the same operation.

Now, you could also use a better formula to compute the derivative or more complicated tools, but given how nicely behaved the consumption function is, I doubt you would gain much from it.
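The recipe above can be sketched as follows; the consumption function here is a hypothetical stand-in for a solved dolo decision rule `dr`, so the snippet is self-contained:

```python
import numpy as np

def dr(s):
    """Stand-in for a dolo decision rule: consumption as a function of
    cash on hand. Any smooth, concave function works for the illustration."""
    return 1.0 - np.exp(-0.5 * s)

epsilon = 1e-7

# MPC at a single point, exactly as described above
s0 = np.array([0.2])
mpc0 = (dr(s0 + epsilon) - dr(s0)) / epsilon

# The same forward-difference formula vectorized over many state points
s = np.linspace(0.1, 10.0, 100)
mpc = (dr(s + epsilon) - dr(s)) / epsilon

# For this stand-in, the exact derivative is known, so we can compare
print(mpc0[0], 0.5 * np.exp(-0.5 * 0.2))
```

With a well-behaved consumption function, this simple forward difference agrees with the analytic derivative to several decimal places, which is the point of the advice above.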

albop commented 4 years ago

"Taking a derivative of a function" is a good example of the kind of thing that is provided in a more robust and generic way by lower-level scientific computing libraries. [...]

I'm all in favour of resorting to the best library for a given task, but one has to understand precisely the nature of this task and its level of genericity. This instance is, I believe, a bad example:

sbenthall commented 4 years ago

Thanks for all this @albop. I'll have an implementation in the DARKolo chimera shortly based on your first recommendation.

As for the discussion of the implementation of differentials:

About the nuances of different implementations of the differential, I am learning a great deal from you and defer to you.

As a principle of software engineering, I believe I'm on solid ground about locating generic mathematical functions in more general libraries.

I wonder if any of these scipy.interpolate packages are more relevant to this particular problem: https://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html

Perhaps there is an additional pedagogical goal here: sometimes the code in HARK and dolo is meant to be instructive in itself? That could be a difficult balance.
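For instance, a spline fitted to tabulated consumption values gives the MPC as its derivative. This is a generic sketch (the grid and the consumption function are illustrative stand-ins, not output from either library):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical tabulated consumption function c(m) on a cash-on-hand grid;
# in practice these points would come from HARK's or dolo's solved policy.
m = np.linspace(0.1, 10.0, 200)
c = 1.0 - np.exp(-0.5 * m)

spline = CubicSpline(m, c)
mpc = spline.derivative()  # another piecewise polynomial: the MPC as a function of m

print(mpc(2.0))  # analytic derivative here is 0.5*exp(-1) ≈ 0.184
```

This keeps the differentiation in a widely maintained library, at the cost of an interpolation step between the solver's output and the derivative.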

sbenthall commented 4 years ago

MPC plot now has dolo line, computed as per @albop's first recommendation.

image

sbenthall commented 4 years ago

@llorracc I've ticketed up the issue with the kinks in the HARK MPC here:

https://github.com/econ-ark/HARK/issues/434

albop commented 4 years ago

hmm, the two MPCs don't look good... At least the HARK one is strictly decreasing. Does the dolo one get smoother if you increase the number of points a lot?

albop commented 4 years ago

Re: scipy.interpolate the reasons to develop an alternative were the following:

sbenthall commented 4 years ago

Does the dolo one get smoother if you increase the number of points a lot?

No, it does not.

sbenthall commented 4 years ago
  • One reason it has to remain an independent library is the use of JIT-generated and compiled functions with Numba.

I see now. Thank you for explaining that.

llorracc commented 4 years ago

@albop, @sbenthall : I briefly discussed the bottom part of the figure in the related thread in which @mnwhite participated as well.

Basically, we need to:

  1. Increase (considerably) the number of points in the discrete approximations to the transitory and permanent shocks;
  2. Use linear splines for the approximation to the decision rules;
  3. Not plot points where the derivative does not exist (that is, where the upper and lower derivatives differ).
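Point 3 can be sketched as follows, with a hypothetical kinked policy standing in for the solved consumption function: compare the forward and backward difference quotients and mask the points where they disagree.

```python
import numpy as np

# Hypothetical policy with a kink at m = 1 (e.g. a binding liquidity constraint)
def c(m):
    return np.where(m < 1.0, m, 1.0 + 0.4 * (m - 1.0))

m = np.linspace(0.0, 3.0, 301)
eps = 1e-6

fwd = (c(m + eps) - c(m)) / eps  # upper (right) difference quotient
bwd = (c(m) - c(m - eps)) / eps  # lower (left) difference quotient

# Keep only points where the two one-sided derivatives agree
smooth = np.abs(fwd - bwd) < 1e-4
mpc = np.where(smooth, 0.5 * (fwd + bwd), np.nan)  # NaN points are skipped by matplotlib

print(np.isnan(mpc).sum())  # 1: only the kink point is masked
```

Plotting `mpc` then shows the MPC on each smooth segment without a spurious intermediate value at the kink.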
albop commented 4 years ago

@sbenthall, I had a quick check with the dolo yaml file. Increasing the number of points removes the non-monotonicities, which makes sense. There are still the same glitches as with HARK, which are similarly due to the discretization of shocks. Here, I would say the main factor is the size of the state space [0,100] relative to the number of points (1000): there are very few points to approximate the region close to the constraints. Linear splines are good because they keep the monotonicity, but they are accurate only by accident. On such a big state space it seems important to have non-regularly spaced grids. I'll open an issue for that.

albop commented 4 years ago

[image: marginal_propensity]

albop commented 4 years ago

Sorry, legend was missing. [image: marginal_propensity, with legend]

llorracc commented 4 years ago

HARK's standard is a triple-exponential grid that starts at the lower bound of the problem. (Like, if the natural borrowing constraint is -0.3, we specify the extra grid as the grid of points above -0.3). Triple exponential in practice seems to do a reasonable job of putting an appropriately dense grid near the bottom where the functions typically have the greatest nonlinearity.

Of course, it would be more elegant to specify the number of degrees of exponentiality; 0 would be linear, 1 would be exponential, 2 would be double-exponential, and the HARK default of 3 would be a commonly chosen example (rather than a hard-wired default).
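The idea can be sketched generically as follows (an illustration of multi-exponential spacing, not HARK's actual grid-construction code):

```python
import numpy as np

def exp_nested_grid(lo, hi, n, nest=3):
    """Grid on [lo, hi] with `nest` levels of exponential spacing:
    nest=0 is linear, 1 exponential, 2 double-exponential, and so on.
    Higher nesting packs more points near `lo`."""
    # Map the endpoints through `nest` layers of log(1 + x), build a
    # linear grid there, then map back through exp(x) - 1.
    a, b = lo, hi
    for _ in range(nest):
        a, b = np.log(1 + a), np.log(1 + b)
    pts = np.linspace(a, b, n)
    for _ in range(nest):
        pts = np.exp(pts) - 1
    return pts

g = exp_nested_grid(0.0, 20.0, 8, nest=3)
# Spacing grows toward the top of the grid: dense near 0, sparse near 20.
print(np.round(g, 3))
```

The hypothetical `exp_nested_grid` helper is written from the description above (0 = linear, 1 = exponential, 2 = double-exponential, ...); the point is that the spacing between consecutive points grows monotonically, concentrating resolution where the policy function is most nonlinear.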


mnwhite commented 4 years ago

It's not a hardwired default. The user can choose the degree of "exponential nesting" in the parameter dictionary for each instance of IndShockConsumerType they make. I think it's ExpNestCount

We've had it that way since 2015, since pre-HARK code that Nate wrote.


llorracc commented 4 years ago

Great. Seems like 3 is the default, but glad to know it's flexible.

This should be easy to port to dol(o/ARK)?


mnwhite commented 4 years ago

Probably, yes. But whoever does it should look carefully at that code, not just translate it directly. I don't think it does exactly what it's supposed to, but this isn't the venue to go deeply into that.


albop commented 4 years ago

Not sure how to port that, but I've been thinking about it for a while. The plan would be to define a new family of grids, where each 1d coordinate is a given nonlinear transform of a linearly spaced grid. The multidimensional grid is then the Cartesian product of the transforms. A decision rule can easily be defined on this grid by nonlinear (re)scaling before and after applying the decision rule. It is a kind of low-tech solution, but possibly more efficient than treating this kind of grid as irregular (since there is no search). I'm not sure about a natural way to code that kind of grid in a yaml file. I'll create an issue on dolark, since this is where we experiment with new decision rule objects.
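In outline, the scheme described above might look like this (a hypothetical sketch, not dolo or dolark code), here with a log transform as the 1d example:

```python
import numpy as np

# Hypothetical 1d nonlinear transform and its inverse (here: log spacing)
phi = np.log1p      # state space -> transformed (linear) space
phi_inv = np.expm1  # transformed space -> state space

# Linearly spaced grid in transformed coordinates; its image in the state
# space is non-regularly spaced (dense near 0), yet lookups stay regular.
z_grid = np.linspace(phi(0.0), phi(100.0), 50)
s_grid = phi_inv(z_grid)

# Policy values at the grid nodes (stand-in for a solved decision rule)
values = np.sqrt(s_grid)

def dr(s):
    """Decision rule on the transformed grid: rescale, then do plain
    regular-grid linear interpolation (no search over irregular points)."""
    return np.interp(phi(s), z_grid, values)

print(dr(np.array([1.0, 10.0, 50.0])))
```

Because the grid is linear in the transformed coordinates, locating the bracketing interval is a constant-time index computation rather than a search, which is the efficiency argument above.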

On Tue, Nov 12, 2019, 4:44 PM Matthew N. White notifications@github.com wrote:

Probably, yes. But whoever does it should look carefully at that code, not just translate it directly. I don't think it does exactly what it's supposed to, but this isn't the venue to go deeply into that.

On Tue, Nov 12, 2019 at 9:20 AM Christopher Llorracc Carroll < notifications@github.com> wrote:

Great. Seems like 3 is the default, but glad to know it's flexible.

This should be easy to port to dol(o/ARK)?

On Tue, Nov 12, 2019 at 7:20 AM Matthew N. White <notifications@github.com> wrote:

It's not a hardwired default. The user can choose the degree of "exponential nesting" in the parameter dictionary for each instance of IndShockConsumerType they make. I think it's ExpNestCount

We've had it that way since 2015, since pre-HARK code that Nate wrote.

On Mon, Nov 11, 2019, 6:52 PM Christopher Llorracc Carroll < notifications@github.com> wrote:

HARK's standard is a triple-exponential grid that starts at the lower bound of the problem. (Like, if the natural borrowing constraint is -0.3, we specify the extra grid as the grid of points above -0.3). Triple exponential in practice seems to do a reasonable job of putting an appropriately dense grid near the bottom where the functions typically have the greatest nonlinearity.

Of course, it would be more elegant to specify the number of degrees of exponentiality; 0 would be linear, 1 would be exponential, 2 would be double-exponential, and the HARK default of 3 would be a commonly chosen example (rather than a hard-wired default).

On Mon, Nov 11, 2019 at 3:02 PM Pablo Winant <notifications@github.com> wrote:

@sbenthall https://github.com/sbenthall, I had a quick check with the dolo yaml file. Increasing the number of points removes the non-monotonicities, which makes sense. There are still the same glitches as with HARK, which are similarly due to the discretization of shocks. Here, I would say the main factor is the size of the state-space [0,100], relative to the number of points (1000). There are very few points to approximate the region close to the constraints. Here linear splines are good because they keep the monotonicity, but they are accurate only by accident. On such a big state-space it seems important to have non-regularly spaced grids. I'll open an issue for that.


sbenthall commented 4 years ago

Ok. I have increased the HARK TranShkCount and PermShkCount.

image

Sounds like fixing the dolo MPC requires adding exponential grids to dolo, which is out of scope of this ticket.

@llorracc I don't understand your points here:

  • Use linear splines for the approximation to the decision rules;
  • Not plot points where the derivative does not exist (that is, the upper and lower derivatives are different)

I believe pyplot.plot is plotting lines between the (x,y) points given to it.

What is gained by not plotting the points where the derivative doesn't exist?

sbenthall commented 4 years ago

It's smoother with 25 shocks and I'll push this change soon.

image
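The effect of the shock count can be illustrated with a generic equiprobable discretization of a mean-one lognormal shock. This is a sketch of the general technique, not HARK's actual code; HARK's `TranShkCount` and `PermShkCount` parameters control the analogous point counts.

```python
import numpy as np
from statistics import NormalDist

def lognormal_equiprobable(n, sigma):
    """n-point equiprobable discretization of a lognormal shock with
    mean 1: cut the underlying normal into n equal-probability bins and
    use each bin's conditional mean of exp(x), computed by simple
    quadrature (a generic sketch, not HARK's implementation)."""
    mu = -0.5 * sigma ** 2                      # so that E[exp(x)] = 1
    nd = NormalDist(mu, sigma)
    inner = [nd.inv_cdf(p) for p in np.linspace(0, 1, n + 1)[1:-1]]
    edges = [mu - 8 * sigma] + inner + [mu + 8 * sigma]  # clip the tails
    pts = []
    for a, b in zip(edges[:-1], edges[1:]):
        x = np.linspace(a, b, 2001)
        w = np.exp(-0.5 * ((x - mu) / sigma) ** 2)  # unnormalized density
        pts.append(float((np.exp(x) * w).sum() / w.sum()))
    return np.array(pts)

shocks = lognormal_equiprobable(25, 0.1)        # 25 equiprobable values
```

More points put mass farther into the tails of the distribution, which is what smooths out the kinks in the consumption function seen above.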

llorracc commented 4 years ago

If the derivative does not exist, the graph should not be depicting a value for the derivative at that point. Depicting a value for something that does not have a value fails to convey the fact that the thing does not actually have a value. So, basically, there should be a big vertical gap between 1 and the (much lower) MPC that applies at the point where the constraint stops binding.


sbenthall commented 4 years ago

So, in the visualization above, there should not be a line segment between what looks like (1, 1) and (1, 4.2) ? (I'm eyeballing it).

Checking it out... what's happening is that there are two consecutive points being plotted: (0.96984785, 1) and (0.97785485, 0.41587754). It's the plotting library and the resolution of the data, not the underlying computation of the derivative.

There are a couple of solutions to plotting discontinuities in functions. They appear to be:

  • insert a nan in the place where you don't want the line
  • plot dots instead of lines

https://stackoverflow.com/questions/10377593/how-to-drop-connecting-lines-where-the-function-is-discontinuous

This is what the latter (dot) solution looks like.

image

I suppose it's also possible, as you say, to find the point of discontinuity and make sure it's included in the x domain with a corresponding nan in the y. Sounds like a more involved feature, though.
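For reference, the nan-insertion approach from that StackOverflow thread could be sketched like this (the jump threshold here is illustrative, not a principled choice):

```python
import numpy as np

def break_jumps(x, y, jump=0.25):
    """Insert a nan wherever consecutive y-values jump by more than
    `jump`, so that pyplot.plot leaves a gap instead of drawing a
    steep connecting segment across the discontinuity."""
    xs, ys = list(x), list(y)
    out_x, out_y = [xs[0]], [ys[0]]
    for i in range(1, len(xs)):
        if abs(ys[i] - ys[i - 1]) > jump:
            out_x.append(0.5 * (xs[i - 1] + xs[i]))
            out_y.append(np.nan)
        out_x.append(xs[i])
        out_y.append(ys[i])
    return np.array(out_x), np.array(out_y)

# e.g. the pair (0.96984785, 1) -> (0.97785485, 0.41587754) gets a gap:
x = [0.95, 0.96984785, 0.97785485, 0.99]
y = [1.0, 1.0, 0.41587754, 0.40]
gx, gy = break_jumps(x, y)
```

With matplotlib, `plt.plot(gx, gy)` then draws two separate segments with a visible break at the nan, since pyplot does not connect across nan values.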

llorracc commented 4 years ago

The dots are OK. It's not worth the (small) extra effort to do it with a connected plot.

This is something where dolARK might need a bit of augmentation to get it right, though.

On Tue, Nov 12, 2019 at 5:57 PM Sebastian Benthall notifications@github.com wrote:

So, in the visualization above, there should not be a line segment between what looks like (1, 1) and (1, 4.2) ? (I'm eyeballing it).

Checking it out...what's happening is that there's two consecutive points being plotted: (0.96984785, 1) and (0.97785485, 0.41587754). It's the plotting library and the resolution of the data, not the underlying computation of the derivative.

There are a couple solutions to plotting discontinuities in functions. They appear to be:

  • insert a nan in the place where you don't want the line
  • plot dots instead of lines

https://stackoverflow.com/questions/10377593/how-to-drop-connecting-lines-where-the-function-is-discontinuous

https://stackoverflow.com/questions/10377593/how-to-drop-connecting-lines-where-the-function-is-discontinuous

This is what the latter (dot) solution looks like.

[image: image] https://user-images.githubusercontent.com/68752/68717819-b8ebe000-0575-11ea-8f62-f06f021a8493.png

I suppose it's also possible as you say to find the point of discontinuity and make sure it's included in the the x domain with a corresponding nan in the y. Sounds like a more involved feature though.

— You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub https://github.com/econ-ark/DARKolo/issues/5?email_source=notifications&email_token=AAKCK72QPWHWJR4R336QBOLQTMYHLA5CNFSM4JJH46CKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOED4H3RQ#issuecomment-553156038, or unsubscribe https://github.com/notifications/unsubscribe-auth/AAKCK75EXPRHJ2DF5ONFGU3QTMYHLANCNFSM4JJH46CA .

--

sbenthall commented 4 years ago

Dots are in. As perfecting this notebook depends on some dolo improvements, I'll close this ticket now. A new one can be made for the BufferStock improvements once the dolo features are worked out.