econ-ark / REMARK

Replications and Explorations Made using the ARK

Fix cgm portfolio #68

Closed Mv77 closed 4 years ago

Mv77 commented 4 years ago

This fixes the CGMPortfolio remark.

The main issues were tools that no longer existed, different treatment of simulation results (the new '.history' property), and complications addressed in https://github.com/econ-ark/HARK/pull/750 and https://github.com/econ-ark/HARK/pull/749.

On top of checking that everything now runs, I checked the exercises in the appendix, which test special cases of the CPM tool for which there is an analytical solution. They now look right. The only puzzling behavior is that the risky share does not converge exactly to the Merton-Samuelson limit: a very tiny gap remains even when using fine grids and discretizations.

llorracc commented 4 years ago

very tiny gap even when using fine grids and discretizations.

Possibly this reflects the truncated integral between the topmost discrete point and infinity. Maybe it goes to zero very slowly ...

mnwhite commented 4 years ago

One way to check convergence to the M-S limit without going to an insane number of integration nodes (like a million) is to implement Gaussian nodes for the lognormal. That should make the expectation more accurate for any given number of nodes (other than one). That's beyond the scope of this PR, though, because it should live in HARK.
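For concreteness, here is a minimal sketch of the idea (not HARK code; the helper name and the parameter values are made up for illustration), using numpy's Gauss-Hermite rule to approximate a lognormal expectation:

import numpy as np

def lognormal_gauss_hermite(n, mu, sigma):
    # Gauss-Hermite nodes/weights for E[f(X)] with X = exp(mu + sigma*Z), Z ~ N(0,1)
    z, w = np.polynomial.hermite.hermgauss(n)
    nodes = np.exp(mu + np.sqrt(2.0) * sigma * z)
    weights = w / np.sqrt(np.pi)  # weights sum to one
    return nodes, weights

mu, sigma = 0.0, 0.2  # illustrative log-return parameters
for n in (3, 7, 15):
    x, p = lognormal_gauss_hermite(n, mu, sigma)
    print(n, p @ x - np.exp(mu + sigma**2 / 2))  # error in E[X] shrinks rapidly in n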

llorracc commented 4 years ago

Agreed that it is beyond the scope of the PR -- something one has to get used to with numerical stuff is knowing when results are "close enough" that further pursuit would be a waste of time. In this case, for example, if

  1. discrepancies are pretty tiny across a variety of different choices of mean and variance; and
  2. discrepancies reliably go down as the number of nodes is increased

you can have a reasonable degree of confidence that the proposition (in this case, numerical convergence to the M-S limit) is true.
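Here is a self-contained toy version of those two checks (hypothetical illustration code, using the mean of a lognormal, whose analytical value is exp(mu + sigma^2/2), as the known limit instead of the M-S share):

import numpy as np
from scipy.stats import norm

def discretized_mean(n, mu, sigma):
    # crude n-point quantile discretization of a lognormal; just for illustration
    probs = (np.arange(n) + 0.5) / n
    return np.exp(mu + sigma * norm.ppf(probs)).mean()

for mu, sigma in [(0.0, 0.1), (0.02, 0.2), (0.05, 0.4)]:  # check 1: vary the parameters
    true_mean = np.exp(mu + sigma**2 / 2)
    gaps = [abs(discretized_mean(n, mu, sigma) - true_mean) for n in (5, 25, 125)]
    print(mu, sigma, gaps)  # check 2: the gap should shrink as n rises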

Mv77 commented 4 years ago

This is the picture in question

https://github.com/Mv77/REMARK/blob/fixCGMPortfolio/REMARKs/CGMPortfolio/Code/Python/Figures/Merton_Samuelson_Limit.jpg

The gap did improve as I increased the number of gridpoints and probability-mass points, but this is as close as that got me.

I also increased the maximum grid point for assets.

Here is the code that produces the figure: https://github.com/Mv77/REMARK/blob/fixCGMPortfolio/REMARKs/CGMPortfolio/Code/Python/Appendix/MertonSamuelson.py. The way of computing the limit had also changed from our initial version (I think we used a HARK tool that has since disappeared). I messed with the expression for computing the limit (log-returns vs return factors, adding the \sigma^2/2 term to the premium) and never managed to get it closer, so I left it as is.

Could this be a numerical/approximation error or do you think there is an analytical error in the limit?

mnwhite commented 4 years ago

What was the largest number of integration nodes you used? I think ConsPortfolioModel.py has a tool for computing the limiting solution for an arbitrary discrete distribution (which can be an approximation of a lognormal); were you using that, or the analytic M-S?


Mv77 commented 4 years ago

I used up to 100 points for the RiskyDstn.

Yes! Checking the class and past code, we were initially using 'RiskyShareLimitFunc', which seems to have been replaced by 'updateShareLimit'. I just used the new 'updateShareLimit' and it works very precisely: the shares do converge to it quickly.

However, the exercise was meant to check whether this limit coincided with the 'premium/(rra*variance)' formula. So I guess using these functions is kind of cheating.

When @MridulS tweaked the remark, he correctly replaced the use of these functions with the analytical formula. The issue is that now, running the code, the shares converge to the formula's limit only to the extent shown in the picture (with the tiny gap).
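For reference, the analytical check being described amounts to this short sketch (the parameter values are illustrative rather than the CGM calibration, and the log-return convention, including the sigma^2/2 term in the premium, is exactly the modeling choice debated above):

import numpy as np

Rfree = 1.02      # risk-free return factor (illustrative)
mu_risky = 0.06   # mean of log risky returns (illustrative)
std_risky = 0.15  # std. dev. of log risky returns (illustrative)
CRRA = 5.0        # relative risk aversion (illustrative)

premium = mu_risky + std_risky**2 / 2 - np.log(Rfree)  # log equity premium
share_MS = premium / (CRRA * std_risky**2)             # 'premium/(rra*variance)'
print(share_MS)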

mnwhite commented 4 years ago

What's happening here is that the equiprobable approximation always undershoots the variance of the true lognormal distribution, so even with 100 nodes it's not picking up the full variance. Thus the limiting share of the approximation is always higher than the M-S limit.
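A quick way to see the variance undershoot (a sketch of an equiprobable conditional-means discretization in the spirit of HARK's lognormal approximation, not its actual code; parameter values are illustrative):

import numpy as np
from scipy.stats import norm

def equiprobable_lognormal(n, mu, sigma):
    # n equiprobable nodes placed at the conditional means of each probability bin
    cuts = norm.ppf(np.linspace(0.0, 1.0, n + 1))
    nodes = np.exp(mu + sigma**2 / 2) * n * (norm.cdf(cuts[1:] - sigma)
                                             - norm.cdf(cuts[:-1] - sigma))
    return nodes, np.full(n, 1.0 / n)

mu, sigma = 0.06, 0.15  # illustrative log-return parameters
true_var = (np.exp(sigma**2) - 1.0) * np.exp(2 * mu + sigma**2)
for n in (5, 50, 100):
    x, p = equiprobable_lognormal(n, mu, sigma)
    print(n, (p @ x**2 - (p @ x)**2) / true_var)  # ratio stays below one

Since the mean is matched exactly by construction, the understated variance is what pushes the approximated limiting share above the M-S value.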

Chris, do we want to show both the "approximated" limiting lower bound and the M-S limit with a "normal" number of nodes, to show the difference? If not, this PR is ready to merge I think.


mnwhite commented 4 years ago

Ugh, that sentence got mangled. The phrase "with a 'normal' number of nodes" was supposed to modify "the 'approximated' limiting lower bound".

llorracc commented 4 years ago

Hmmm, after giving it some thought, here's what I propose we do:

  1. First point to make is that the Merton-Samuelson formula does not require that the distribution be lognormal (or anything else); the lognormal case is convenient because there's an analytical solution in that case, but the actual correct Merton-Samuelson solution in the case where you've got a discrete approximation is the answer you get when you solve the problem numerically.
  2. But this setup constitutes a nice "teachable moment" about:
    • The difference between numerical and analytical solutions
    • The fact that there is a really truly "correct" answer even when the solution method is numerical
    • The point that doing a numerical approximation to a case where there is an analytical solution in the limit is an excellent way to debug your code

So, probably the best way to illustrate it is actually to do the numerical solution with a small number of gridpoints (5? 7?) so that the correct M-S numerical solution to the problem using the approximated distribution is visibly distinct (even if not by much) from the analytical solution you get when you assume a true lognormal. It is then easy (in class, and in the notebook) to make the point that "a good way of double-checking the code is to see whether the numerical solution gets ever closer to the analytical solution as the number of gridpoints gets larger."
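One way to sketch that comparison outside of HARK (hypothetical code that solves the limiting portfolio first-order condition for the discrete approximation directly; not a claim about how updateShareLimit is implemented, and all parameter values are illustrative):

import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def equiprobable_lognormal(n, mu, sigma):
    # equiprobable conditional-means approximation to a lognormal, as above
    cuts = norm.ppf(np.linspace(0.0, 1.0, n + 1))
    nodes = np.exp(mu + sigma**2 / 2) * n * (norm.cdf(cuts[1:] - sigma)
                                             - norm.cdf(cuts[:-1] - sigma))
    return nodes, np.full(n, 1.0 / n)

def limit_share_discrete(nodes, probs, Rfree, CRRA):
    # limiting risky share: solve E[(R - Rf) * (Rf + s*(R - Rf))^(-CRRA)] = 0 for s
    worst = nodes.min()
    s_hi = 0.999 * Rfree / (Rfree - worst) if worst < Rfree else 10.0  # keep returns positive
    foc = lambda s: probs @ ((nodes - Rfree) * (Rfree + s * (nodes - Rfree)) ** (-CRRA))
    return brentq(foc, 1e-6, s_hi)

mu, sigma, Rfree, CRRA = 0.06, 0.15, 1.02, 5.0
share_MS = (mu + sigma**2 / 2 - np.log(Rfree)) / (CRRA * sigma**2)
for n in (5, 50):
    share_n = limit_share_discrete(*equiprobable_lognormal(n, mu, sigma), Rfree, CRRA)
    print(n, share_n, share_MS)  # the two limits should approach each other as n rises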


Mv77 commented 4 years ago

Great, I'll get on it. I will produce two pictures, both showing the analytical and numerical limits but with different numbers of points for the RiskyDstn. The idea is to show that the two limits get closer when there are more points. Will commit in a couple of minutes.

Mv77 commented 4 years ago

The previous commit plots both the MS analytical solution and the exact numerical solution from HARK.

It does so with either 5 or 50 equiprobable points, generating the figures https://github.com/Mv77/REMARK/blob/fixCGMPortfolio/REMARKs/CGMPortfolio/Code/Python/Figures/Merton_Samuelson_Limit_5.jpg and https://github.com/Mv77/REMARK/blob/fixCGMPortfolio/REMARKs/CGMPortfolio/Code/Python/Figures/Merton_Samuelson_Limit_50.jpg, which show that the limits get closer as the number of points increases.

Mv77 commented 4 years ago

The last commit removes the Merton-Samuelson exercise from the remark.

It now lives in the CPM example file in HARK: https://github.com/econ-ark/HARK/blob/master/examples/ConsumptionSaving/example_ConsPortfolioModel.py

I made sure the do_all and tex files work.

I think this is now ready to merge.

MridulS commented 4 years ago

Just for reference, @llorracc: 'nbreproduce --docker econark/econ-ark-notebook do_all.sh' works locally on this too, where do_all.sh is:

python3 -m pip install -U git+https://github.com/econ-ark/hark # this is required as this REMARK needs current master branch
ipython do_ALL.py