econ-ark / HARK

Heterogeneous Agents Resources & toolKit
Apache License 2.0
328 stars 198 forks

cstwMPCagent reinvented into HARK? #669

Closed sbenthall closed 3 months ago

sbenthall commented 4 years ago

cstwMPCagent is depended on by the Uncertainty and the Saving Rate DEMARK.

It has some special properties that should be thought about: how can we include it in a way that seems less specialized?

BufferStockAgent?

https://github.com/llorracc/cstwMPC/blob/master/Code/Python/cstwMPC.py#L21

mnwhite commented 4 years ago

It doesn't look like the cstwMPC code would run at all. This looks wildly out of date, with references to variables that haven't been used since 2015.


MridulS commented 4 years ago

Where is this repo, https://github.com/llorracc/cstwMPC/, produced from? @llorracc [assuming this repo is also autogenerated]

Before removing it, I tried to match the last update to HARK/cstwMPC against llorracc/cstwMPC. The last commit to llorracc/cstwMPC is from April 12, 2019, so I moved the changes made to HARK/cstwMPC after that date into the new repo llorracc/cstwMPC.

Should I send in a new PR to llorracc/cstwMPC to override all the files which used to be in HARK/cstwMPC?

mnwhite commented 4 years ago

I think so, yes. The version in llorracc doesn't even use packaged HARK. It's the first Python version of cstwMPC I wrote, from late 2015 or early 2016. That version has been almost entirely replaced.


llorracc commented 4 years ago

Yes, the public llorracc/cstwMPC is generated from a private repo, llorracc/cstwMPC-Ur, to which I have just invited Mridul and Seb.

My guess is that the reason the files are out of date is that this was meant to be the “archival” version that reproduces exactly what was produced in the published paper.

Presumably subsequent changes (like using the packaged HARK) were made in the cstwMPC directory that we have recently removed from main-HARK.

Per our discussions, maybe what we should do is:

  1. Restore whatever was the last version of the cstwMPC content before it was removed.
  2. Rename it to, say, BufferStockEconomy
    • Do a search-and-replace for all occurrences of the string “cstwMPC” and replace them with “BufferStock”
  3. Turn it into a REMARK

and then at some point we will find the time to extract the parts of it that are general purpose tools and put them back into HARK.
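The search-and-replace in step 2 could be scripted. A minimal sketch, assuming the renamed copy is a local checkout and that only the contents of Python sources need the substitution (both assumptions, not part of the plan above):

```python
import tempfile
from pathlib import Path

def rename_occurrences(root, old="cstwMPC", new="BufferStock"):
    # Walk the tree and rewrite every Python file containing the old name.
    # The .py filter is an illustrative assumption.
    for path in Path(root).rglob("*.py"):
        text = path.read_text()
        if old in text:
            path.write_text(text.replace(old, new))

# Demo on a throwaway directory rather than a real checkout.
root = tempfile.mkdtemp()
demo = Path(root) / "cstw_demo.py"
demo.write_text("from cstwMPC import cstwMPCagent")
rename_occurrences(root)
print(demo.read_text())  # from BufferStock import BufferStockagent
```

Note that filenames themselves (e.g. cstwMPC.py) would also need renaming; this sketch only touches file contents.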


sbenthall commented 4 years ago

The old cstwMPC code that was in HARK is available in any earlier release: https://github.com/econ-ark/HARK/tree/0.10.6/HARK/cstwMPC

I'm not sure what all the renaming accomplishes. There is already a BufferStockTheory REMARK.

I had been under the impression until recently that the cstwMPC REMARK already existed and was located at https://github.com/llorracc/cstwMPC

Now it sounds like you are telling me that there is another layer of intrigue--that the cstwMPC I thought I knew was a mere decoy, and the real cstwMPC was hidden away in a private repo all along.

Obviously, if you hide the "real code" in a private repo, nobody is going to update it. The rationale for having a generator repository is opaque to me. I recommend reducing the number of repositories and classes in play.

Why not:

sbenthall commented 4 years ago

Ok, Chris has convinced me privately that he was right, and I was wrong.

I think I did not originally see that his proposal implicitly included deprecating the current cstwMPC REMARK code. The new REMARK will become the Single Source of Truth for this functionality.

I suppose this new REMARK will go in the REMARK repository, not in a separate standalone repository.

The remaining issue is the name. I gather that what makes cstwMPC agents special is their awareness of the aggregate economy conditions.

What if the new REMARK were called MarketAwareness? The agent type could be a MarketAwareAgentType. Just spitballing here.

mnwhite commented 4 years ago

There is nothing special about cstwMPC agents. There is no new solution code in there. There is no new model. The version(s) of the model in cstwMPC that use aggregate shocks simply use AggShockConsumerType. The extensions to CobbDouglasEconomy are literally just for calculating some statistics for the cstwMPC paper. Being aware of the aggregate economy is part of a core HARK class.

On Tue, May 5, 2020 at 8:55 AM Sebastian Benthall notifications@github.com wrote:

Ok, Chris has convinced me privately that he was right, and I was wrong.

I think I did not originally see that his proposal included implicitly the deprecation of the current cstwMPC REMARK code. The new REMARK will become the Single Source of Truth for this functionality.

I suppose this new REMARK will go in the REMARK repository, not in a separate standalone repository.

The remaining issue is the name. I gather than what makes cstwMPC agents special is their awareness of the aggregate economy conditions.

What if the new REMARK was called MarketAwareness. The agent type could be a MarketAwareAgentType. Just spitballing here.

— You are receiving this because you commented. Reply to this email directly, view it on GitHub https://github.com/econ-ark/HARK/issues/669#issuecomment-624038084, or unsubscribe https://github.com/notifications/unsubscribe-auth/ADKRAFKU4T2XYYGX7WLVXI3RQAEDNANCNFSM4MVXZZWQ .

sbenthall commented 4 years ago

Thank you @mnwhite that is very clarifying.

In that case, I'd argue that the HARK/cstwMPC code should be moved to HARK/examples/cstwMPC and rewritten to use AggShockConsumerType more explicitly.

The extensions could be monkey patched into the instances created in the examples.

Alternatively, this rewrite could be made into a REMARK. The one tricky thing about renaming the REMARK, now that I think about it, is that the REMARK is about reproducing a paper---in this case, the cstwMPC paper.

If we are trying to help people build similar models, we should be directing users to the HARK classes, not the cstwMPC code.
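The monkey-patching idea can be sketched in a few lines; ToyEconomy and calc_lorenz_share are hypothetical stand-ins, not HARK classes:

```python
import types

class ToyEconomy:
    # Stand-in for a Market/economy instance; not a real HARK class.
    def __init__(self, wealth):
        self.wealth = wealth

def calc_lorenz_share(self, pctile):
    # Paper-specific statistic, attached to one instance instead of a subclass.
    w = sorted(self.wealth)
    cutoff = int(len(w) * pctile)
    return sum(w[:cutoff]) / sum(w)

economy = ToyEconomy(wealth=[1, 2, 3, 4, 10])
economy.calc_lorenz_share = types.MethodType(calc_lorenz_share, economy)
print(economy.calc_lorenz_share(0.8))  # 0.5: bottom 80% hold half the wealth
```

Patching the instance keeps the paper-specific extension out of the class hierarchy, at the cost of making the method harder to discover.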

sbenthall commented 4 years ago

Oh, I see how this works now:

https://github.com/econ-ark/HARK/blob/0.10.6/HARK/cstwMPC/cstwMPC.py#L27-L34

So really we are talking about the fate of a small bit of code wrapping the library functionality. Some of that extra code--the Lorenz share stuff--can be rewritten to use the library code.

What is this code for? It is not up to date with HARK master and is breaking for downstream users. Should they remove this code block and just use the inherited method? https://github.com/econ-ark/HARK/blob/0.10.6/HARK/cstwMPC/cstwMPC.py#L51-L74

mnwhite commented 4 years ago

cstwMPC uses AggShockConsumerType very explicitly, right at the top of its main file. It makes a custom subclass from IndShockConsumerType or AggShockConsumerType because there are different versions of the model; the same small modifications need to be made either way. But looking at it now, it looks like there are some outdated names; kGrid doesn't exist as an attribute anymore.


mnwhite commented 4 years ago

Yeah, just delete that entire code block. It was there because there's a weird quirk in how the model was written on paper, which doesn't exactly fit with how the model works in HARK. It's an extremely minor difference in how taxes to fund unemployment benefits are calculated.


sbenthall commented 4 years ago

That is interesting. This is a nice use case for exploring how we might make HARK more extensible to support minor model differences with less custom code.

mnwhite commented 4 years ago

Yeah. In this case, we would want more customizability in the income process. What you see here is a patchwork of several legacy systems. It is bad.


sbenthall commented 4 years ago

#673

frankovici commented 4 years ago

Here's the downstream user that Seb mentioned :) I am completely fine with deleting https://github.com/econ-ark/HARK/blob/0.10.6/HARK/cstwMPC/cstwMPC.py#L51-L74 for my purposes and using the inherited method for updating the income process!

sbenthall commented 4 years ago

@frankovici thank you! As you can see from this thread, the code is currently in flux. But the more you can build off of the HARK library code directly, rather than the cstwMPC code, the easier it will be for us to support your work.

mnwhite commented 4 years ago

I disagree with this for two reasons:

1) It's project-specific to cstwMPC

2) Even if it weren't, the method used in cstwMPC was extremely slow and inefficient, and there is a much better way of accomplishing the same thing. This isn't something we should want other people to use.

On Thu, May 14, 2020 at 11:47 AM Sebastian Benthall wrote:

@llorracc says that cstwMPC.findLorenzDistanceAtTargetKY and cstwMPC.getKYratioDifference should be in HARK.

llorracc commented 4 years ago

It is not at all specific to cstwMPC. Lots of people are now estimating the degree of heterogeneity necessary to achieve a given dispersion of assets. Dirk Krueger's handbook of macro paper is one example. The fiscal policy paper Edmund and I are working on with Norwegians is another. It is a generically useful thing to be able to do.

If there's a better way to do it, that's fine; but I do not want to let the best be the enemy of the adequate here. And I want to have the FriedmanBufferStockEconomy or whatever we want to call it be a core tool in HARK going forward: One in which you find the distribution of parameters such that you match a target (like the wealth distribution).

mnwhite commented 4 years ago

Can you explain what you mean by the FriedmanBufferStockEconomy? I think of that as a model in which there are permanent and transitory aggregate shocks, permanent and transitory idiosyncratic shocks, and the interest and wage rates determined as the marginal product of capital and labor (i.e. competitive factor markets). That's in HARK already as AggShockConsumerType and CobbDouglasEconomy, and their extensions AggShockMarkovConsumerType and CobbDouglasMarkovEconomy.


llorracc commented 4 years ago

The crucial extra part that is NOT in there is the part that makes it match Friedman (1963): an MPC of 33 percent (or thereabouts). That is, the part that Seb proposed to add: the ability to match a Lorenz distribution of wealth to a distribution of parameters. So far as either of us can tell, that is NOT in current HARK.

sbenthall commented 4 years ago

Putting aside the current implementation (which I agree with @mnwhite could be improved a lot), I want to be clear on just what distributeParams was supposed to be doing.

An example of its use is cell [8] here: https://github.com/econ-ark/DemARK/blob/0.10.5/notebooks/Uncertainty-and-the-Saving-Rate.ipynb

The current code for this method is here: https://github.com/econ-ark/HARK/blob/0.10.6/HARK/cstwMPC/cstwMPC.py#L240

If I understand correctly, the point of this code is to be able to say:

  • For a given model, for a given agent parameter, and a given mathematical distribution...
  • ... make it so that the agents, collectively, have the parameter distributed accordingly.

In the DemARK example, the parameter that is distributed over is the Discount Factor (not the wealth, though I understand why these are connected).

I agree with @mnwhite that the current implementation, as a method on a Market subclass, is quite odd.

It seems like it would be cleaner to have this be a way you could choose to parameterize a model.

So, rather than assigning a number to DiscFac in the parameters dictionary as is done here: https://github.com/econ-ark/HARK/blob/master/HARK/ConsumptionSaving/ConsIndShockModel.py#L1585

Rather, the user could assign a distribution to that parameter:

'DiscFac' : Uniform(mu=1.0065863855906343, sigma=0.0019501105739768)

Then each agent could sample from this distribution to get their discount factor. (Or you could try to use the 'exact match' mechanic here, I suppose).
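A minimal sketch of that proposal, with a stand-in Uniform class (HARK's actual distribution API may differ):

```python
import numpy as np

class Uniform:
    # Stand-in for a distribution object; parameterized as in the comment above.
    def __init__(self, mu, sigma):
        self.bot, self.top = mu - sigma, mu + sigma
    def draw(self, n, seed=0):
        return np.random.default_rng(seed).uniform(self.bot, self.top, size=n)

# A parameter entry may now be a number or a distribution.
params = {"CRRA": 2.0,
          "DiscFac": Uniform(mu=1.0065863855906343, sigma=0.0019501105739768)}

agent_count = 7
spec = params["DiscFac"]
disc_facs = (spec.draw(agent_count) if isinstance(spec, Uniform)
             else np.full(agent_count, spec))
print(disc_facs.shape)  # (7,)
```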

mnwhite commented 4 years ago

The distribution of the parameter is at the AgentType level, not the agent level. That's why it's a discretized distribution: there are 7 types of agents with these 7 discount factors, approximating a uniform distribution over a given range. For each discount factor in the model, a microeconomic problem needs to be solved by backward iteration. I actually don't think it's that weird to have this as a method in a Market subclass, but we can change it.

In my comments about being slow, I was referring to our numeric method for fitting the Lorenz curve while maintaining an exact match for the aggregate capital to income ratio. The function tries to minimize the distance between simulated and actual Lorenz curves by manipulating one parameter (nabla), the half-width of the uniform distribution. For any proposed nabla, it repeatedly solves and simulates the models for different values of the center of the parameter distribution (beta grave, in our notation) to find the one that matches the K/Y ratio exactly. It then computes the "Lorenz distance" at that K/Y-matching parameter value and returns it.

This is extremely inefficient. The model is solved and simulated (and results computed) ~10-15 times for each nabla that's considered. In the closed economy variants, the economy needs to be solved and simulated ~6-8 times for each of those 10-15 beta-graves that are considered, as it also needs to find the general equilibrium using our Krusell-Smith-like method. All for the sake of saying the model only has one parameter, because the other would-be free parameter governs one moment, which is matched exactly.

That's (essentially) mathematically equivalent to estimating a two parameter model by the simulated method of moments while putting infinite weight on fitting the aggregate K/Y ratio relative to fitting the Lorenz curve. Any minimizing method would refuse to make progress in the nabla direction unless it was absolutely sure that it wasn't hurting its fit of K/Y. It's an extreme version of the Rosenbrock banana function https://en.wikipedia.org/wiki/Rosenbrock_function: too much weight on one part of the objective function relative to others. The obvious solution with Rosenbrock is to turn down that weight (the $b$ parameter in the Wiki article), minimize, turn it up, and repeat.

We can do the same thing in cstwMPC. Rather than demanding that K/Y == 10.260000, just make it a target moment with the same weight as each of the Lorenz points, and estimate it by Nelder-Mead (or a better ND minimizer). Then turn up the weight on the K/Y moment to 5x that of the Lorenz points and estimate again, using the original estimate as a starting point. Then do it again for 25x, 125x, 625x. Or go by powers of 10 and do 1, 10, 100, 1000. I did this a while back.

Putting 1000x more weight on K/Y relative to the Lorenz points will leave you with a final simulated K/Y of 10.260134 (or whatever) rather than 10.260000... but the Mathematica code (on which the Python had to be based) only gets you to 10.258 or so. So we say we put infinite weight on the K/Y ratio and use an extremely inefficient, wasteful algorithm because of that... and then turn its tolerance so high (because it takes so long to run) that it doesn't even match 2 decimal places.
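The escalating-weight scheme can be illustrated on a toy objective; the closed-form "simulated moments" below are stand-ins for the expensive solve-and-simulate step, and all numbers are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

KY_TARGET = 10.26
LORENZ_TARGET = np.array([0.33, 0.95])

def simulate_moments(center, spread):
    # Stand-in for solving and simulating the model at (beta-grave, nabla).
    ky = 12.0 * center + 2.0 * spread
    lorenz = np.array([0.05 * center + 0.3 * spread,
                       0.20 * center + 0.6 * spread])
    return ky, lorenz

def objective(params, ky_weight):
    center, spread = params
    ky, lorenz = simulate_moments(center, spread)
    return (ky_weight * (ky - KY_TARGET) ** 2
            + np.sum((lorenz - LORENZ_TARGET) ** 2))

# Escalate the weight on the K/Y moment, warm-starting each round,
# instead of imposing an exact K/Y match inside an inner loop.
guess = np.array([0.9, 0.1])
for w in [1.0, 10.0, 100.0, 1000.0]:
    guess = minimize(objective, guess, args=(w,), method="Nelder-Mead").x

ky_final, _ = simulate_moments(*guess)
print(abs(ky_final - KY_TARGET))  # close to, but not exactly, zero
```

Each round reuses the previous estimate as a starting point, so the minimizer never faces the badly conditioned high-weight problem from a cold start.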


sbenthall commented 4 years ago

The distribution of the parameter is at the AgentType level, not the agent level.

Oh, I see. A Market has a list of AgentTypes. https://github.com/econ-ark/HARK/blob/master/HARK/core.py#L851

... In that case, why not just move the distributeParams method in as-is?

[I would change the 'exec' statement to something else, but that's minor]
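The exec replacement could be as simple as setattr; the parameter name and toy class here are hypothetical:

```python
class ToyType:
    # Stand-in for an AgentType instance; not a HARK class.
    pass

agents = [ToyType() for _ in range(3)]
values = [0.92, 0.96, 0.98]
param_name = "DiscFac"

# Instead of building a string and calling exec(...), assign dynamically:
for agent, value in zip(agents, values):
    setattr(agent, param_name, value)

print([agent.DiscFac for agent in agents])  # [0.92, 0.96, 0.98]
```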

sbenthall commented 4 years ago

Also, since we now have a way of specifying a discretized distribution object, it would be easy enough to have that passed in directly.

mnwhite commented 4 years ago

We could, but it should probably be generalized, or at least improved. As is, it can only handle one parameter... or if you do call it twice, the agent types will vary on those parameters collinearly. Maybe that's what the user meant, maybe not.


sbenthall commented 4 years ago

I hesitate to say this, but I think a more general treatment of this suggests a deeper consideration of the HARK architecture that goes well beyond the scope of bringing in the features @llorracc wants in the short term.

For example, there seems to be some ambiguity in the design, where it's possible to simulate many agents with different levels of wealth with a single AgentType instance, but not many agents with different discount factors.

If I'm reading this right, it's impossible for HARK to simulate heterogeneous agents unless they interact through a market. But I gather that @llorracc is becoming interested in ergodic distributions with heterogeneous agents in models without market interactions. And even if the only "interesting" cases do involve market mechanisms, being able to do such a simple (but still heterogeneous) simulation cleanly would be useful as a test case.

sbenthall commented 4 years ago

I've made #692 to make this particular aspect of this issue more concrete.

mnwhite commented 4 years ago

We can move it as is and worry about expanding later.

As for heterogeneity, the distinction here is between ex ante vs ex post heterogeneity. An instance of an AgentType subclass represents a collection of ex ante homogeneous agents-- they are all of the same "type". They share the exact same preference parameters (including their discount factor), face the same distribution of risks, and experience the same transitions between states (conditional on controls and shocks). If they started in the same state and were given identical shock sequences, two agents in the same AgentType instance would behave identically. They are ex ante homogeneous, but end up ex post heterogeneous because they actually get different idiosyncratic shock draws over time.

Ex ante heterogeneity in HARK is captured by having different AgentType instances in the same setting. Maybe they share the same class, maybe they don't, but something about the agents is different before anything "happens" in the model: they have different preferences, or a different concept of the world they live in, or are an entirely different kind of agent (a worker vs a firm vs a bank, say). Ex post heterogeneity just requires setting the AgentCount attribute in any AgentType instance to be greater than 1. That's it.

TLDR: You can't simulate agents with different discount factors in the same AgentType instance because that's a contradiction of the terminology.

You can simulate ex ante heterogeneous agents within a single AgentType instance; that's the basic use case, in fact. The concept of an ergodic distribution for wealth definitely belongs at the AgentType level; similarly, calculating the ergodic distribution for discrete variables like age (t_age and t_cycle in HARK notation) should live in AgentType subclasses. The latter is a much, much easier problem mathematically. Like, not a problem.
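The distinction can be sketched with a toy class; the names echo HARK's conventions (AgentCount, DiscFac) but this is illustrative, not HARK code:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ToyAgentType:
    DiscFac: float   # shared by all agents of the type: ex ante homogeneity
    AgentCount: int  # many agents per type: ex post heterogeneity via shocks

    def simulate(self, periods, seed=0):
        rng = np.random.default_rng(seed)
        # Same parameters for everyone, different idiosyncratic draws per agent.
        shocks = rng.normal(size=(periods, self.AgentCount))
        return shocks.cumsum(axis=0)

# Ex ante heterogeneity: several instances with different discount factors.
types = [ToyAgentType(DiscFac=b, AgentCount=1000) for b in (0.92, 0.96, 0.98)]
histories = [t.simulate(periods=5, seed=i) for i, t in enumerate(types)]
print(histories[0].shape)  # (5, 1000)
```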


sbenthall commented 4 years ago

As an aside... I know it's probably baked in at this point, but having a set of classes all ending in "Type" is a bit redundant from a programming perspective, since a class already defines a type. I'm sure you know that; I'm just flagging the artifact.

And I see what you are saying about ex post and ex ante heterogeneity, which is extremely clarifying:

  • ex post heterogeneity is modeled by having AgentCount > 1 on a single AgentType instance
  • ex ante heterogeneity is modeled by having multiple AgentType instances

If I am not mistaken, there is currently no support for simulation of the behavior of ex ante heterogeneous models without market interaction.

mnwhite commented 4 years ago

No, this use of Type is intentional. The word Type is not referring to what this kind of agent's model is, but instead what their ex ante homogeneous "type" is. There can be many instances of IndShockConsumerType: the type that has DiscFac=0.96, the type that has DiscFac=0.92, etc. I intentionally did not call the superclass Agent, nor the model classes IndShockConsumer (e.g.) because instances of these classes represent a type of agents, and there might be many agents of that type.

There is support in HARK for simulating the behavior of ex ante heterogeneous agents without market interaction:

MyTypes = [ThisType, ThatType, OtherType]
multiThreadCommands(MyTypes, ['initializeSim()', 'simulate()'])


sbenthall commented 4 years ago

Just for the record... I defer to @mnwhite on the K/Y and Lorenz curve fitting issues, which at this point are over my head.

@llorracc I recommend decoupling these topics. I'll work to get distributeParams in, which will make it possible to release an updated "Uncertainty" DemARK with the discount factor distribution precomputed.

Later, when @mnwhite has a more efficient curve-fitting algorithm in HARK, the DemARK can be updated with the option to use it.

In the meantime, you can use the 0.10.5 version of the DemARK for your more flexible uses.

Does that sound OK?

sbenthall commented 4 years ago

because instances of these classes represent a type of agents, and there might be many agents of that type.

OK. And am I right in thinking that the reason for this is the presumed performance benefit of simulating the agents all at once?

There is support in HARK for simulating the behavior of ex ante heterogeneous agents without market interaction: MyTypes = [ThisType, ThatType, OtherType]; multiThreadCommands(MyTypes, ['initializeSim()', 'simulate()'])

Ah, interesting. I'll have to look more carefully at this functionality.

mnwhite commented 4 years ago

There's a definite performance benefit in simulation-- you get to use array operations rather than working on each agent independently. But structurally it's there because all agents of the same type can be solved simultaneously. If I have 10,000 agents that all have the exact same problem and exact same parameters, I don't solve a backward induction loop 10,000 times. I solve the model once for all of them.
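A sketch of the "solve once, simulate with array operations" point; the policy function and law of motion below are made up for illustration:

```python
import numpy as np

# Pretend backward induction produced one consumption rule c(m) for the type,
# stored on a grid; the functional form here is purely illustrative.
m_grid = np.linspace(0.0, 20.0, 200)
c_policy = 0.1 + 0.9 * (1.0 - np.exp(-0.3 * m_grid))

def consume(m):
    # One vectorized interpolation serves every agent of the type at once.
    return np.interp(m, m_grid, c_policy)

rng = np.random.default_rng(0)
m = rng.uniform(1.0, 10.0, size=10_000)  # 10,000 agents, one solved policy
for _ in range(50):                      # simulate 50 periods with array ops
    c = consume(m)
    m = 1.03 * (m - c) + rng.lognormal(0.0, 0.1, size=m.size)
print(m.shape)  # (10000,)
```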


sbenthall commented 4 years ago

Ok. I'm trying to get a working PR in for distributeParams and am hung up on a new issue.

The original code for that method depends on a Population parameter on the market instance. This appears to be sui generis to cstwMPC, only defined here: https://github.com/econ-ark/HARK/blob/0.10.6/HARK/cstwMPC/SetupParamsCSTW.py#L262

I'm unclear about its role and whether it should be preserved as part of the new patch.

mnwhite commented 4 years ago

This represents the total number of agents, across all types. It's probably one of the features that makes distributeParams particular to cstwMPC rather than general. I'd say remove it for the general version.


sbenthall commented 4 years ago

Ok, got it. Let's continue this discussion on the PR, which is now ready for review.