How do I read the equations?
dan
On Wed, 9 Nov 2016, Matt Jensen wrote:
Proposal for proxy welfare measures
This note outlines a methodology for computing proxy welfare measures for use in assessing the normative implications of changes in tax policy. It primarily relies on estimates of changes in tax liabilities from Tax-Calculator. In addition, users may assume an ad hoc revenue offset or use an estimate of the revenue offset from another economic model to estimate a balanced-budget welfare impact.[^1] See Table Shells.xlsx for sample output files.
User input: (i) welfare measures requested (step one only; steps one and two; or steps one, two, and three), (ii) income classifiers and ranges, (iii) choice of $\theta$ for use in estimating welfare impacts (default is 0, 2, and 4), (iv) consumption level at the assumed upper bound for marginal utility (default \$1,000), (v) assumed marginal utility of income for tax units with zero or negative income in the baseline (default zero), (vi) choice of revenue offset (default 15 percent), and (vii) income elasticity (default 0.4).
Step 1: Static change in after-tax income without financing, welfare impacts per assumed specification for utility of consumption
- Using the incidence assumptions otherwise reflected in the model, compute after-tax income under the baseline and under the policy alternative for each tax unit.
- For each income class compute the aggregate (for the class) percentage change in after-tax income.
- For the specified set of values of $\theta$, compute the change in welfare for each tax unit with positive after-tax income in the baseline as

$$\Delta u = \left( \frac{\bar{c}}{10^{9}} \right)\frac{\left( \frac{\max\left( c',\, c_{\min} \right)}{\bar{c}} \right)^{1 - \theta} - \left( \frac{\max\left( c,\, c_{\min} \right)}{\bar{c}} \right)^{1 - \theta}}{1 - \theta} + \left( \frac{\bar{c}}{10^{9}} \right)\left( \frac{c_{\min}}{\bar{c}} \right)^{- \theta}\left( \min\left( 0,\,\frac{c' - c_{\min}}{\bar{c}} \right) - \min\left( 0,\,\frac{c - c_{\min}}{\bar{c}} \right) \right),$$

where $c$ is the baseline after-tax income, $c'$ is the alternative after-tax income, $c_{\min}$ is the consumption level at the assumed upper bound for marginal utility, and $\bar{c}$ is the average after-tax income in the baseline.[^2],[^3] Compute the change in utility for each tax unit with zero or negative after-tax income in the baseline as

$$\Delta u = u' \times \left( c' - c \right),$$

where $u'$ is the assumed marginal utility of income for tax units with zero or negative income in the baseline.[^4]
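A minimal sketch of this step-1 computation for the non-log case ($\theta \neq 1$) follows; `delta_u` is a hypothetical helper written for illustration, not Tax-Calculator code (zero/negative baseline-income units use the linear rule just above instead):

```python
def delta_u(c, c_prime, theta, c_min, c_bar):
    """Step-1 welfare change for one tax unit with positive baseline
    after-tax income c and alternative after-tax income c_prime,
    where c_bar is average baseline after-tax income (theta != 1)."""
    scale = c_bar / 1.0e9
    u_new = (max(c_prime, c_min) / c_bar) ** (1.0 - theta) / (1.0 - theta)
    u_old = (max(c, c_min) / c_bar) ** (1.0 - theta) / (1.0 - theta)
    # linear segment below c_min, slope equal to marginal utility at c_min
    kink = ((c_min / c_bar) ** (-theta)) * (
        min(0.0, (c_prime - c_min) / c_bar) -
        min(0.0, (c - c_min) / c_bar))
    return scale * (u_new - u_old) + scale * kink
```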
Step 2: Add static change in after-tax income with financing (including behavioral impact in the determination of the amount of financing) and corresponding welfare impacts
- Compute aggregate static change in tax liability.
- Compute the required financing as the fraction of the tax change not offset by the revenue feedback.[^5]
- In principle, the revenue feedback should include all levels of government, including feedback to state and local government budgets.
- Allocate the required financing to tax units on a lump-sum basis (per person or per adult), as sketched in the code after this list.[^6]
- For each income class compute the aggregate (for the class) percentage change in after-tax income including financing.
- For the specified set of values of $\theta$, compute the change in utility for each tax unit as in step one, using the change in after-tax income including financing.
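The following is a minimal sketch of the lump-sum financing allocation described above, assuming the aggregate static tax change and per-unit person counts are already in hand (the function name and signature are illustrative, not part of Tax-Calculator):

```python
def lump_sum_financing(agg_tax_change, offset, persons, weights):
    """Allocate the required financing per person across tax units.
    agg_tax_change: aggregate static change in tax liability in dollars
    (negative for a tax cut); offset: fraction of the tax change offset
    by revenue feedback (the memo's default is 0.15); persons/weights:
    per-unit household sizes and sampling weights."""
    required = -agg_tax_change * (1.0 - offset)  # revenue to be raised
    weighted_persons = sum(w * n for w, n in zip(weights, persons))
    per_person = required / weighted_persons
    return [per_person * n for n in persons]  # financing per tax unit
```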
Step 3: Add direct welfare impact of reoptimization in response to changes in tax rates (indirect welfare impact of the revenue offset is already included at step two)
- Compute an effective income change for the value of reoptimization for each tax unit as
$$\frac{1}{2}\,\varepsilon\, y\,\frac{\left( \Delta\left( 1 - t \right) \right)^{2}}{1 - t_{0}},$$

where $y$ is pre-tax income, $\varepsilon$ is the assumed income elasticity, $\Delta\left( 1 - t \right)$ is the change in the keep rate, and $t_{0}$ is the baseline marginal tax rate (see the sketch after this list).[^7]
- For each income class compute the aggregate (for the class) percentage change in after-tax income including both financing and the effective income change from reoptimization.
- For the specified set of values of $\theta$, compute the change in utility for each tax unit as in step one, using the change in after-tax income including financing and the effective income change from reoptimization.
- Until this point, the distinction between curvature of utility at the family level and any potential additional curvature in the social welfare function could largely be ignored. At this point, the methodology is implicitly assuming linearity at the household level to derive the effective income change.[^8]
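A minimal sketch of the effective-income-change formula above, with the memo's default elasticity of 0.4 (the helper name is illustrative):

```python
def reoptimization_gain(pretax_income, mtr0, mtr1, elasticity=0.4):
    """Step-3 effective income change from reoptimization:
    0.5 * eps * y * (change in keep rate)^2 / (1 - baseline MTR)."""
    delta_keep = (1.0 - mtr1) - (1.0 - mtr0)
    return (0.5 * elasticity * pretax_income *
            delta_keep ** 2 / (1.0 - mtr0))
```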
Relates to #403.
[^1]: Such an offset could reflect the microeconomic behavioral response in a conventional JCT score, macroeconomic feedback, or a combination of the two.
[^2]: The marginal utility of income increases without bound as income approaches zero in this specification of the utility of income. Imposing a bound on marginal utility prevents a small number of extremely low income tax units from driving the entirety of the results and could be appropriate if such tax units can finance consumption using transfers or loans from friends, family, or government programs not reflected in the model.
[^3]: In general, comparing changes in welfare across welfare measures is uninformative, as the magnitudes of the differences do not necessarily have any meaning. However, within the class of CRRA specifications, normalizing by the baseline average after-tax income allows for rough comparisons consistent with the intuition that higher values of $\theta$ are consistent with a stronger redistributive motive. In addition, by further scaling by $10^{-9}$, the $\theta = 0$ case is equal to the aggregate change in after-tax income in billions of dollars. (For $\theta = 0$ the expression collapses to $(c' - c)/10^{9}$, since $\max(x, c_{\min}) + \min(0, x - c_{\min}) = x$.)
[^4]: Assuming zero marginal utility of income for tax units with zero or negative incomes amounts to assuming they are high income families (or simply excluding them from the analysis).
[^5]: In general, offsets can be greater than 100% or less than 0%. The offset must be specified by the user or derived from another economic model.
[^6]: A typical (static) fully financed distribution table might include financing proportional to income or proportional to tax. Such options could be provided here, but the motivation for such options is often policies that would effectively increase marginal tax rates, so allowing the financing allocation and the offset to be specified independently is potentially problematic. Additional allocation methods could be added as an option for user input. The superior approach would be to incorporate the implicit change in marginal rates reflected in a non-lump-sum allocation of financing into the tax policy proposal contemplated, thus allowing for consistent estimation of revenue effects and distributional impacts.
[^7]: This formula is an approximation based on a model in which there is a single consumption good and solely labor income, so the elasticity is an elasticity of labor income with respect to the tax rate. Using a taxable income elasticity involves some conceptual slippage relative to the derivation, but it would be broadly acceptable as an approximation for the additional margins of response. (The recommendation to use pre-tax income already reflects some of the same kinds of slippage.) Note that there is a relationship between the revenue offset of step two and the elasticity of step three, but allowing independent parameterizations is substantially simpler.
[^8]: In addition, note that there is an important interaction between this approximation and the upper bound on marginal utility reflected in $c_{\min}$. If that bound is relaxed, the quality of this approximation becomes worse for tax changes that increase taxes for low-income families. However, in this case even a more complex approximation for the welfare change due to reoptimization, such as $\frac{1}{2}\left( \left( \frac{c'}{\bar{c}} \right)^{- \theta}\left( 1 - t_{1} \right) - \left( \frac{c}{\bar{c}} \right)^{- \theta}\left( 1 - t_{0} \right) \right)\varepsilon y\,\frac{\Delta\left( 1 - t \right)}{1 - t_{0}}$, behaves poorly.
@feenberg, here is a link to a document that contains the equations: Proposal for proxy welfare measures.docx
@feenberg said about the Microsoft Word (.docx) document mentioned by @MattHJensen in issue #1050:
How do I read the equations?
Can somebody with Microsoft Word convert the document to a PDF file that contains the equations, otherwise it would seem that to read the document you must have Microsoft Word. I don't have that program on my Mac, so I can't read the document either.
Proposal.for.proxy.welfare.measures.pdf PDF version
Actually, I have MS Word, but Gmail doesn't bring it into Word; it displays the RTF(?) instead.
dan
@codykallen, Thanks for making a PDF. The equations are included in this PDF version of the document.
Just a few comments.
Is a poll tax the right alternative for financing? That seems very odd to me. In the end we don't know how any tax reform will be financed. It could be proportional, progressive, or just defaulting on the national debt. Most tax-cutters believe it will be financed with a reduction in spending (without much evidence, of course). I don't suppose it is up to us to decide these things, but wouldn't something proportional to income be closer to plausible?
Can the author present sample values of theta and cmin that look plausible for a very wide range of incomes? $1,000, $100 thousand, and $100 million? Can we build into the system a way to inform users of the consequences of their choice of parameters, by income? Perhaps a table or graph showing utility per marginal dollar by income? I understand it is hard to find parameters that look reasonable at high and low levels of income, and I haven't seen how much cbar helps.
I recall a suggestion once made that the marginal utility at each income level be the after-tax share at that income level in the graduated income tax - that is, taken from Tables X, Y, and Z (for some year, not necessarily the current law). That lets some say they prefer the 1962 or 1980 or 2001 law, and everyone will know what it means. With theta, it is rather mysterious.
There was a plan for TB to allow users to provide an elasticity of taxable income w.r.t. the after-tax price. It looks like that hasn't happened yet, but couldn't that be the source of the revenue offset, at least at the user's option?
In the user documentation, I hope that multi-level equations will be multi-level and not compressed down to a single level. They are harder to understand when squashed. Doesn't Word do this yet? WordPerfect did it nicely in 1982. Also, the CRRA utility should be introduced with a name and a reference, and initially without the additional features.
dan
In a fully static, fully financed distribution table the relevant question is generally what the best approximation to the eventual financing will be (notwithstanding the difficulty of figuring out what that means/is). Conveniently, the fact that the analysis is fully static allows the user to freely vary the financing method without changing the results. However, in an analysis that is not fully static, the relationship between the financing and the behavioral responses (microeconomic or macroeconomic) is part of what is being modeled, and thus financing can't be independently specified. Then the question becomes what is the best way to generate appropriately consistent/coherent results.
If a user wants to model a tax package offset by a proportional-to-income spending change or tax change, the best answer is likely to come from imposing an approximation to a constant-percentage-point increase in the marginal rate across the board as part of the modeling. If the revenue offset is being generated by a model, then the model will be relying on the right marginal incentives. And if the revenue offset is being input on an ad hoc basis, that offset can be based on a view about feedback from financed tax changes, which tends to be what comes out of the economic literature. (The most compelling counterexample to this argument, in my mind, would be the case of a tax cut financed by a cut in defense spending, under the view that the benefits of defense spending accrue roughly according to income but do not create clear implicit MTRs.)
That said, greater flexibility can always be provided by allowing the user to pick a greater number of parameters and imposing fewer restrictions on the relationships between these parameters.
@gleiserson, Thank you for your thoughtful discussion on how to make normative welfare judgments in assessing tax reforms (which is contained in this PDF file).
I would like to confirm my understanding on a number of points you make and then pose some questions on other issues. These confirmation points and questions are all coming from my perspective that it would be desirable to add to Tax-Calculator the ability to conduct normative welfare assessments of tax reforms in a way that gives users the flexibility to specify the values of parameters that characterize the normative assumptions.
First the confirmations of what I think I understand.
(a) You propose to use an isoelastic (or power) utility function to make welfare judgments, right? (For those who want an introduction to isoelastic utility functions, see this article.)
(b) You propose to let users specify one or more values of the single coefficient in the utility function, right?
(c) You propose, in the Tax-Calculator context, to use after-tax expanded income as the (consumption) argument to the utility function, right? (Expanded income is the broadest measure of income available in Tax-Calculator.)
(d) In cases where after-tax expanded income is very low, you propose that the normative welfare analysis allow users to specify a minimum after-tax expanded income that prevents marginal utility from rising to very high levels, right?
If I've understood those points correctly, then I have questions about how you take a reform and reach a normative welfare judgment using a set of assumed welfare parameters (the utility coefficient and the minimum after-tax expanded income). I didn't see any equations embedded in your spreadsheet and the equations in the memo didn't seem to lead to a single number for the baseline and for the reform that could be compared to arrive at a normative assessment of the reform.
Instead of asking many questions about your proposal, I'm going to explain my own proposal about how to conduct a normative assessment of reform. This proposal is consistent with my understanding of points (a) through (d), and I think (but am not sure) that it is consistent with the rest of your proposal. I'm hoping that making a specific proposal will focus our discussion on how best to provide Tax-Calculator with this new capability.
Before showing the details of my proposal it is important to explain its broader context. Mathematically, my proposal is a simple application of expected utility theory using an isoelastic utility function. (For some background on expected utility see this article.) But why is expected utility theory a plausible framework for making this kind of normative welfare judgment about a tax reform? There are two very different philosophical perspectives that both suggest the use of expected utility theory.
The first perspective is that the user wants to be a social welfare planner. The probability for each filing unit is just its relative sampling weight, so that the filing units are given equal weight. The coefficient of the isoelastic utility function represents the planner's assumption about the marginal social utility of extra after-tax income across the income distribution.
The second perspective is that there is a "veil of ignorance" that prevents a soul from knowing what place in the income distribution (s)he will occupy. In making judgments about which kind of society (s)he would prefer, that soul could compute the expected utility of each kind of society (in our case, each tax system). The probabilities would be relative sampling weights to reflect the lack of knowledge about where the soul will end up in society. And the coefficient of the isoelastic utility function represents the soul's aversion to the risk of not knowing which station in life (s)he will occupy.
These are just two motivating stories about why you might want to do the calculations I propose below. You don't have to agree with the philosophical theories behind these arguments.
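In symbols, for each tax regime and each assumed coefficient, the script below computes

$$\mathrm{EU} = \sum_i p_i\, u(c_i), \qquad p_i = \frac{w_i}{\sum_j w_j}, \qquad \mathrm{CE} = u^{-1}(\mathrm{EU}),$$

where $w_i$ is a filing unit's sampling weight (s006 in the code), $c_i$ is its after-tax expanded income, and $\mathrm{CE}$ is the certainty-equivalent consumption level.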
Here is a Python script that represents my proposal. In order to run this script now, you must be on the lump-sum-tax branch that forms Tax-Calculator pull request #1066. If you're not conversant with the concept of certainty equivalence (as a way of expressing expected utility values in more understandable units --- in terms of after-tax expanded income in our case), then read this article.
"""
The reform-exputil.py script explores ideas in GL memo cited in T-C issue #1050
"""
import sys
import math
import numpy as np
import pandas as pd
from taxcalc import Policy, Records, Calculator, weighted_sum
def isoelastic_utility_function(consumption, crra):
"""
Return isoelastic utility of specified non-negative consumption value
given specified non-negative value of the coefficient of relative
risk aversion crra.
Note: consumption and crra are floats
Note: returned utility value is a float
"""
if crra == 1.0:
return math.log(consumption)
else:
return (math.pow(consumption, (1.0 - crra)) - 1.0) / (1.0 - crra)
def expected_utility(cons, prob, crra):
"""
Return expected utility of consumption cons that has probability prob given
the specified non-negative value of constant-relative risk-aversion crra.
Note: prob and cons are arrays; crra is a float
Note: returned expected utility value is a float
"""
utility = cons.apply(isoelastic_utility_function, args=(crra,))
return np.inner(utility, prob)
def certainty_equivalent(exputil, crra):
"""
Return certainty-equivalent consumption for given expected utility exputil
and given constant-relative risk-aversion parameter crra of an
isoelastic utility function.
Note: exputil and crra are floats
Note: returned certainty equivalent value is a float
"""
if crra == 1.0:
return math.exp(exputil)
else:
return math.pow(((exputil * (1.0 - crra)) + 1.0), (1.0 / (1.0 - crra)))
def print_revenue(year, df1, df2):
"""
Print aggregate revenue under baseline and reform.
Note: year is int; df1 and df2 are Pandas DataFrame objects for
baseline and reform, respectively.
Note: nothing is returned.
"""
print 'Aggregate {} Tax Revenues ($billion)'.format(year)
print ' base reform diff'
resstr = '{} {:8.1f} {:8.1f} {:8.2f}'
baseln = weighted_sum(df1, '_payrolltax') * 1.0e-9
reform = weighted_sum(df2, '_payrolltax') * 1.0e-9
print resstr.format('paytax', baseln, reform, reform - baseln)
baseln = weighted_sum(df1, '_iitax') * 1.0e-9
reform = weighted_sum(df2, '_iitax') * 1.0e-9
print resstr.format('inctax', baseln, reform, reform - baseln)
baseln = weighted_sum(df1, '_combined') * 1.0e-9
reform = weighted_sum(df2, '_combined') * 1.0e-9
print resstr.format('bothtx', baseln, reform, reform - baseln)
def main():
"""
Contains the high-level logic of the script.
"""
# specify baseline and reform Calculator objects and calculate results
calc1 = Calculator(policy=Policy(), records=Records(), verbose=False)
calc2 = Calculator(policy=Policy(), records=Records(), verbose=False)
reform = {
2013: {"_SS_Earnings_c": [9e99],
"_FICA_ss_trt": [0.124000],
"_LST": [-416.31]}
}
# ... trt: current-law 0.124000 ; full-financing 0.104886
# ... LST: current-law 0.0 ; full-financing -416.31
calc2.policy.implement_reform(reform)
calc1.calc_all()
calc2.calc_all()
# tabulate aggregate 2013 tax revenues for baseline and reform
record_columns = ['s006', '_payrolltax', '_iitax',
'_combined', '_expanded_income']
out = [getattr(calc1.records, col) for col in record_columns]
df1 = pd.DataFrame(data=np.column_stack(out), columns=record_columns)
out = [getattr(calc2.records, col) for col in record_columns]
df2 = pd.DataFrame(data=np.column_stack(out), columns=record_columns)
print 'Reform trt={}'.format(reform[2013]['_FICA_ss_trt'])
print 'Reform LST={}'.format(reform[2013]['_LST'])
print_revenue(2013, df1, df2)
# calculate sample-weighted probability of each filing unit
prob = np.divide(df1['s006'], df1['s006'].sum())
# calculate consumption of each filing unit under baseline and reform
# assuming consumption equals after-tax expanded income
consumption_minimum = 1000
cons1 = np.maximum(df1['_expanded_income'] - df1['_combined'],
consumption_minimum)
cons2 = np.maximum(df2['_expanded_income'] - df2['_combined'],
consumption_minimum)
# calculate expected utility of baseline and reform consumption
crra_values = [0, 1, 2, 3, 4]
for crra in crra_values:
eu1 = expected_utility(cons1, prob, crra)
eu2 = expected_utility(cons2, prob, crra)
# print 'crra={} ==> EU1={} and EU2={}'.format(crra, eu1, eu2)
ce1 = certainty_equivalent(eu1, crra)
ce2 = certainty_equivalent(eu2, crra)
pctdiff = 100.0 * (ce2 / ce1 - 1.0)
print ('crra={} ==> CE1={:8.2f} and CE2={:8.2f} '
'and pctdiff={:5.2f}').format(crra, ce1, ce2, pctdiff)
# normal exit of main function
return 0
if __name__ == '__main__':
sys.exit(main())
I'll provide some results generated by this script in another comment to Tax-Calculator issue #1050.
@MattHJensen @feenberg @viard @codykallen @Amy-Xu @andersonfrailey @GoFroggyRun
Thanks @martinholmer for the thoughtful followup and proposed implementation. This comment responds to the questions. I then will follow up with another comment on the code/proposal.
(a) You propose to use an isoelastic (or power) utility function to make welfare judgments, right? (For those who want an introduction to isoelastic utility functions, see this article.)
See answer to (d) below.
(b) You propose to let users specify one or more values of the single coefficient in the utility function, right?
Yes.
(c) You propose, in the Tax-Calculator context, to use after-tax expanded income as the (consumption) argument to the utility function, right? (Expanded income is the broadest measure of income available in Tax-Calculator.)
Yes. (Or, more accurately, I propose to use the broadest measure of after-tax income available in Tax-Calculator, but I did not know what it was when I wrote the proposal.)
(d) In cases where after-tax expanded income is very low, you propose that the normative welfare analysis allow users to specify a minimum after-tax expanded income that prevents marginal utility from rising to very high levels, right?
I propose to use a slightly modified isoelastic/power/CRRA utility function. The proposed function is the CRRA utility function above c_min and a linear function of consumption from 0 to c_min with slope equal to the marginal utility at c_min. That is, u(c) = ((max(c, c_min)^(1-theta)) - 1)/(1-theta) + (c_min^(-theta))*min(0, c - c_min).
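Written out piecewise, the proposed function is

$$u(c) = \begin{cases} \dfrac{c^{1-\theta} - 1}{1-\theta}, & c \ge c_{\min} \\[1ex] \dfrac{c_{\min}^{1-\theta} - 1}{1-\theta} + c_{\min}^{-\theta}\,(c - c_{\min}), & c < c_{\min}. \end{cases}$$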
In addition, to help make the results more intuitive (though it is not essential to the economics in any way), I propose to scale the measurement of consumption used as an argument by dividing by the mean of after-tax expanded income in the baseline, and to scale the function as a whole by multiplying by (mean baseline after-tax expanded income / 10^9). While this does not change the results, it makes comparisons across \theta somewhat more intuitive and allows the \theta = 0 case to be interpreted as billions of dollars.
The $\Delta u$ equation on the first page of the proposal is a differenced version of this proposed utility specification for baseline after-tax income $c$ and alternative after-tax income $c'$.
@MattHJensen @feenberg @viard @codykallen @Amy-Xu @andersonfrailey @GoFroggyRun
In a prior comment in the conversation of issue #1050, I promised to provide an example of how to use the expected-utility framework to make normative welfare judgments about tax reforms. That framework is implemented in the reform-exputil.py script that I posted in that earlier comment.
I pick for this example a reform that is easy to describe and that suggests one obvious "financing" method different from lump-sum financing. So, we'll estimate the revenue effects of the reform in isolation and then compute the expected utility (actually the certainty equivalent of the expected utility of after-tax expanded income) of two fully-financed versions of that reform.
The example reform is elimination beginning in 2013 of the OASDI payroll tax maximum taxable earnings. Such a reform makes the OASDI payroll tax base exactly the same as the Medicare (HI) payroll tax base. We "finance" this reform with a negative lump-sum tax and with a reduction in the OASDI payroll tax rate.
The following results are generated on the lump-sum-tax development branch, which is in pending pull request #1066.
Here are the results of the reform in isolation (that is, without any financing).
```
$ python reform-exputil.py
Reform trt=[0.124]
Reform LST=[0.0]
Aggregate 2013 Tax Revenues ($billion)
           base   reform     diff
paytax    919.6   1050.7   131.06
inctax   1192.6   1190.9    -1.63
bothtx   2112.2   2241.6   129.43
crra=0 ==> CE1=50707.76 and CE2=50288.73 and pctdiff=-0.83
crra=1 ==> CE1=25702.92 and CE2=25668.07 and pctdiff=-0.14
crra=2 ==> CE1= 9478.47 and CE2= 9477.37 and pctdiff=-0.01
crra=3 ==> CE1= 4106.90 and CE2= 4106.69 and pctdiff=-0.01
crra=4 ==> CE1= 2684.73 and CE2= 2684.61 and pctdiff=-0.00
```
Next are results when we finance the reform with lower payroll tax rates.
```
$ python reform-exputil.py
Reform trt=[0.104886]
Reform LST=[0.0]
Aggregate 2013 Tax Revenues ($billion)
           base   reform     diff
paytax    919.6    920.7     1.04
inctax   1192.6   1191.5    -1.03
bothtx   2112.2   2112.2     0.00
crra=0 ==> CE1=50707.76 and CE2=50701.82 and pctdiff=-0.01
crra=1 ==> CE1=25702.92 and CE2=25866.54 and pctdiff= 0.64
crra=2 ==> CE1= 9478.47 and CE2= 9524.31 and pctdiff= 0.48
crra=3 ==> CE1= 4106.90 and CE2= 4116.44 and pctdiff= 0.23
crra=4 ==> CE1= 2684.73 and CE2= 2688.49 and pctdiff= 0.14
```
And finally, the results when we finance the reform with a negative lump-sum tax.
```
$ python reform-exputil.py
Reform trt=[0.124]
Reform LST=[-416.31]
Aggregate 2013 Tax Revenues ($billion)
           base   reform     diff
paytax    919.6   1050.7   131.06
inctax   1192.6   1190.9    -1.63
bothtx   2112.2   2112.2    -0.00
crra=0 ==> CE1=50707.76 and CE2=51060.97 and pctdiff= 0.70
crra=1 ==> CE1=25702.92 and CE2=26528.13 and pctdiff= 3.21
crra=2 ==> CE1= 9478.47 and CE2=10044.15 and pctdiff= 5.97
crra=3 ==> CE1= 4106.90 and CE2= 4331.07 and pctdiff= 5.46
crra=4 ==> CE1= 2684.73 and CE2= 2803.05 and pctdiff= 4.41
```
@MattHJensen @feenberg @viard @codykallen @Amy-Xu @andersonfrailey @gleiserson
@gleiserson said:
I propose to use a slightly modified isoelastic/power/CRRA utility function. The proposed function is the CRRA utility function above c_min and a linear function of consumption from 0 to c_min with slope equal to the marginal utility at c_min. That is, u(c) = ((max(c,c_min)^(1-theta))-1)/(1-theta) + ((c_min)^(-theta))*min(0,c-c_min).
OK. How do you propose to use this modified utility function? Do you propose to use it to compute the expected utility of consumption (which is assumed to be after-tax expanded income)?
As @martinholmer points out, my proposal did not specify an aggregation rule to convert changes in utility at the tax unit level into aggregate welfare changes. The intended aggregation rule was to sum the tax unit level changes.
Thus, for purposes of the aggregate welfare measure, I believe @martinholmer's code generates an order- and sign-preserving re-scaling of my proposed measure. (eu1 and eu2 equal my proposed welfare measure divided by the number of tax units, and the certainty-equivalence computation is a strictly increasing function of income.)
While the two approaches are very closely related, an advantage of the sum approach is that it allows one to also consider welfare impacts on sub-groups of the population (which can be obtained by summing the tax-unit impacts for that sub-group). This can be an interesting question under either of the two philosophical perspectives mentioned, though the interpretation is quite different. In the first, it's a direct statement about welfare impacts on a group of people. In the second, it's supplemental information that illustrates how the distribution of outcomes from which you are drawing changes as a result of the reform.
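Concretely, with $w_i$ denoting the sampling weights, assuming the same underlying $u$ in both calculations (and setting aside the $\bar{c}/10^{9}$ scaling in my proposal), the two measures are related by

$$\sum_i w_i\,\Delta u_i = \Big(\sum_i w_i\Big)\left(\mathrm{EU}_{\text{reform}} - \mathrm{EU}_{\text{base}}\right).$$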
@MattHJensen @feenberg @viard @codykallen @Amy-Xu @andersonfrailey
Decisions about measuring social welfare should be made by Hassett and Viard. They are not programming decisions.
I haven't seen a reply to my suggestion that a table showing marginal welfare by income, for widely varying incomes, accompany these proposals. Are we of the opinion that a reasonable value of RRA is even available for the range of incomes in the PUF?
dan
Thanks @martinholmer for the sample results. The second and third cases illustrate one important difficulty in this exercise. In general, comparing across welfare measures is not that informative. Each welfare measure provides information about the relative ranking of policy regimes according to that measure but differences across measures in differences across policies don't mean much.
Notwithstanding this difficulty, it seems desirable to illustrate the impact of different choices of \theta so that the user understands the role of risk aversion/redistribution preference/etc. in the results. The potentially over-engineered scaling parameters in the proposal were intended to facilitate this kind of comparison. That said, they definitely do not solve all problems and one could argue that this is a case of better to make the limitations of the method as obvious as possible.
(Also, something seems potentially fishy in the CRRA = 0 case in the third case. If I'm understanding correctly, that's a fully static run with zero change in tax liability and linear utility. I would expect the values to be equal? The CRRA = 0 line for case two reached approximate equality.)
@feenberg, said:
Decisions about measuring social welfare should be answered by Hassett and Viard. They are not programming decisions.
The core maintainers of Tax-Calculator are @feenberg, @martinholmer, @amy-xu, @talumbau, and myself. Whatever is committed needs to be agreed upon by members of this group, not by anyone else. Feedback from others is, of course, very welcome.
@gleiserson said:
Also, something seems potentially fishy in the CRRA = 0 case in the third case. If I'm understanding correctly, that's a fully static run with zero change in tax liability and linear utility. I would expect the values to be equal? The CRRA = 0 line for case two reached approximate equality.
Excellent point! In what I've done so far, the very low after-tax expanded-income cases are handled differently than in your proposal. If after-tax expanded income is below a threshold (which in my examples is $1,000), then that filing unit is assumed to have after-tax expanded income of $1,000, which as you described in your memo can be imagined to come from family, friends, non-modeled private charity or government assistance. That is why, in the risk-neutral case, the fully-financed certainty-equivalent difference between current-law and reform is slightly different from zero.
@gleiserson said:
The second and third cases [in the issue #1050 examples] illustrate one important difficulty in this exercise. In general, comparing across welfare measures is not that informative. Each welfare measure provides information about the relative ranking of policy regimes according to that measure but differences across measures in differences across policies don't mean much.
Agreed. My examples followed your spreadsheet layout and are useful for thinking about issues during the development stage.
@gleiserson continued:
Notwithstanding this difficulty, it seems desirable to illustrate the impact of different choices of \theta so that the user understands the role of risk aversion/redistribution preference/etc. in the results. The potentially over-engineered scaling parameters in the proposal were intended to facilitate this kind of comparison. That said, they definitely do not solve all problems and one could argue that this is a case of better to make the limitations of the method as obvious as possible.
I guess this means that when we get to the implementation stage we will have to think carefully about exactly how to package this normative welfare capability and what output to show the user.
@gleiserson said:
I propose to use a slightly modified isoelastic/power/CRRA utility function. The proposed function is the CRRA utility function above c_min and a linear function of consumption from 0 to c_min with slope equal to the marginal utility at c_min. That is,
u(c) = ((max(c,c_min)^(1-theta))-1)/(1-theta) + ((c_min)^(-theta))*min(0,c-c_min)
I understand your verbal description, but (unless I'm mixed up) your equation doesn't look like it represents your description. In particular, the expression min(0, c - c_min) is non-positive, declining from zero to -c_min as c declines from c_min to zero. Don't you want the utility of consumption for the c = c_min/2 case to be half the positive utility of consumption for the c = c_min case?
If c < c_min, everything before the + sign evaluates to u(c_min). The marginal utility at c_min is (c_min)^(-theta) and will be positive for any positive value of c_min. Thus for any value c < c_min, the entire expression is u(c_min) + (marginal utility of c_min)*(distance from c_min). The negative sign on the distance results in the second term being subtracted from the first and should yield the desired value.
Note that u(c) is negative for all values of consumption for any value of theta > 1. Thus the answer to the question in your last sentence is no. It's not a straight line between the points (c_min, u(c_min)) and (0, 0); it's an extension of the tangent line from (c_min, u(c_min)) to (0, u(c_min) + u'(c_min)*(-c_min)).
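For a concrete check, take $\theta = 2$ and $c_{\min} = 1000$ (using the $-1$ normalization of the CRRA function):

$$u(1000) = \frac{1000^{-1} - 1}{-1} = 0.999, \qquad u'(1000) = 1000^{-2} = 10^{-6},$$

$$u(500) = 0.999 + 10^{-6}\,(500 - 1000) = 0.9985.$$

Utility below $c_{\min}$ thus stays finite and increasing, with marginal utility capped at $10^{-6}$ rather than exploding as $c \to 0$.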
@gleiserson said:
If c < c_min, everything before the + sign evaluates to u(c_min). The marginal utility at c_min is (c_min)^(-theta) and will be positive for any positive value of c_min. Thus for any value c < c_min, the entire expression is u(c_min) + (marginal utility of c_min)*(distance from c_min). The negative sign on the distance results in the second term being subtracted from the first and should yield the desired value.
Thanks for correcting my thinking. I now understand exactly what you're doing.
There seems to be just one remaining question. Will your approach work in a sensible way when c is negative? There are more than a few filing units in the puf.csv sample that have negative expanded income (usually because of large business losses).
It seems as if the next step is to develop another Python script that is similar to the one I posted above in the conversation about issue #1050 but different in that it uses your modified utility function. Does that make sense?
@MattHJensen @feenberg @viard @codykallen @Amy-Xu @andersonfrailey
My best idea for handling zero/negative values of income is to simply assume a marginal utility for tax units with zero/negative values of income in the baseline and evaluate all changes using that marginal utility. For substantially negative values, I think zero is not a terrible approximation (since the tax units are likely high income and their number is small relative to the population -- though this is obviously less accurate in the theta = 0 case). For tax units with zero or slightly negative income it's less clear to me how good the approximation is, though they remain a small portion of the population. (Another approach would involve imputing baseline consumption values for these tax units, but that is a much more substantial modeling exercise.)
I've modified the reform-exputil.py script used to generate the examples so that it can use either the modified isoelastic utility function proposed in issue #1050 by Greg Leiserson or the more standard isoelastic utility function used in the earlier examples above. The only difference between these two approaches is in how they deal with filing units who have very low after-tax expanded income. The standard-utility-function approach simply assumes that things not modeled in Tax-Calculator will prevent consumption from falling below some specified minimum level. The modified-utility-function approach does allow consumption to fall below the specified minimum level, but assumes marginal utility does not rise above its level at the minimum consumption level.
I show the reform-exputil.py script at the bottom of this comment. Here are four sets of results for our reform example, all of which assume the same minimum after-tax expanded income (or consumption) level of $1,000. The four sets of results differ with respect to how the "pop-the-cap" reform is financed and with respect to whether a modified or standard utility function is used in the calculations.
(1) Reform financed by lump-sum tax with a standard utility function:
```
$ python reform-exputil.py
GL_VERSION=False
c_min=1000
Reform trt=[0.124]
Reform LST=[-416.31]
Aggregate 2013 Tax Revenues ($billion)
           base   reform     diff
paytax    919.6   1050.7   131.06
inctax   1192.6   1190.9    -1.63
bothtx   2112.2   2112.2    -0.00
crra=0 ==> CE1=50707.76 and CE2=51060.97 and pctdiff= 0.70
crra=1 ==> CE1=25702.92 and CE2=26528.13 and pctdiff= 3.21
crra=2 ==> CE1= 9478.47 and CE2=10044.15 and pctdiff= 5.97
crra=3 ==> CE1= 4106.89 and CE2= 4331.05 and pctdiff= 5.46
crra=4 ==> CE1= 2679.33 and CE2= 2796.66 and pctdiff= 4.38
```
(2) Reform financed by lump-sum tax with a modified utility function:
```
$ python reform-exputil.py
GL_VERSION=True
c_min=1000
Reform trt=[0.124]
Reform LST=[-416.31]
Aggregate 2013 Tax Revenues ($billion)
           base   reform     diff
paytax    919.6   1050.7   131.06
inctax   1192.6   1190.9    -1.63
bothtx   2112.2   2112.2    -0.00
crra=0 ==> CE1=49985.58 and CE2=50359.10 and pctdiff= 0.75
crra=1 ==> CE1=12483.63 and CE2=13148.82 and pctdiff= 5.33
crra=2 ==> CE1= 1208.18 and CE2= 1247.76 and pctdiff= 3.28
crra=3 ==> CE1=  748.17 and CE2=  771.47 and pctdiff= 3.11
crra=4 ==> CE1=  593.82 and CE2=  616.22 and pctdiff= 3.77
```
(3) Reform financed by payroll-tax-rate change with a standard utility function:
```
$ python reform-exputil.py
GL_VERSION=False
c_min=1000
Reform trt=[0.104886]
Reform LST=[0.0]
Aggregate 2013 Tax Revenues ($billion)
           base   reform     diff
paytax    919.6    920.7     1.04
inctax   1192.6   1191.5    -1.03
bothtx   2112.2   2112.2     0.00
crra=0 ==> CE1=50707.76 and CE2=50701.82 and pctdiff=-0.01
crra=1 ==> CE1=25702.92 and CE2=25866.54 and pctdiff= 0.64
crra=2 ==> CE1= 9478.47 and CE2= 9524.31 and pctdiff= 0.48
crra=3 ==> CE1= 4106.89 and CE2= 4116.43 and pctdiff= 0.23
crra=4 ==> CE1= 2679.33 and CE2= 2683.08 and pctdiff= 0.14
```
(4) Reform financed by payroll-tax-rate change with a modified utility function:
```
$ python reform-exputil.py
GL_VERSION=True
c_min=1000
Reform trt=[0.104886]
Reform LST=[0.0]
Aggregate 2013 Tax Revenues ($billion)
           base   reform     diff
paytax    919.6    920.7     1.04
inctax   1192.6   1191.5    -1.03
bothtx   2112.2   2112.2     0.00
crra=0 ==> CE1=49985.58 and CE2=49979.25 and pctdiff=-0.01
crra=1 ==> CE1=12483.63 and CE2=12558.34 and pctdiff= 0.60
crra=2 ==> CE1= 1208.18 and CE2= 1208.37 and pctdiff= 0.02
crra=3 ==> CE1=  748.17 and CE2=  747.93 and pctdiff=-0.03
crra=4 ==> CE1=  593.82 and CE2=  593.51 and pctdiff=-0.05
```
Notice that in both the standard utility function with minimum consumption and Greg's modified utility function approach, the change in certainty-equivalent after-tax expanded income when crra=0 is not exactly zero. It would be exactly zero if no filing units had after-tax expanded income below the minimum level. So, this is a blemish for both approaches.
And finally, here is the script that generates these four sets of results on the lump-sum-tax branch included in pending pull request #1066:
"""
The reform-exputil.py script explores ideas in GL memo cited in T-C issue #1050
"""
import sys
import math
import numpy as np
import pandas as pd
from taxcalc import Policy, Records, Calculator, weighted_sum
GL_VERSION = True
c_min = 1000
# trt: current-law 0.124 ; full-financing 0.104886
trt = 0.104886
# lst: current-law 0.0 ; full-financing -416.31
lst = 0.0
def isoelastic_utility_function(consumption, crra):
"""
Return isoelastic utility of specified non-negative consumption value
given specified non-negative value of the coefficient of relative
risk aversion crra.
Note: consumption and crra are floats
Note: returned utility value is a float
"""
if GL_VERSION:
if consumption >= c_min:
if crra == 1.0:
return math.log(consumption)
else:
return math.pow(consumption, (1.0 - crra)) / (1.0 - crra)
else: # if consumption < c_min
if crra == 1.0:
tu_at_c_min = math.log(c_min)
else:
tu_at_c_min = math.pow(c_min, (1.0 - crra)) / (1.0 - crra)
mu_at_c_min = math.pow(c_min, -crra)
tu_at_c = tu_at_c_min + mu_at_c_min * (consumption - c_min)
return tu_at_c
else:
cons = max(consumption, c_min)
if crra == 1.0:
return math.log(cons)
else:
return math.pow(cons, (1.0 - crra)) / (1.0 - crra)
def expected_utility(cons, prob, crra):
"""
Return expected utility of consumption cons that has probability prob given
the specified non-negative value of constant-relative risk-aversion crra.
Note: prob and cons are arrays; crra is a float
Note: returned expected utility value is a float
"""
utility = cons.apply(isoelastic_utility_function, args=(crra,))
return np.inner(utility, prob)
def certainty_equivalent(exputil, crra):
"""
Return certainty-equivalent consumption for given expected utility exputil
and given constant-relative risk-aversion parameter crra of an
isoelastic utility function.
Note: exputil and crra are floats
Note: returned certainty equivalent value is a float
"""
if GL_VERSION:
if crra == 1.0:
tu_at_c_min = math.log(c_min)
else:
tu_at_c_min = math.pow(c_min, (1.0 - crra)) / (1.0 - crra)
if exputil >= tu_at_c_min:
if crra == 1.0:
return math.exp(exputil)
else:
return math.pow((exputil * (1.0 - crra)), (1.0 / (1.0 - crra)))
else:
mu_at_c_min = math.pow(c_min, -crra)
return ((exputil - tu_at_c_min) / mu_at_c_min) + c_min
else:
if crra == 1.0:
return math.exp(exputil)
else:
return math.pow((exputil * (1.0 - crra)), (1.0 / (1.0 - crra)))
def print_revenue(year, df1, df2):
"""
Print aggregate revenue under baseline and reform.
Note: year is int; df1 and df2 are Pandas DataFrame objects for
baseline and reform, respectively.
Note: nothing is returned.
"""
print 'Aggregate {} Tax Revenues ($billion)'.format(year)
print ' base reform diff'
resstr = '{} {:8.1f} {:8.1f} {:8.2f}'
baseln = weighted_sum(df1, '_payrolltax') * 1.0e-9
reform = weighted_sum(df2, '_payrolltax') * 1.0e-9
print resstr.format('paytax', baseln, reform, reform - baseln)
baseln = weighted_sum(df1, '_iitax') * 1.0e-9
reform = weighted_sum(df2, '_iitax') * 1.0e-9
print resstr.format('inctax', baseln, reform, reform - baseln)
baseln = weighted_sum(df1, '_combined') * 1.0e-9
reform = weighted_sum(df2, '_combined') * 1.0e-9
print resstr.format('bothtx', baseln, reform, reform - baseln)
def main():
"""
Contains the high-level logic of the script.
"""
# pylint: disable=too-many-locals
# specify baseline and reform Calculator objects and calculate results
calc1 = Calculator(policy=Policy(), records=Records(), verbose=False)
calc2 = Calculator(policy=Policy(), records=Records(), verbose=False)
reform = {
2013: {"_SS_Earnings_c": [9e99],
"_FICA_ss_trt": [trt],
"_LST": [lst]}
}
calc2.policy.implement_reform(reform)
calc1.calc_all()
calc2.calc_all()
# tabulate aggregate 2013 tax revenues for baseline and reform
record_columns = ['s006', '_payrolltax', '_iitax',
'_combined', '_expanded_income']
out = [getattr(calc1.records, col) for col in record_columns]
df1 = pd.DataFrame(data=np.column_stack(out), columns=record_columns)
out = [getattr(calc2.records, col) for col in record_columns]
df2 = pd.DataFrame(data=np.column_stack(out), columns=record_columns)
print 'GL_VERSION={}'.format(GL_VERSION)
print 'c_min={}'.format(c_min)
print 'Reform trt={}'.format(reform[2013]['_FICA_ss_trt'])
print 'Reform LST={}'.format(reform[2013]['_LST'])
print_revenue(2013, df1, df2)
# calculate sample-weighted probability of each filing unit
prob = np.divide(df1['s006'], df1['s006'].sum())
# calculate consumption of each filing unit under baseline and reform
# assuming consumption equals after-tax expanded income
cons1 = df1['_expanded_income'] - df1['_combined']
cons2 = df2['_expanded_income'] - df2['_combined']
# calculate expected utility of baseline and reform consumption
crra_values = [0, 1, 2, 3, 4]
for crra in crra_values:
eu1 = expected_utility(cons1, prob, crra)
eu2 = expected_utility(cons2, prob, crra)
pctdiff = 100.0 * (eu2 - eu1) / abs(eu1)
# print ('crra={} ==> EU1={:.6e} and EU2={:.6e} '
# 'and pctdiff={:5.2f}').format(crra, eu1, eu2, pctdiff)
ce1 = certainty_equivalent(eu1, crra)
ce2 = certainty_equivalent(eu2, crra)
pctdiff = 100.0 * (ce2 - ce1) / abs(ce1)
print ('crra={} ==> CE1={:8.2f} and CE2={:8.2f} '
'and pctdiff={:5.2f}').format(crra, ce1, ce2, pctdiff)
# normal exit of main function
return 0
if __name__ == '__main__':
sys.exit(main())
Comments?
@MattHJensen @feenberg @gleiserson @viard @codykallen @Amy-Xu @andersonfrailey
I am confused by the output in case (2). When CRRA = 0 my proposed modified utility function is identical to the "traditional" utility function. That is, u(c) = c if theta = 0 and so the tangent line for c < c_min is identical to the underlying function. Given this, a redistribution of income with no effect on total income should leave utility unchanged. Yet the output indicates certainty equivalent utility is up.
On the other hand, in case (1) there are (at least) two intuitions for why utility increases. We can think of it as a traditional utility function with an unmodeled income-generating process and the reform causes income to be created from nothing (good for welfare). Alternatively, we can think of it as a modified utility function in which people are indifferent between all levels of income less than c_min. Then the tax reform redistributes from people with zero marginal utility to people with positive marginal utility, increasing welfare.
While I understand how the code leads to a positive welfare effect for case (1), I'm not seeing what features of the code cause case (2) to differ from my economic intuition.
(In addition, the increase in certainty-equivalent utility for case (1) with CRRA = 0 confuses me. The increase in CE utility is 353.21, which seems quite large relative to the amount of the lump-sum rebate. Depending on what the average tax unit size is in the input data, this suggests that 1/4 of the rebate is going to people who are below c_min? I wonder if there is something else occurring outside this code that causes strange behavior across these cases.)
@gleiserson said:
I am confused by the output in case (2). When CRRA = 0 my proposed modified utility function is identical to the "traditional" utility function. That is, u(c) = c if theta = 0 and so the tangent line for c < c_min is identical to the underlying function. Given this, a redistribution of income with no effect on total income should leave utility unchanged. Yet the output indicates certainty equivalent utility is up [i.e., not zero].
You are right to wonder why, in the risk-neutral (crra=0) case, the certainty-equivalent after-tax expanded incomes are not exactly the same for the baseline and reform (because we are conducting a static analysis of a fully financed tax reform). In order to answer your question I've explored a number of issues and discovered something about how the expanded income variable is computed that needs to be fixed (see Tax-Calculator pull request #1077 if you're interested in the details).
So, now that I've fixed that problem, let me respond to your concerns using a different tax reform example. In this reform, the fifth, sixth, and seventh tax brackets have their rates set to 0.30 (instead of being 0.33, 0.35, and 0.396). This rate-cap reform has no other provisions. Here are the results of a static analysis of this reform using your modified isoelastic utility function:
```
$ python reform-exputil2.py
behavioral_response=False
c_min=1000
Reform hirt=0.3
Reform LST=196.813
Aggregate 2013 Tax Revenues ($billion)
       baseline   reform  difference
paytax    919.6    919.6       0.000
inctax   1193.1   1131.9     -61.188
alltax   2112.8   2112.8       0.000
NOTE: avginc1=63025.81 and avginc2=63025.81 ==> diff=0.00
Certainty-Equivalent After-Tax Expanded Income ($)
crra  baseline    reform   pctdiff
0     50059.03  50059.03     -0.00
1     12644.29  12308.44     -2.66
2      1225.42   1204.28     -1.73
3       759.64    746.68     -1.71
4       605.23    592.65     -2.08
```
Notice that the average before-tax expanded income in the baseline and the reform are exactly the same, and that a lump-sum tax of $196.813 per person exactly offsets the income tax revenue loss from the rate-cap reform. Also, notice that, in this situation, your expectation is realized: the certainty-equivalent after-tax expanded incomes (or "consumption") in the baseline and reform are exactly the same when crra=0.
Now let's assume that the behavioral substitution elasticity is 0.25, so that the lower top rates increase work effort and thus earnings after the reform. The increase in earnings among those in the top tax brackets will imply the need for a smaller lump-sum tax to fully finance this tax reform. Here are the results from the same script (which I'll show below) except that the behavioral elasticity of substitution has been changed from zero to 0.25 and the size of the lump-sum tax has been lowered from $196.813 to $129.986 per person.
```
$ python reform-exputil2.py
behavioral_response=True with _BE_sub=0.25
c_min=1000
Reform hirt=0.3
Reform LST=129.986
Aggregate 2013 Tax Revenues ($billion)
       baseline   reform  difference
paytax    919.6    921.2       1.614
inctax   1193.1   1151.1     -42.026
alltax   2112.8   2112.8       0.000
NOTE: avginc1=63025.81 and avginc2=63433.14 ==> diff=407.33
Certainty-Equivalent After-Tax Expanded Income ($)
crra  baseline    reform   pctdiff
0     50059.03  50466.37      0.81
1     12644.29  12429.50     -1.70
2      1225.42   1211.74     -1.12
3       759.64    751.33     -1.09
4       605.23    597.17     -1.33
```
Notice that with the increase in average before-tax expanded income (by $407.33) the certainty-equivalent after-tax expanded incomes in the crra=0 case are no longer equal in the baseline and the reform situation. The crra=0 assumption implies the reform increases social welfare because the increase in earnings raises some people's after-tax income and the shift in after-tax income from the poor to the rich is not a factor in a social welfare function that assumes crra=0. On the other hand, when the upward redistribution of after-tax income is given some weight in the social welfare function (that is, crra>=1), the overall assessment of this tax reform is negative despite the increase in earnings among those in the top tax brackets.
Here is the script that generated the above results:
$ cat reform-exputil2.py

```python
"""
The reform-exputil2.py script explores ideas in GL memo cited in T-C issue #1050
"""
import sys
import math
import numpy as np
import pandas as pd
from taxcalc import Policy, Records, Calculator, Behavior, weighted_sum

c_min = 1000
esub = 0.25
behavioral_response = {2013: {"_BE_sub": [esub]}}
hirt = 0.30
lst = 129.986  # with esub=0.00: 196.813 ; with esub=0.25: 129.986
reform = {
    2013: {"_II_rt5": [hirt], "_PT_rt5": [hirt],
           "_II_rt6": [hirt], "_PT_rt6": [hirt],
           "_II_rt7": [hirt], "_PT_rt7": [hirt],
           "_LST": [lst]}
}
# current law: rt5=0.33, rt6=0.35, rt7=0.396 (with rt4=0.28)


def isoelastic_utility_function(consumption, crra):
    """
    Return isoelastic utility of specified non-negative consumption value
    given specified non-negative value of the coefficient of relative
    risk aversion crra.
    Note: consumption and crra are floats
    Note: returned utility value is a float
    """
    if consumption >= c_min:
        if crra == 1.0:
            return math.log(consumption)
        else:
            return math.pow(consumption, (1.0 - crra)) / (1.0 - crra)
    else:  # if consumption < c_min
        if crra == 1.0:
            tu_at_c_min = math.log(c_min)
        else:
            tu_at_c_min = math.pow(c_min, (1.0 - crra)) / (1.0 - crra)
        mu_at_c_min = math.pow(c_min, -crra)
        tu_at_c = tu_at_c_min + mu_at_c_min * (consumption - c_min)
        return tu_at_c


def expected_utility(consumption, probability, crra):
    """
    Return expected utility of consumption that has probability given the
    specified non-negative value of constant-relative risk-aversion crra.
    Note: consumption and probability are arrays; crra is a float
    Note: returned expected utility value is a float
    """
    utility = consumption.apply(isoelastic_utility_function, args=(crra,))
    return np.inner(utility, probability)


def certainty_equivalent(exputil, crra):
    """
    Return certainty-equivalent consumption for given expected utility
    exputil and given constant-relative risk-aversion parameter crra of
    an isoelastic utility function.
    Note: exputil and crra are floats
    Note: returned certainty equivalent value is a float
    """
    if crra == 1.0:
        tu_at_c_min = math.log(c_min)
    else:
        tu_at_c_min = math.pow(c_min, (1.0 - crra)) / (1.0 - crra)
    if exputil >= tu_at_c_min:
        if crra == 1.0:
            return math.exp(exputil)
        else:
            return math.pow((exputil * (1.0 - crra)), (1.0 / (1.0 - crra)))
    else:
        mu_at_c_min = math.pow(c_min, -crra)
        return ((exputil - tu_at_c_min) / mu_at_c_min) + c_min


def print_revenue(year, df1, df2):
    """
    Print aggregate revenue under baseline and reform.
    Note: year is int; df1 and df2 are Pandas DataFrame objects for
          baseline and reform, respectively.
    Note: nothing is returned.
    """
    print 'Aggregate {} Tax Revenues ($billion)'.format(year)
    print '       baseline   reform  difference'
    resstr = '{} {:8.1f} {:8.1f} {:11.3f}'
    baseln = weighted_sum(df1, '_payrolltax') * 1.0e-9
    reform = weighted_sum(df2, '_payrolltax') * 1.0e-9
    print resstr.format('paytax', baseln, reform, reform - baseln)
    baseln = weighted_sum(df1, '_iitax') * 1.0e-9
    reform = weighted_sum(df2, '_iitax') * 1.0e-9
    print resstr.format('inctax', baseln, reform, reform - baseln)
    baseln = weighted_sum(df1, '_combined') * 1.0e-9
    reform = weighted_sum(df2, '_combined') * 1.0e-9
    print resstr.format('alltax', baseln, reform, reform - baseln)
    avginc1 = weighted_sum(df1, '_expanded_income') / df1['s006'].sum()
    avginc2 = weighted_sum(df2, '_expanded_income') / df2['s006'].sum()
    msg = 'NOTE: avginc1={:.2f} and avginc2={:.2f} ==> diff={:.2f}'
    print msg.format(avginc1, avginc2, (avginc2 - avginc1))


def main():
    """
    Contains the high-level logic of the script.
    """
    # pylint: disable=too-many-locals
    # specify baseline and reform Calculator objects and calculate results
    calc1 = Calculator(policy=Policy(), records=Records(),
                       verbose=False)
    calc1.calc_all()
    calc2 = Calculator(policy=Policy(), records=Records(),
                       behavior=Behavior(), verbose=False)
    calc2.policy.implement_reform(reform)
    calc2.behavior.update_behavior(behavioral_response)
    calc2 = Behavior.response(calc1, calc2)
    # extract data from calc1 and calc2; optionally remove records
    record_columns = ['s006', '_payrolltax', '_iitax',
                      '_combined', '_expanded_income']
    out = [getattr(calc1.records, col) for col in record_columns]
    df1 = pd.DataFrame(data=np.column_stack(out), columns=record_columns)
    out = [getattr(calc2.records, col) for col in record_columns]
    df2 = pd.DataFrame(data=np.column_stack(out), columns=record_columns)
    # tabulate aggregate 2013 tax revenues for baseline and reform
    if calc2.behavior.has_response():
        print 'behavioral_response=True with _BE_sub={}'.format(esub)
    else:
        print 'behavioral_response=False'
    print 'c_min={}'.format(c_min)
    print 'Reform hirt={}'.format(hirt)
    print 'Reform LST={}'.format(lst)
    print_revenue(2013, df1, df2)
    # calculate sample-weighted probability of each filing unit
    prob_raw = np.divide(df1['s006'], df1['s006'].sum())
    prob = np.divide(prob_raw, prob_raw.sum())  # handle any rounding error
    # calculate consumption of each filing unit under baseline and reform
    # assuming consumption equals after-tax expanded income
    cons1 = df1['_expanded_income'] - df1['_combined']
    cons2 = df2['_expanded_income'] - df2['_combined']
    # calculate expected utility of baseline and reform consumption
    print 'Certainty-Equivalent After-Tax Expanded Income ($)'
    print 'crra  baseline    reform   pctdiff'
    resstr = '{}  {:8.2f}  {:8.2f} {:9.2f}'
    crra_values = [0, 1, 2, 3, 4]
    for crra in crra_values:
        eu1 = expected_utility(cons1, prob, crra)
        eu2 = expected_utility(cons2, prob, crra)
        ce1 = certainty_equivalent(eu1, crra)
        ce2 = certainty_equivalent(eu2, crra)
        pctdiff = 100.0 * (ce2 - ce1) / ce1
        print resstr.format(crra, ce1, ce2, pctdiff)
    # normal exit of main function
    return 0


if __name__ == '__main__':
    sys.exit(main())
```
@MattHJensen @feenberg @Amy-Xu @GoFroggyRun @andersonfrailey @codykallen
Thanks @martinholmer (and sorry I can't be more help with the coding side of things). These results make sense to me.
Note, however, that once we get to step two of the proposal and start incorporating changes in labor supply/avoidance/etc., it no longer makes sense to feed income directly into the utility function for evaluating changes in welfare. Such an approach implicitly assumes leisure/tax-preferred activities/etc. have no value. Instead, we can get a first-order approximation to post-reform welfare by plugging into the utility function the sum of (i) baseline after-tax income, (ii) the change in static tax liability, and (iii) financing less revenue feedback. The welfare gain/loss of a proposal then emerges from the difference between (ii) and (iii). (And a fully static simulation necessarily finds no gain in the CRRA = 0 case because (ii) and (iii) are equal.) This won't change the sign of any of the results above, but it will change the magnitude.
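For concreteness, a minimal sketch of that first-order approximation follows. The three filing-unit values, the 15 percent revenue offset, and the equal weighting are illustrative assumptions, not Tax-Calculator output or API:

```python
import numpy as np
import pandas as pd

# Sketch: first-order approximation to post-reform welfare.
# (i)  baseline after-tax income for three hypothetical filing units
cons1 = pd.Series([30e3, 60e3, 120e3])
# (ii) static change in after-tax income produced by the tax cut
d_static = pd.Series([300.0, 900.0, 2400.0])
# (iii) lump-sum financing less an assumed 15 percent revenue feedback;
#       financing reduces disposable income, so it enters with a minus sign
financing = pd.Series([1200.0, 1200.0, 1200.0])
net_financing = (1.0 - 0.15) * financing
proxy_cons2 = cons1 + d_static - net_financing

# evaluate with an isoelastic (CRRA) utility, here crra = 2.0 and
# equal weights (the script above uses sample weights s006 instead)
crra = 2.0
eu1 = np.mean(np.power(cons1, 1.0 - crra) / (1.0 - crra))
eu2 = np.mean(np.power(proxy_cons2, 1.0 - crra) / (1.0 - crra))
print('welfare improves: {}'.format(eu2 > eu1))
```

With a revenue offset of zero, aggregate (ii) equals aggregate (iii) in this example, and a crra = 0 evaluation would find no gain, matching the point above.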
@gleiserson said:
Note, however, that once we ... start incorporating changes in labor supply/avoidance/etc., it no longer makes sense to feed income directly into the utility function for evaluating changes in welfare. Such an approach implicitly assumes leisure/tax-preferred activities/etc. have no value.
But what if we are assuming no labor-supply response to the rate-cap reform, and instead assuming that the increase in after-tax income is largely saved, so that a higher rate of investment leads to faster economic growth? If we imagine conducting our social welfare analysis several years after the reform is first implemented, before-tax income will be higher because the faster economic growth has raised wages. In this case there is no change in leisure to evaluate. How do you propose to compute social welfare in this case? The process you outlined in your previous comment seems not to be relevant because there is no change in labor supply. How could a limited model like Tax-Calculator tell the difference between the faster-economic-growth case and the larger-labor-supply case?
@MattHJensen @feenberg @Amy-Xu @GoFroggyRun @andersonfrailey @codykallen
Provided the incidence assumptions used in distributing the static tax change are reasonable, the methodology works the same way and generates a PDV-ish concept (provided also that the revenue offset incorporated into the analysis is in PDV terms). In the case of a tax change that affects wages, the incidence of the tax will fall partially on labor, and thus a portion of the benefits/losses should be distributed to labor when the tax changes. Put differently, the proposed methodology requires economically valid incidence assumptions for each tax used, but analyzing the welfare effects of changes in each tax does not require additional tax-specific analyses. (The cost in the intertemporal case analogous to reduced leisure in the one-period case is the cost of foregone consumption in other periods.)
Consistent with the structure of basic tax models, this methodology is designed to generate informative snapshot estimates. One could also do an analysis that more explicitly incorporates the intertemporal aspects of tax reform and thus concludes that a tax cut generating effects like those you suggest would cause welfare losses in the short term and larger welfare gains in the long term. However, a reasonable summary measure needs to reflect both effects.
I do not mean to downplay the difficulty of developing fully general, economically valid incidence assumptions for all taxes, nor the limitations of the incidence assumptions used in common tax models. (For example, most tax modeling imposes a very strong distinction between corporate tax changes and individual tax changes that may (or may not) be stronger than the distinction that should exist.) Rather, my point is that this methodology is designed to operate in a reasonable way layered on top of whatever incidence assumptions a particular model uses.
(The key assumption required for this methodology is consumer optimization in the baseline. If there is an optimization failure such that people are not indifferent to slight changes in a choice variable and that choice variable responds to the tax change under consideration, a different methodology would be required. One could imagine scenarios in which this might be relevant, such as the introduction of a new tax for which no avoidance efforts are currently pursued and thus for which the relevant choice variables are at a corner solution.)
@MattHJensen @feenberg @Amy-Xu @GoFroggyRun @andersonfrailey @codykallen @martinholmer
@gleiserson said:
Note, however, that once we get to step two of the proposal and start incorporating changes in labor supply/avoidance/etc., it no longer makes sense to feed income directly into the utility function for evaluating changes in welfare. Such an approach implicitly assumes leisure/tax-preferred activities/etc. have no value. Instead, we can get a first-order approximation to post-reform welfare by plugging into the utility function the sum of (i) baseline after-tax income, (ii) the change in static tax liability, and (iii) financing less revenue feedback. The welfare gain/loss of a proposal then emerges from the difference between (ii) and (iii). (And a fully static simulation necessarily finds no gain in the CRRA = 0 case because (ii) and (iii) are equal.) This won't change the sign of any of the results above, but it will change the magnitude.
I've now returned to working on issue #1050. @gleiserson, thanks for your suggested approach above.
Can you provide us with some citations to the economic research literature that describe and use this approach? Also, what precisely is meant by "financing" and by "revenue feedback"? How would these two terms be computed person by person in situations where offsetting tax-rate changes (rather than lump-sum taxes) were used to finance the reform?
@MattHJensen
My apologies for the severely delinquent response.
There is not much in the academic literature that closely corresponds to the proposed methodology because the proposal is designed to facilitate the analysis of incompletely specified policies, and the academic literature typically has examined fully specified tax reforms.
That said, the proposal does relate to several papers. For example, Hendren’s recent TP&E paper on the policy elasticity (http://scholar.harvard.edu/files/hendren/files/the_policy_elasticity.pdf) is dealing with many of the same conceptual issues and uses a (very loosely) similar decomposition. In addition, the approximation at the core of the proposed methodology is the same approximation implicit in Mankiw and Weinzierl’s 2006 back-of-the-envelope guide to dynamic scoring (http://www.people.hbs.edu/mweinzierl/paper/dynamicscoring.pdf) in that the results in both are true only for a differential tax change. Finally, the entire issue is closely related to the dynamic distribution analysis of the 2001/2003 tax cuts that Elmendorf, Furman, Gale, and Harris did in 2008 (https://www.brookings.edu/wp-content/uploads/2016/06/06_taxcuts_gale.pdf).
I will send separately a short illustrative derivation of the relevant theoretical result in a static labor supply model that may be helpful context in looking at any of the other references.
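In the meantime, a bare-bones version of the kind of result meant here (a sketch under textbook assumptions, not necessarily the derivation Greg will circulate): suppose a household chooses hours $h$ to maximize $u(c,h)$ subject to $c = (1 - \tau)wh,$ and let $V(\tau)$ denote the resulting indirect utility. By the envelope theorem, the behavioral response $dh/d\tau$ has no first-order welfare effect because hours were already chosen optimally, so

$$\frac{dV}{d\tau} = u_{c} \times \left. \frac{\partial c}{\partial \tau} \right|_{h = h^{*}} = - u_{c} \times wh^{*},$$

i.e., the money-metric welfare loss from a small increase $d\tau$ is $wh^{*}d\tau,$ exactly the mechanical (static) change in tax liability. This is what licenses feeding the static change in after-tax income, plus financing less revenue feedback, into the utility function as a first-order approximation; it is also why the consumer-optimization caveat above matters, since at a corner solution the envelope argument fails and the behavioral term no longer drops out.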
On the specific questions posed:
Financing, as I used the term, is the per-capita value of the static cost of the tax cut. Revenue feedback is the per-capita value of the offsetting revenue response (as before, this can reflect both conventional behavioral responses and macroeconomic feedback). There is no great importance in thinking about them separately, so it's fine to think of them jointly as the per-capita net impact on the deficit. Note also (relevant for the next question) that the net amount could be positive, as might occur in the case of a tax reform designed to be revenue-neutral on a static basis while increasing output.
In the case of offsetting rate changes, those offsetting policy changes could be combined with the original tax cut package and run through the model together. The financing/revenue feedback would then be any residual due to a partial offset (or, hypothetically, more offsets than were required).
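To make the accounting concrete, here is a minimal sketch of the per-unit financing calculation described above; the function name is hypothetical, and the column names simply follow the script earlier in this thread:

```python
def lump_sum_financing(df1, df2, offset=0.15):
    """
    Return the lump-sum amount per tax unit that finances the static
    cost of a reform, net of an assumed revenue-feedback share (offset).
    df1 and df2 are baseline and reform DataFrames with sampling
    weights in 's006' and combined tax liability in '_combined'.
    """
    # aggregate static tax cut in dollars (positive when taxes fall)
    static_cost = ((df1['_combined'] - df2['_combined'])
                   * df1['s006']).sum()
    # financing less revenue feedback = net impact on the deficit
    net_cost = (1.0 - offset) * static_cost
    units = df1['s006'].sum()  # weighted count of tax units
    return net_cost / units    # could instead allocate per person
```

The returned amount would then be subtracted from each unit's after-tax income before the welfare calculation (e.g., `cons2 - lump_sum_financing(df1, df2)`); for a reform that raises revenue on net, it comes out negative, i.e., a lump-sum rebate.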
@gleiserson, does tax-calculator now have the capabilities that you are looking for, and do you know how to use them?
I'm wondering if this issue should be closed.
cc @martinholmer
@MattHJensen, My recollection of issue #1050 is that:
- there was agreement about how to conduct the normative welfare analysis when there was no behavioral response or growth response to the policy reform, but
- there was not a consensus about how to conduct the normative welfare analysis when there was a behavioral or growth response to the policy reform.
Given that the second case is still a matter of active research, the Tax-Calculator CLI `--ceeu` option has been coded (in taxcalc/taxcalcio.py) to conduct a normative welfare analysis of a reform only when there is no behavioral response and no growth response to the reform.
Given the June 2017 comment on issue #1050 by @martinholmer, it seems as if #1050 should be closed. If there is a demand for normative welfare analysis when there is a behavioral or growth response to a policy reform, then a new issue can be raised to discuss how to do that.
@gleiserson (Greg Leiserson) emailed me a detailed proposal for adding proxy welfare measures to Tax-Calculator and potentially TaxBrain output tables.
Proposal for proxy welfare measures.docx
Table Shells.xlsx
I will be studying and thinking about this over the next week or so, and I will leave comments here. I'm hoping that others will find this interesting and do the same.
cc @martinholmer @feenberg @viard @andersonfrailey @Amy-Xu @GoFroggyRun @codykallen