Closed andersonfrailey closed 5 years ago
I got $96B when also raising the rates on capital gains: https://www.ospc.org/taxbrain/38482/
@andersonfrailey, for those with incomes that large, I suspect many of them have incomes primarily from investments. If that's the case, much of their incomes will be subject to the lower tax rates on long-term capital gains and qualified dividends, based on the `_CG_*` parameters.
@andersonfrailey, did the responses from Max and Cody answer your question?
@martinholmer yes, closing.
Anyone know how Aparna Mathur got only $16.3 billion using a static Tax-Calculator analysis in her Bloomberg Tax piece?
With ordinary income as the tax base, which is defined as wages, salaries, interest, and business income, implementing a 70 percent tax above $10 million would increase the tax revenue generated (assuming no behavioral response) in 2019 by $16.3 billion.
She appears to be including capital gains:
I begin by using total taxable income reported by those earning above $10 million as the tax base. The taxable income concept used here includes ordinary income, as well as capital gains and dividends income. While we apply the tax rate and the ETI to this income, it is important to remember that capital gains and dividends are taxed at different rates and the elasticity associated with that income has in some cases been estimated to be higher than 0.25 (Bogart and Gentry, 1993).
@MaxGhenis, I'm not sure, but the best approach is probably to email her RA to ask about the reform dictionary (or JSON) she used.
@MaxGhenis, I reached out to Aparna's RA, Erin Melly, and here is what she said:
- Define ordinary income as `e00200` + `e00300` + `e00900`
- Only look at those with ordinary income above $10 million
- Find the total of ordinary income that is subject to this tax by subtracting $10 million from all of these earners’ ordinary income.
- Find the weighted sum of this (about $49 billion)
- We did the rest in Excel because we did not want to use the behavioral response package in taxcalc. That package applies the ETI to the entire population and we wanted to only apply the response to top earners. So, you simply apply the rate of 37% to the $49 billion to get tax revenue under current law. Then apply a rate of 70% to get the tax revenue under the proposal. The difference of these is about $16 billion.
She also sent this Excel spreadsheet (the calculation in question is on row 16). Happy to put you guys in touch if you have any more questions.
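The steps Erin describes can be sketched on toy data. This is a hypothetical three-record stand-in for the PUF (the actual analysis uses the confidential PUF file and its sampling weights); the variable names and the weight-in-hundredths convention follow Tax-Calculator's usage:

```python
import pandas as pd

# Hypothetical micro-data standing in for the PUF: wages (e00200),
# taxable interest (e00300), Schedule C business income (e00900),
# and sampling weights (s006, stored in hundredths as in Tax-Calculator).
df = pd.DataFrame({
    "e00200": [30e6, 5e6, 12e6],
    "e00300": [2e6, 1e6, 0.0],
    "e00900": [0.0, 0.0, 3e6],
    "s006":   [150.0, 200.0, 100.0],
})

THRESHOLD = 10e6

# Step 1: define ordinary income as e00200 + e00300 + e00900.
df["ordinary"] = df["e00200"] + df["e00300"] + df["e00900"]

# Step 2: keep only filers with ordinary income above $10 million.
top = df[df["ordinary"] > THRESHOLD]

# Steps 3-4: weighted sum of ordinary income in excess of $10 million
# (divide s006 by 100 to convert weights from hundredths).
excess = ((top["ordinary"] - THRESHOLD) * top["s006"] / 100).sum()

# Step 5: revenue gain from raising the rate on the excess from 37% to 70%.
revenue_gain = (0.70 - 0.37) * excess
```

On the real data, `excess` would be the roughly $49 billion base Erin mentions, and `revenue_gain` the roughly $16 billion figure.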
@Peter-Metz quoted Erin Melly in issue #2189:
- We did the rest in Excel because we did not want to use the behavioral response package in taxcalc. That package applies the ETI to the entire population and we wanted to only apply the response to top earners.
More proof that the people asking for more documentation have not read the existing documentation.
There is even a recipe in the Cookbook that shows how to apply different elasticities to different subgroups of the population using the `quantity_response` utility function.
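The idea behind that recipe can be illustrated with a simplified stand-in for `quantity_response` (the real Tax-Calculator utility also takes income-elasticity arguments; check its docstring for the full signature). The point is that the elasticity is just an array, so it can differ by subgroup:

```python
import numpy as np

def quantity_response(quantity, price_elasticity,
                      aftertax_price1, aftertax_price2):
    """Simplified stand-in for Tax-Calculator's quantity_response
    utility: the response is the quantity times the elasticity times
    the percentage change in the after-tax price (1 - MTR)."""
    pct_change = aftertax_price2 / aftertax_price1 - 1.0
    return quantity * price_elasticity * pct_change

income = np.array([2e6, 15e6, 40e6])   # hypothetical taxable incomes
mtr1 = np.array([0.37, 0.37, 0.37])    # baseline marginal rates
mtr2 = np.array([0.37, 0.70, 0.70])    # reform marginal rates

# Apply a 0.25 elasticity only to filers above $10 million of income,
# and a zero elasticity to everyone else.
elasticity = np.where(income > 10e6, 0.25, 0.0)
response = quantity_response(income, elasticity, 1 - mtr1, 1 - mtr2)
```

Filers below the threshold get a zero response; those above it see their income shrink in proportion to the drop in the net-of-tax rate.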
@Peter-Metz mentioned from Erin Melly, that they defined ordinary income as e00200 + e00300 + e00900
. They appear to be omitting e02000
(Sch E income, which includes partnership, S corporation, rental royalty); this income is definitely relevant for high earners, as it's highly concentrated among high-income filers.
I see, Aparna leads with the ordinary income calculation, but later on includes the taxable income calculation more analogous to what TaxBrain would show:
Applying a 70 percent tax rate to income over $10 million and assuming no behavioral response, generates an additional $92.8 billion in tax revenue. However, assuming an elasticity of these individuals of 0.25 only generates $67 billion in revenue from a 70 percent tax—approximately $25 billion less than the static estimation. Furthermore, an even stricter behavioral assumption of an elasticity of 0.6 only generates $30.9 billion additional in tax revenue.
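The quoted figures can be reproduced with back-of-the-envelope arithmetic, assuming (my reading, not stated explicitly in the piece) that the taxable base above $10 million shrinks by the ETI times the percentage change in the net-of-tax rate before the 70% rate is applied:

```python
# Static base in excess of $10 million implied by the $92.8B figure:
# 92.8 = (0.70 - 0.37) * base  =>  base ~ $281B.
static_base = 92.8 / (0.70 - 0.37)

# Percentage change in the net-of-tax rate: (1-0.70)/(1-0.37) - 1 ~ -52.4%.
pct_change_notr = (1 - 0.70) / (1 - 0.37) - 1.0

def revenue_gain(eti):
    """Revenue gain ($B) over current law: shrink the base by
    eti * pct_change_notr, tax it at 70%, and subtract the 37%
    current-law revenue on the unshrunk base."""
    new_base = static_base * (1 + eti * pct_change_notr)
    return 0.70 * new_base - 0.37 * static_base

gain_static = revenue_gain(0.0)   # ~ $92.8B
gain_025 = revenue_gain(0.25)     # ~ $67B
gain_06 = revenue_gain(0.6)       # ~ $31B
```

These reproduce the $92.8B, $67B, and $30.9B figures in the quote to within rounding.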
@martinholmer said
There is even a recipe in the Cookbook that shows how to apply different elasticities to different subgroups of the population using the `quantity_response` utility function.
Could you point me to this? The `behresp` documentation (https://pslmodels.github.io/Behavioral-Responses) points to `taxcalc` recipe 2 (https://pslmodels.github.io/Tax-Calculator/cookbook.html#recipe02, https://pslmodels.github.io/Tax-Calculator/recipe02.py.html), which doesn't invoke `quantity_response`:
behresp_json = '{"BE_sub": {"2018": 0.25}}'
Oh I see, it's in `taxcalc` recipe 4. I'm confused why this works when it doesn't import `behresp` though; I thought `taxcalc` no longer had behavioral functionality?
Looking through the `quantity_response` function and the recipe, I'm still unclear on how to use these together to do something like what Aparna and Erin did, and what's described in the `behresp` documentation:
The Behavioral-Responses logic assumes that the parameters apply to all filing units. If you want to estimate responses where the value of the parameters vary across (say, earnings) groups, you can use the Tax-Calculator quantity_response function. A recipe for doing this is contained in the Tax-Calculator Cookbook. That recipe simply estimates the responses. But the techniques used in the Behavioral-Responses response function can be used to apply the estimated responses to the post-reform Tax-Calculator object and recompute tax liabilities, producing tax liability estimates that include the partial-equilibrium effects of the estimated behavioral responses.
Personally it'd seem more intuitive to move `quantity_response` to `behresp`, but either way I'd find it really helpful to have another recipe that combines the functionalities to do this kind of task (this exact task might be a simple enough example), maybe with some extra detail on what the function arguments represent.
@martinholmer said:
More proof that the people asking for more documentation have not read the existing documentation. There is even a recipe in the Cookbook that shows how to apply different elasticities to different subgroups of the population using the `quantity_response` utility function.
Since people are not reading these examples, do you think it would be helpful to rethink how these examples are presented? People who are less familiar with Python may be intimidated when they are sent to a directory of Python files and told that that's part of the documentation. Other projects that support users with a large variance in Python skills and experience, such as pandas, present their API usage documentation in easier-to-read formats.
If we decided to do this, we could expand the Read the Docs site that we already have to include the re-formatted recipe files. I think this would be a matter of converting them to Markdown or RST files. There are also options for testing code in docs, like pytest doctest.
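As a concrete illustration of the doctest idea: a usage example embedded in a docstring doubles as a test. This minimal sketch (hypothetical `add` function, not from the Tax-Calculator codebase) runs the docstring example programmatically, mimicking what `pytest --doctest-modules` would do:

```python
import doctest

def add(a, b):
    """Return the sum of a and b.

    The example below is both documentation and a test:

    >>> add(2, 3)
    5
    """
    return a + b

# Discover and run the docstring examples, as pytest's doctest
# integration would.
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner()
for test in finder.find(add, name="add"):
    runner.run(test)
```

If the documented output ever drifts from the actual behavior, the doctest fails, which keeps docs and code in sync.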
@hdoupe said in issue #2189:
People who are less familiar with Python may be intimidated when they are sent to a directory of Python files and told that that's part of the documentation.
No doubt the documentation can be improved, but anybody who reads the documentation is NOT "sent to a directory of Python files and told that that's part of the documentation." They are sent to an HTML document, the Cookbook, with plenty of prose and explanation, plus links to the scripts and to the expected results of the scripts.
@martinholmer it was me, not Max, with the comment on the docs.
@hdoupe said in issue #2189:
it was me, not Max, with the comment on the docs.
Right. @MaxGhenis, sorry for the confusion about who posted the comment.
Is there a page with links to the various github, web, dropbox, zotero, etc sites that are part of the project? If so, what is the URL?
dan
@martinholmer Sorry, I didn't realize that there was a `cookbook.html` file. I think my primary critique is still valid: there are better ways to present documentation that contains large amounts of code. I'm interested in creating a prototype demonstrating how these new docs could work. A small-scale prototype could be a helpful starting point for deciding how we'd like the docs to look in the long run. Does that seem sensible to you?
@feenberg Most of these links can be found in Tax-Calculator's README.md file.
@hdoupe I'd love to see Jupyter notebooks as documentation, which the pandas documentation uses and which we've discussed in https://github.com/PSLmodels/Tax-Calculator/issues/1151. There are some sophisticated ways to version control these, but as a first step it would be straightforward to treat them the same as the current recipes, i.e. checking that they've been run before submitting a PR.
Whether in the current format, Jupyter, or some other form, I'm still unsure about how to combine the functionality to achieve this particular task, so I think there's opportunity for more robust content as well.
@hdoupe said in issue #2189:
Sorry, I didn't realize that there was a `cookbook.html` file. I think my primary critique is still valid: there are better ways to present documentation that contains large amounts of code. I'm interested in creating a prototype demonstrating how these new docs could work. A small-scale prototype could be a helpful starting point for deciding how we'd like the docs to look in the long run. Does that seem sensible to you?
Any suggestions for documentation improvement would be welcome.
But remember what @feenberg just said in this conversation:
Most of these links can be found in Tax-Calculator's `README.md` file.
He's absolutely correct about that. In fact, if you begin reading there, you are guided through the documentation step by step. And there are even separate sets of documents for users and for contributors.
Maybe you're right about the format of the documentation being important, but I'm skeptical about that being the main problem.
@MaxGhenis said in issue #2189:
Oh I see, it's in Cookbook recipe 4.
Yes, that's correct.
I'm confused why this works when it doesn't import `behresp` though; I thought `taxcalc` no longer had behavioral functionality?
Because the `quantity_response` function is a utility function that is part of Tax-Calculator. The `quantity_response` function has nothing to do with the Behavioral-Responses repository.
@MaxGhenis said in issue #2189:
Personally it'd seem more intuitive to move `quantity_response` to `behresp`, but either way I'd find it really helpful to have another recipe that combines the functionalities to do this kind of task (this exact task might be a simple enough example), maybe with some extra detail on what the function arguments represent.
The `quantity_response` function's docstring is pretty extensive. Is there something you're wondering about that's unclear in the docstring?
What do others think about the repository location of the `quantity_response` function? @MaxGhenis is suggesting it be moved from Tax-Calculator to Behavioral-Responses, which is not an unreasonable suggestion.
@MattHJensen @feenberg @derrickchoe @codykallen @andersonfrailey @hdoupe
Is there a JSON reform file for the AOC reform being discussed in issue #2189? How did the AEI researchers characterize the reform in Tax-Calculator?
@andersonfrailey @Peter-Metz @codykallen @MaxGhenis @MattHJensen
There hasn't been any substantive conversation in Tax-Calculator issue #2189 for almost three weeks, so I'm going to close it. If people have new questions or results on analyzing the implications of a 70% top marginal income tax rate, please open a new issue.
I'm analyzing the recently proposed 70% tax on income above $10 million and getting an interesting result that I can't figure out the reason behind. The rate increase applies to both personal and pass through income. TaxBrain results can be seen here and they align with the results I got using the Python API.
In the PUF, there is $278,249,575,876.73 in taxable income above $10 million. In the reform, I create an income bracket for all taxable income above $10 million and set the rate for that bracket at 70%. In 2018 this results in an increase in tax liabilities of about $29 billion. What doesn't seem to square is that the difference between 70% and 37% of $278,249,575,876.73 is closer to $92 billion.
I understand that when you account for the factors like deductions, credits, AMT, etc. the numbers won't line up exactly, but I would expect them to be closer than they are. Any ideas as to what could explain the gap?
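For reference, a reform along these lines might look like the following dictionary. The parameter names (`II_brk7`/`II_rt8` for the individual schedule, `PT_brk7`/`PT_rt8` for pass-through income) and the five filing-status columns are my assumption about Tax-Calculator's bracket convention; the exact names, number of columns, and reform-dictionary format vary across taxcalc versions, so check `policy_current_law.json` before using this:

```python
import json

# Hypothetical reform: a 70% bracket on taxable income above $10 million,
# applied to both the individual and pass-through rate schedules.
reform = {
    # Lower the top bracket threshold to $10 million for each filing status
    # (assumed order: single, joint, separate, head-of-household, widow(er)).
    "II_brk7": {"2018": [10e6, 10e6, 10e6, 10e6, 10e6]},
    # Set the rate above that threshold to 70%.
    "II_rt8": {"2018": 0.70},
    # Mirror the change for pass-through income.
    "PT_brk7": {"2018": [10e6, 10e6, 10e6, 10e6, 10e6]},
    "PT_rt8": {"2018": 0.70},
}
reform_json = json.dumps(reform, indent=2)
```

A JSON file like this is what TaxBrain-style runs consume, and sharing it would let others reproduce the $29 billion result exactly.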
@martinholmer @MattHJensen @Peter-Metz