QuickPay-Operational-Performance / Data-and-code

Data and code for econometric analysis

Alternative model for portfolio theory #90

Open JNing0 opened 1 year ago

JNing0 commented 1 year ago

At Monday's meeting, we hypothesized that we did not observe a significant indirect treatment effect because some contractors have a high fraction of large projects and a low fraction of small projects. Shall we run the following regression on large-business projects to see whether that is the case?

[image: proposed regression specification]
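As a rough illustration, a continuous-treatment difference-in-differences of this kind could be estimated as below. This is a minimal sketch on synthetic data; all variable names and numbers are assumptions for illustration, not the actual specification or data.

```python
import numpy as np

# Synthetic panel of large-business projects; variable names and numbers are
# illustrative assumptions, not the actual data or specification.
rng = np.random.default_rng(1)
n = 1000
post = rng.integers(0, 2, n).astype(float)   # QuickPay in effect
intensity = rng.uniform(0.0, 5.0, n)         # e.g. number of concurrent small projects
delay = (2.0 + 0.4 * post + 0.1 * intensity
         - 0.6 * intensity * post            # planted indirect (spillover) effect
         + rng.normal(0.0, 0.1, n))

# The coefficient on intensity x post is the indirect treatment effect
# per unit of treatment intensity.
X = np.column_stack([np.ones(n), post, intensity, intensity * post])
beta, *_ = np.linalg.lstsq(X, delay, rcond=None)
indirect_effect = beta[3]                    # recovers roughly the planted -0.6
```

A negative coefficient on the interaction would correspond to acceleration of large projects with more concurrent small projects.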

vibhuti6 commented 1 year ago

We had tried this earlier, and did not get any significant effect. The versions are archived in the code for clarity, but I can re-run them.

vibhuti6 commented 1 year ago

Here are the results -- see Tables 18 and 19.

JNing0 commented 1 year ago

Good news! The portfolio theory is back in the game :) The negative results are consistent with what we had before. (If you look at the discussion in Section 7.2, which was written for the previous results, we had acceleration for indirectly treated projects.) Our theory predicts either outcome, so that's consistent with the theory, too. In terms of significance, if I recall correctly, our earlier result was only significant in the first and last two specifications, exactly like the latest one.

This calls for a change in the MSOM abstract.

JNing0 commented 1 year ago

To be consistent, shall we also do a similar treatment intensity regression for small business projects? Thanks!

[image: proposed regression specification]

vob2 commented 1 year ago

I think we should be cautious. The fact is that the result is not very robust with ln(# of small projects) and that a different metric (% of small projects) does not produce significance. At best this is weak evidence that large projects are indirectly treated. This is worth pointing out, but not emphasizing too much as our main result. Also, we need to think about why the two metrics do not produce similar results. I thought % of small projects would have been a better metric, but maybe it gets too flat when the number of small projects is high.

vob2 commented 1 year ago

Yes to doing a parallel regression for small projects.

JNing0 commented 1 year ago

> I think we should be cautious. The fact is that the result is not very robust with ln(# of small projects) and that a different metric (% of small projects) does not produce significance. At best this is weak evidence that large projects are indirectly treated. This is worth pointing out, but not emphasizing too much as our main result. Also, we need to think about why the two metrics do not produce similar results. I thought % of small projects would have been a better metric, but maybe it gets too flat when the number of small projects is high.

I agree. One possibility is that we don't have much variation in the percentage metric. But to me, using the number of small projects is more reasonable. Here, we are estimating the effect on an individual large project, so a higher number of concurrent small projects represents a stronger treatment intensity. The percentage, on the other hand, does not have this interpretation, as it includes other large projects. Arguably, we look at how small projects affect large projects, not how large projects interact with each other.

This is different from the contract financing metric. There, the percentage would be more appropriate if we want to say something about the contractor's financial health.

vibhuti6 commented 1 year ago

> To be consistent, shall we also do a similar treatment intensity regression for small business projects? Thanks!

[image: proposed regression specification]

Here are the results -- see Table 20. They are consistent with what we previously had. Also, to confirm, TreatIntensity takes the value zero for large projects and for small projects without large projects, correct?

JNing0 commented 1 year ago

Correct and great! Thank you for the fast turnaround 👍

vob2 commented 1 year ago

So, Table 18: if large projects have concurrent small projects, then such large projects are accelerated, relative to large projects with fewer concurrent small projects. Is this correct? We are looking at the subsample of large projects only; we took out small projects, is that right?

Table 20: if small projects have concurrent large projects, then such small projects are accelerated, relative to small projects with fewer concurrent large projects. But small projects are still delayed as a group.

This means that there is no change in the sequence. If there were, one group (large or small) would be accelerated and another one delayed.

Large projects are accelerated if small projects that are first in a sequence finish sooner and large projects can start sooner (which we do not observe, but we observe that they finish sooner).

Small projects in general get delayed, but the ones that have concurrent large ones are accelerated. If the sequence does not change, why are they accelerated? If they are accelerated because of strategic complementarity, why does it not apply to all small projects?

I usually can explain anything, but here I am not sure.

vibhuti6 commented 1 year ago

> Table 18: if large projects have concurrent small projects then such large projects are accelerated, relative to large projects with fewer concurrent small projects. Is this correct?

Hi Vlad, minor correction: if large projects have concurrent small projects then such large projects are accelerated, relative to large projects with no concurrent small projects. We took out small projects.

As such, I am not fully sure how to interpret these results, especially considering that they are not consistent with other metrics. I will give it more thought. Thanks.

JNing0 commented 1 year ago

> Small projects in general get delayed, but the ones that have concurrent large ones are accelerated. If the sequence does not change, why are they accelerated? If they are accelerated because of strategic complementarity, why does it not apply to all small projects?

My interpretation is that having large projects lined up behind small projects strengthens the strategic complementarity force, so small projects with concurrent large ones are delayed less. We had the same results before, and this is what we say in the paper:

This result agrees with the theory in Section 3.3.1 and reflects the strategic complementarity force strengthened by concurrent large business projects. It indicates that the spillover effect works both ways. Small projects allow concurrent large projects to be accelerated under QuickPay. In return, large projects also speed up accompanying small projects. There exists strategic complementarity at the project-type level, between large and small projects.

Small projects without concurrent large projects do not have this benefit. Their treatment effect is around 1% across all specifications. Thus, strategic complementarity from small projects alone is not enough for contractors to work faster under QuickPay. On average, a small project's delay increases by 1% per quarter if it does not have concurrent large projects. The delay of a small project with concurrent large projects is about 0.6% less.

vob2 commented 1 year ago

That is one interpretation. But strategic complementarity should apply to all treated projects, not just the ones that have concurrent large ones. I agree that maybe if you want to expedite receiving a payment on the large project you will finish the small one that is blocking it faster.

Should the same argument apply if we look at the number of any (not just large) projects as the treatment intensity? Small projects with many concurrent other projects (small or large) should get expedited more?

This can explain why large projects get expedited. It is not that the sequence changes, but you expedite all projects in a sequence (large or small) because the sequence of payments is more valuable.

JNing0 commented 1 year ago

> Should the same argument apply if we look at the number of any (not just large) projects as the treatment intensity? Small projects with many concurrent other projects (small or large) should get expedited more?

That's true. But since small projects are paid faster anyway, the effect would not be as significant? If we were to test it, I think we need to look at the subset of contractors with only small projects and use the number of projects as the treatment intensity. This eliminates any effect from large projects.

Then we look at the subset of contractors with both small and large projects. I propose to have separate treatment intensities, one for number of concurrent small projects and one for number of concurrent large projects, to capture potentially different effects.

We can then compare the effects of small projects in these two groups of contractors?
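A sketch of how the two separate intensity measures could be constructed from a project-quarter panel. The layout and column names below are hypothetical assumptions, not the actual data definitions:

```python
import pandas as pd

# Hypothetical project-quarter panel: one row per project per quarter.
projects = pd.DataFrame({
    "contractor": ["A", "A", "A", "B", "B"],
    "project":    ["a1", "a2", "a3", "b1", "b2"],
    "quarter":    ["2011Q1"] * 5,
    "small":      [True, True, False, True, True],
})

# Count each contractor's projects of each type in the quarter...
counts = (projects.groupby(["contractor", "quarter"])["small"]
          .agg(n_small="sum", n_large=lambda s: (~s).sum())
          .reset_index())
panel = projects.merge(counts, on=["contractor", "quarter"])

# ...then exclude the focal project itself from its own intensity measure.
panel["intensity_small"] = panel["n_small"] - panel["small"].astype(int)
panel["intensity_large"] = panel["n_large"] - (~panel["small"]).astype(int)
```

The two intensities would then enter the regression as separate interaction terms with Post, one per project type.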

vob2 commented 1 year ago

Worth a try to settle this question if the effort is not too great.

vibhuti6 commented 1 year ago

Hi Jie and Vlad,

I read through this discussion as well as corresponding sections in the paper a couple of times. But I am still having a hard time understanding the concept/theory here. Could you please help clarify? I have summarized my thoughts below:

Sorry if I am missing something obvious!

JNing0 commented 1 year ago

Hi Vibhuti,

The idea is that if the contractor works on its projects in serial, then it has extra incentive to speed up the small projects, which are paid faster. The fact that both small and large projects are expedited does not rule out a change in sequence. The contractor may work on a small project first, finish it quickly, and then work on the large projects. If the amount of work on small projects is small, then the net effect on the large projects can still be acceleration. In this case, we can view the small + large projects as one jumbo project. This jumbo project carries a more significant payment than the small project alone, so the strategic complementarity force is strengthened.

The question is whether this is the case for contractors with only small projects. That's why we'd like to test it as explained above.

vibhuti6 commented 1 year ago

Thanks, Jie -- this was helpful. So, we have three groups based on the contractor's portfolio:

Group 1: Small + Large projects
Group 2: Small projects only
Group 3: Large projects only

Just confirming if my interpretation is correct?

JNing0 commented 1 year ago

As Vlad pointed out, one could argue that this type of strengthening of strategic complementarity applies to any project. As long as the contractor has multiple projects in serial, it always has a jumbo project. So the more projects a contractor has, the stronger the incentive to accelerate. But it is unclear how this plays out if a contractor has only small projects, all of which receive faster payment.

So the idea is to distinguish the effect with two separate regressions.

The regression model is similar to the one we used before, but with different definitions of TreatIntensity.

vibhuti6 commented 1 year ago

Thanks for clarifying! Here are the results, see Tables 21 and 22. The sign of the effect of small projects is opposite in the two categories, with one of the regressions showing a weak effect overall.

JNing0 commented 1 year ago

Thank you. The results confirm that the acceleration of small projects indeed originates from concurrent large projects. Absent large projects, the effect is reversed, though I don't know how to explain this reversal.

JNing0 commented 1 year ago

@vibhuti6 I think for the regression on contractors with only small projects (Table 22), we should also have a Treat variable. The idea here is the same as in the regression for Table 21: Treat captures the baseline (intercept) and the continuous intensity captures the slope.

vibhuti6 commented 1 year ago

We didn't do this in the corresponding regression for large businesses though -- why would the models be different for the two categories?

[image: regression specification for large businesses]

vibhuti6 commented 1 year ago

There's no qualitative change in results when the intercept is included -- see updated Tables 18 and 22 here. But we should be consistent in our models if we put this in the paper.

JNing0 commented 1 year ago

I got confused. I just checked the literature and confirmed that we should not have Treat alongside TreatIntensity (e.g., see this AER paper). So let's use the model for large businesses. Thanks for spotting this! Could you rerun the results so I can add them to the paper?

vibhuti6 commented 1 year ago

Thanks for checking, I have updated the files.

JNing0 commented 1 year ago

Thank you. But Table 21 still has Treat and Treat x Post. They should be removed too.

vibhuti6 commented 1 year ago

> I got confused. I just checked the literature and confirmed that we should not have Treat alongside TreatIntensity (e.g., see this AER paper). So let's use the model for large businesses. Thanks for spotting this! Could you rerun the results so I can add them to the paper?

I was giving this some more thought. Which equation in the Duflo paper are we referring to? Is it Equation 9, where "P_j" is used to measure intensity? If yes, then that also has region fixed effects (\alpha_j), and so they don't need an intercept term.

Another paper on treatment intensity in DID is this one. See Equation (2) --- they have all time and unit fixed effects and hence have omitted the intercept term. A more recent paper also starts with a similar specification:

[screenshot: regression specification from a recent paper]

I think we need to include the intercept terms in all regressions.

JNing0 commented 1 year ago

Thanks, Vibhuti. My main question is whether to have Treat x Post in addition to TreatIntensity x Post. All papers seem to agree on that, i.e., no Treat x Post, only TreatIntensity x Post. So we are good there.

As for including Treat as an intercept term, I am indifferent. But since we have TreatIntensity in the interaction term, we should also have it as a stand-alone term. A unit-level fixed effect, which exists in all the examples we found, would eliminate that need; since we don't have a project-level fixed effect, we still need the stand-alone term.

So, to sum up, we should either use the model below, or add a Treat intercept. I am inclined towards the model below for simplicity. But we can also run regressions with the Treat intercept. I don't think it will change our results.

image

JNing0 commented 1 year ago

Actually, after thinking more about it, I now believe we should NOT include Treat in the model. The reason is that with TreatIntensity, the number "1" now has meaning. The treatment intensity may go from, say, 0.1 to 5, and for controls the treatment intensity is 0. Having a Treat indicator then effectively shifts every treated project's intensity up by one. Mathematically, formulations with and without Treat are equivalent, but adding the Treat intercept makes the results hard to interpret. This is probably also why, in the interaction term, we only have TreatIntensity x Post.
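The reparametrization point can be illustrated numerically on synthetic data (all numbers below are made up for illustration): with a Treat indicator in the design, shifting every treated unit's intensity up by one spans the same column space, so the fit is identical but the Treat coefficient changes, which is what makes interpretation slippery.

```python
import numpy as np

# Synthetic cross-section: Treat is an indicator, intensity is 0 for controls
# and continuous for treated units (made-up numbers, for illustration only).
rng = np.random.default_rng(0)
n = 200
treat = (rng.random(n) < 0.5).astype(float)
intensity = treat * rng.uniform(0.1, 5.0, n)
y = 1.0 + 0.3 * treat + 0.5 * intensity + rng.normal(0.0, 0.1, n)

X1 = np.column_stack([np.ones(n), treat, intensity])
X2 = np.column_stack([np.ones(n), treat, intensity + treat])  # shifted intensity

b1, *_ = np.linalg.lstsq(X1, y, rcond=None)
b2, *_ = np.linalg.lstsq(X2, y, rcond=None)

# Same column space -> identical fitted values, but the Treat coefficient
# absorbs the one-unit shift: b2[treat] = b1[treat] - b1[intensity].
same_fit = np.allclose(X1 @ b1, X2 @ b2)
```

So the intensity slope is unchanged, but the "baseline" Treat coefficient is not invariant to how the intensity variable is coded.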

So could you rerun regressions for Table 21, with Treat and Treat x Post removed? Thanks!

vibhuti6 commented 1 year ago

Thanks, Jie -- I agree it may be better to have only one of Treat or TreatIntensity in the model if we put this in the paper.

I have updated the results, please see Tables 17, 18, and 19 in Section 17 here -- I have cleaned up the code so the table numbers are different. Please also check the definitions in that section (to confirm).

Also, we now get the same (positive) effect for concurrent small projects in the two cases, but the effect of concurrent large projects is very weak. I am not sure yet how to interpret this, but I will think more about it.