Watts-College / cpp-524-fall-2021

https://watts-college.github.io/cpp-524-fall-2021/

Campbell Score-Selection #18

Open danafuller opened 2 years ago

danafuller commented 2 years ago

In our instructions, omitted variable/selection bias is defined as having non-equivalent treatment and control groups. In Chapter 21, great care was taken to randomize those groups. However, the fact that they are randomizing a volunteer population doesn't completely remove the bias. Can someone weigh in on whether you would give a +1 or a +0 in that scenario, or does it come down to randomization and group equivalency only?

lecy commented 2 years ago

However, the fact that they are randomizing a volunteer population doesn't completely remove the bias.

I'm not sure what you mean by volunteer population?

You might be confusing internal and external validity. Internal validity is about whether the program impact estimate is accurate. Stated differently: if the program were implemented another time with a similar population, could we expect to achieve similar results?

External validity is whether we can generalize outside of the study population.

In this instance, the students (parents) who opt into school choice are different from regular public school students. If we identify significant program effects and we have high internal validity, we can claim that other kids who opt for school choice in Milwaukee should experience similar outcomes.

We CANNOT say, however, that if we mandated school choice for students who did not self-select into the program we would see similar gains. That is a question of generalizability of the program, and more research would be needed to establish it.

So the study in CH21 can have high internal validity, i.e. minimal bias in the estimated student gains above and beyond the counterfactual group, but we should still be cautious about claiming the program would work the same for every student.

The Campbell Score is specifically measuring internal validity, not external validity or generalizability.

Does that make sense?

danafuller commented 2 years ago

Yes that makes complete sense. That is an important distinction between internal and external validity that I failed to discern. Thank you for that clarification!


droach7 commented 2 years ago

Related to this: I was also a little confused about how students should assess whether non-random attrition is present in Chp 20's study.

For selection bias we already know the two groups are different, as the authors specify that the Choice and MPS parents varied on marital status, family size, and highest education level achieved (pg 303-304). The authors are comparing Choice students to MPS students through non-random assignment. However, since they compare the change in test scores between the two groups, it appears they are using a difference-in-difference model, which does not mandate group equivalence.
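To see why difference-in-difference does not require baseline equivalence, here is a minimal sketch with made-up group means (all numbers are hypothetical, not from Chp 20):

```python
# Difference-in-differences with hypothetical group-mean test scores.
# All values are invented for illustration; none come from the study.
choice_pre, choice_post = 48.0, 55.0   # Choice students: before / after
mps_pre, mps_post = 50.0, 53.0         # MPS comparison:  before / after

# DiD = change in the treatment group minus change in the comparison group.
# The baseline gap between groups (48 vs 50) subtracts out, which is why
# strict group equivalence is not required; the key assumption is instead
# that the two groups would have followed parallel trends absent treatment.
did = (choice_post - choice_pre) - (mps_post - mps_pre)
print(did)  # 4.0
```

The baseline difference cancels, but non-random attrition can still bias this estimate if stayers and leavers trend differently.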

However, for attrition, they specify that Choice attrition rates were 46%, 35%, 31%, 27%, and 28% across the five years, and later claim that "attrition from the Choice Program was comparable to the range of mobility between schools in MPS" (pg. 311). They do not clarify whether the individuals who stayed in each group (Choice vs. MPS) differed from those who left, which sets off an alarm bell: unless we can show that stayers and leavers are statistically equivalent, we can't rule out non-random attrition. Tables 5A-5E do provide demographic information on the Choice enrollment and attrition groups.

@lecy I know for Chp 7 last week you said it was possible to calculate t-stats since the tables provided the sample means and standard deviations. Chp 20 Table 5 presents group demographics as percentages and only provides group means and group sample sizes, but not the standard deviations. Is there another way to calculate p-values from these tables to determine if non-random attrition occurred that I am not thinking of? Or do you want us to base our answers on the authors' written statements?
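For reference, the Chp 7 situation (means, SDs, and sample sizes all reported) allows a Welch t-statistic to be computed directly from the summary statistics. A minimal sketch, using hypothetical stayer/leaver values rather than anything from the actual tables:

```python
import math

def welch_t(m1, sd1, n1, m2, sd2, n2):
    """Welch's t-statistic from group summary stats (mean, SD, n)."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)  # SE of the difference in means
    return (m1 - m2) / se

# Hypothetical stayer vs. leaver summary stats (NOT from Table 5)
t = welch_t(52.3, 10.1, 120, 49.7, 11.4, 80)
```

Note that the `se` term requires the standard deviations, which is exactly what Chp 20's Table 5 omits.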

Clarification would be appreciated.

lecy commented 2 years ago

Side note: if the groups are already non-equivalent, attrition is not as important, because we know the groups will remain non-equivalent with or without it.

But for the CS item, the more important thing is that they report non-trivial rates of attrition but do not (1) test whether the people who leave differ from the people who stay, or (2) examine whether group equivalence changed after attrition, as they do in CH21.

Additionally, they don't provide sufficient information for the reader to calculate the contrasts on their own. Without standard deviations, standard errors, or t-values it's not possible to compute them.

I don’t think they sufficiently rule out that the attrition is non-random, or make the case that it would not change the estimate.