As discussed in the Mattermost channel, we lose some signal yield (e.g. 14% in SL at 500 GeV) if we apply a flat SF to account for leptonic tau decays in the simulation. Instead, we should scale down only the W->tau nu events, because the softer leptons from tau decays are less likely to pass our analysis cuts. In other words, rather than scaling everything down uniformly, we should scale down only the portion of our signal that is less likely to contribute to our SR.
Action items:
create a branch for the number of taus that originate from a W or a Z decay (see the first sketch below). This information is not recoverable from our current Ntuples, which means that we have to rerun post-processing for the 2016 resonant signal samples;
given that an event can have up to N such taus (N = 1 in SL, N = 2 in DL), apply a SF of 0.3521^n / r to every signal event in the event loop, where n is the number of such taus in the event, 0.3521 = BR(tau -> e nu nu) + BR(tau -> mu nu nu), and r = 0.78403 for SL and r = 0.622253 for DL (see these slides for more details).
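For the first item, a minimal sketch of how the new branch could be filled during post-processing, assuming NanoAOD-style generator arrays (GenPart_pdgId, GenPart_genPartIdxMother); the function name is illustrative, and the real mother-chain handling in our framework may differ:

```python
def count_taus_from_W_or_Z(gen_pdgId, gen_motherIdx):
    """Count gen-level taus whose direct mother is a W or a Z.

    gen_pdgId: per-particle PDG IDs; gen_motherIdx: index of each
    particle's mother (-1 if none). Counting only taus with a direct
    W/Z mother also avoids double-counting tau->tau radiation copies.
    """
    n_taus = 0
    for idx, pdgId in enumerate(gen_pdgId):
        if abs(pdgId) != 15:  # keep only taus
            continue
        mother = gen_motherIdx[idx]
        if mother < 0:  # no recorded mother
            continue
        if abs(gen_pdgId[mother]) in (23, 24):  # Z or W
            n_taus += 1
    return n_taus
```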
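And for the second item, a hedged sketch of the per-event weight in the analysis event loop. The constants come from the item above (0.3521 = leptonic tau BR, r = old flat SFs); the branch name nTauFromWZ and the helper name are assumptions for illustration:

```python
BR_TAU_LEP = 0.3521  # BR(tau -> e nu nu) + BR(tau -> mu nu nu)
R_FLAT = {"SL": 0.78403, "DL": 0.622253}  # previously applied flat SFs

def signal_tau_sf(n_tau_from_wz, channel):
    """Per-event replacement for the flat SF: 0.3521^n / r."""
    return BR_TAU_LEP ** n_tau_from_wz / R_FLAT[channel]

# usage inside the event loop (nTauFromWZ is the proposed new branch):
# weight *= signal_tau_sf(event.nTauFromWZ, "SL")
```

An event with no W/Z->tau decays gets 1 / r, i.e. it is scaled *up* relative to the old flat scheme, which is where the recovered signal yield comes from.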