choderalab / dispensing-errors-manuscript

IPython notebook to accompany dispensing errors manuscript
http://choderalab.org

Should the bias be so high without dilution effect? #37

Closed: sonyahanson closed this issue 9 years ago

sonyahanson commented 9 years ago

Line 32 here: https://github.com/choderalab/dispensing-errors-manuscript/blob/master/notebooks/echo-vs-tips.ipynb

sonyahanson commented 9 years ago

I had to change some things around due to the IC50 vs. Ki issue, but because the Echo bias is still low, I was confident I had done this okay (despite doing it pretty annoyingly manually). I thought we had gone over this in issue #17, but it's possible we weren't so careful.

See line 27

# Define the function 'competitive_inhibition_IC50' using this equation:
# V0/Vmax = [S] / (Km*(1 + [I]/Ki) + [S]), with Ki = IC50/(1 + [S]/Km) (Cheng-Prusoff for competitive inhibition)
def competitive_inhibition_IC50(substrate_concentration, inhibitor_concentration, enzyme_concentration, IC50, Km):
    V0_over_Vmax = substrate_concentration / (Km*(1 + inhibitor_concentration/(IC50/(1 + substrate_concentration/Km))) + substrate_concentration)
    return V0_over_Vmax
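
As a quick sanity check (a hypothetical example, not taken from the notebook), at [I] = IC50 this expression should give exactly half the uninhibited activity [S]/(Km + [S]):

substrate_concentration = 10e-6   # M (arbitrary test value)
Km = 20e-6                        # M (arbitrary test value)
IC50 = 1e-6                       # M (arbitrary test value)
dummy_enzyme = 1e-9               # enzyme_concentration is unused inside the function
uninhibited = competitive_inhibition_IC50(substrate_concentration, 0.0, dummy_enzyme, IC50, Km)
at_IC50 = competitive_inhibition_IC50(substrate_concentration, IC50, dummy_enzyme, IC50, Km)
print(at_IC50 / uninhibited)      # expect 0.5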

and line 28:

import numpy as np

def fit_ic50(inhibitor_concentrations, activities, IC50_guess):
    # substrate_concentration, enzyme_concentration, Km, and ndilutions are taken
    # from the enclosing (notebook) scope.
    def objective(inhibitor_concentrations, IC50):
        activities = np.zeros([ndilutions], np.float64)  # predicted activities for this trial IC50
        for i in range(ndilutions):
            activities[i] = competitive_inhibition_IC50(substrate_concentration, inhibitor_concentrations[i], enzyme_concentration, IC50, Km)
        return activities

    import scipy.optimize
    [popt, pcov] = scipy.optimize.curve_fit(objective, inhibitor_concentrations, activities, p0=[IC50_guess])

    return popt[0]
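
For what it's worth, a minimal usage sketch (hypothetical values; it relies on the notebook-scope globals ndilutions, substrate_concentration, enzyme_concentration, and Km that the objective reads):

import numpy as np

ndilutions = 8
substrate_concentration = 10e-6   # M
enzyme_concentration = 1e-9       # M
Km = 20e-6                        # M

true_IC50 = 2e-6                  # M
inhibitor_concentrations = true_IC50 * np.logspace(-2, 2, ndilutions)
activities = np.array([competitive_inhibition_IC50(substrate_concentration, c, enzyme_concentration, true_IC50, Km)
                       for c in inhibitor_concentrations])
print(fit_ic50(inhibitor_concentrations, activities, IC50_guess=1e-6))   # recovers ~2e-6 on noise-free data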

and line 30 now has:

IC50s[replicate] = fit_ic50(ideal_concentrations, activities, true_Ki*(1 + substrate_concentration/Km))  # initial guess: Cheng-Prusoff conversion of the true Ki

and line 31:

for (i, Ki) in enumerate(Kis):
    IC50s = robot_IC50s(Ki)    # fitted IC50s, one per simulated replicate
    pIC50s = np.log10(IC50s)
    pIC50_true = np.log10(Kis[i]*(1 + substrate_concentration/Km))   # Cheng-Prusoff: IC50 = Ki*(1 + [S]/Km)
    genesis_pIC50_bias[i] = pIC50s.mean() - pIC50_true
    genesis_pIC50_CV[i] = pIC50s.std() / abs(pIC50s.mean())
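
For context, this loop presumes a setup along these lines (a sketch with an arbitrary Ki range; robot_IC50s is the notebook's simulation routine, assumed to return an array of fitted IC50s, one per replicate):

import numpy as np

Kis = np.logspace(-9, -5, 9)              # hypothetical range of true Ki values (M)
genesis_pIC50_bias = np.zeros(len(Kis))   # mean(pIC50) - true pIC50, per Ki
genesis_pIC50_CV = np.zeros(len(Kis))     # std(pIC50) / |mean(pIC50)|, per Ki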

These lines are repeated periodically, due to not having time to clean this up, really.

sonyahanson commented 9 years ago

Was looking through this, and can't find any problems... Would be happy to be proven wrong, though.

jchodera commented 9 years ago

Previously, we found that omitting the step where we transfer a small volume of assay mix (10 uL) + compound dilution (2 uL) via robot_dispense eliminated much of the bias. When we include it, even without the dilution effect, we see a decent amount of bias. I'm not quite sure why this shows up as bias and not random error, but it seems to agree with our earlier findings.

We might see if this disappears if we increase the assay mix and compound dilution volumes (e.g., 100 uL and 20 uL).

jchodera commented 9 years ago

Oh, I think I might have figured this out: We compute ideal_concentrations for the dilution series, but currently don't correct these for the final robot_dispense step where we mix 2 uL from the dilution series with 10 uL of assay mix. We need to multiply ideal_concentrations by compound_volume / (compound_volume + mix_volume) at that point.
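
Concretely, the correction would look something like this sketch (a guess at how it slots into the fit_ic50 call quoted above; names follow the notebook):

compound_volume = 2.0    # uL of compound dilution transferred by robot_dispense
mix_volume = 10.0        # uL of assay mix
dilution_factor = compound_volume / (compound_volume + mix_volume)   # = 1/6
assay_concentrations = ideal_concentrations * dilution_factor
IC50s[replicate] = fit_ic50(assay_concentrations, activities, true_Ki*(1 + substrate_concentration/Km))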

sonyahanson commented 9 years ago

Okay. Changed ideal_concentrations to ideal_concentrations * ( compound_volume/(compound_volume + mix_volume) ).

Bias is now small again for tips without dilution effect:

[figure: bias-rough]

This is what the final graph looks like; the tips-without-dilution-effect curve essentially overlaps with the no-bias curve: [figure: shift]

sonyahanson commented 9 years ago

And regarding the CV question, they are different (at least after this fix): [figure: cv-compare]

sonyahanson commented 9 years ago

Might be a good idea to change the scale on that figure in this light.

sonyahanson commented 9 years ago

This stuff is now available in new IPython notebooks (to replace the current ones, if they pass muster) in #42.

jchodera commented 9 years ago

Thanks!