quarkmatter opened this issue 3 years ago
It looks like you're using an old version of multipy, as it hasn't been updated on PyPI. Try installing it manually from source and see if that fixes the issue:
git clone https://github.com/puolival/multipy.git
cd multipy/
python setup.py install
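Alternatively, pip can install straight from the repository, which should also pull the latest code:

pip install git+https://github.com/puolival/multipy.git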
Then run your command again. If that still doesn't work, try copying and pasting the code from the source to define the function locally in your program. This is what I did and it works for me:
import numpy as np
from scipy.interpolate import UnivariateSpline

def qvalue(pvals, threshold=0.05, verbose=True):
    """Function for estimating q-values from p-values using the
    Storey-Tibshirani q-value method (2003).

    Input arguments:
    ================
    pvals     - P-values corresponding to a family of hypotheses.
    threshold - Threshold for deciding which q-values are significant.

    Output arguments:
    =================
    significant - An array of flags indicating which p-values are significant.
    qvals       - Q-values corresponding to the p-values.
    """
    # Count the p-values. Find indices for sorting the p-values into
    # ascending order and for reversing the order back to the original.
    m, pvals = len(pvals), np.asarray(pvals)
    ind = np.argsort(pvals)
    rev_ind = np.argsort(ind)
    pvals = pvals[ind]

    # Estimate the proportion of features that are truly null.
    kappa = np.arange(0, 0.96, 0.01)
    pik = [np.sum(pvals > k) / (m * (1 - k)) for k in kappa]
    cs = UnivariateSpline(kappa, pik, k=3, s=None, ext=0)
    pi0 = float(cs(1.))
    if verbose:
        print('The estimated proportion of truly null features is %.3f' % pi0)

    # The smoothing step can sometimes converge outside the interval [0, 1].
    # This was noted in the published literature at least by Reiss and
    # colleagues [4]. There are at least two approaches one could use to
    # attempt to fix the issue:
    # (1) Set the estimate to 1 if it is outside the interval, which is the
    #     assumption in the classic FDR method.
    # (2) Assume that if pi0 > 1 it was overestimated, and if pi0 < 0 it was
    #     underestimated; set it to 0 or 1 depending on which case occurs.
    # Here we have chosen the first option, since it is the more conservative
    # of the two.
    if pi0 < 0 or pi0 > 1:
        pi0 = 1
        print('Smoothing estimator did not converge in [0, 1]')

    # Compute the q-values, proceeding from the largest p-value downwards.
    qvals = np.zeros(np.shape(pvals))
    qvals[-1] = pi0 * pvals[-1]
    for i in np.arange(m - 2, -1, -1):
        qvals[i] = min(pi0 * m * pvals[i] / float(i + 1), qvals[i + 1])

    # Test which p-values are significant.
    significant = np.zeros(np.shape(pvals), dtype='bool')
    significant[ind] = qvals < threshold

    # Order the q-values according to the original order of the p-values.
    qvals = qvals[rev_ind]
    return significant, qvals
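For reference, here's a minimal usage sketch of the function above (the p-values are simulated, so the numbers are illustrative only):

# Hypothetical example with simulated p-values.
pvals = np.random.uniform(size=1000)
significant, qvals = qvalue(pvals, threshold=0.05)
print('%d of %d hypotheses called significant' % (np.sum(significant), len(pvals)))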
Same problem for me. Updating numpy did not solve it.
Copy-pasting the above snippet did not work either, but maybe this time it was due to pathological p-values in my case.
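If anyone else hits this, a quick sanity check on the inputs might help rule out bad values (just my guess at what "pathological" could mean here: NaNs, infs, or values outside [0, 1]):

import numpy as np

pvals = np.asarray(pvals, dtype=float)
assert np.all(np.isfinite(pvals)), 'p-values contain NaN or inf'
assert np.all((pvals >= 0) & (pvals <= 1)), 'p-values outside [0, 1]'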
Your software uses xrange() instead of range(), but xrange() was removed in Python 3. Here's a screenshot of the error I get when trying to call your qvalue() function.
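As a stopgap until the source is updated, aliasing the name in your own script before calling into the library should let the legacy calls resolve under Python 3 (untested against multipy itself, so treat it as a sketch):

import builtins
builtins.xrange = range  # let Python 2-style xrange() calls fall through to range()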