Closed: barnabytprowe closed this issue 11 years ago
This is puzzling. There is no random number usage in that function. My guess is that it might come from the parallel nosetests somehow, although I don't have any specific ideas about what the bug might be. But you could check whether you ever get an error when doing scons tests -j1.
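To illustrate the kind of hazard I mean (a toy sketch only, and an assumption about the mechanism: nosetests' parallel mode actually uses worker processes rather than the threads used here, but the shared-mutable-RNG-state point is the same):

import threading
import numpy as np

rng = np.random.RandomState(42)  # module-level RNG shared by every "test"

def fake_test(results, i):
    # Each call consumes shared generator state, so the value any one
    # test sees depends on how the workers happen to interleave.
    results[i] = rng.normal()

results = [None] * 4
threads = [threading.Thread(target=fake_test, args=(results, i)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # per-test values can differ from run to run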
This issue does indeed appear to be fixed by the code on #426. I ran scons tests 13 times with no failures, which, assuming that failure is described by a Binomial distribution with probability p (and a flat prior on p), gives Prob(p <= 0.1) = 0.77, with the following posterior distribution:
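As a quick closed-form cross-check (this just restates the numbers above): a flat prior on p is Beta(1, 1), and zero failures in 13 trials conjugate-updates it to a Beta(1, 14) posterior, so Prob(p <= 0.1) = 1 - 0.9^14 ≈ 0.771. In scipy:

import scipy.stats

# Flat Beta(1, 1) prior + 0 failures in 13 Binomial trials
# gives a Beta(1 + 0, 1 + 13) = Beta(1, 14) posterior.
posterior = scipy.stats.beta(1, 14)
print(posterior.cdf(0.1))   # 0.7712..., the 0.77 quoted above
print(posterior.mean())     # E(p) = 1/15, about 0.0667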
(Don't worry, I didn't waste any actual time on this overanalysis: I have a little piece of code, reproduced below, which I wrote for another project and which produces this output:
#!/usr/bin/env python
from sys import argv, exit

import numpy as np
import scipy.stats
import matplotlib.pyplot as plt

NTABLE = 10000  # number of entries for the tabulated p likelihood in range [0., 1.)

print("Welcome to pFail.py (Binomial p inference tool)")
print("usage: pFail.py nfails ntrials ptolerance")
print("")
if len(argv) != 4:
    exit(1)
nfails = int(argv[1])
ntrials = int(argv[2])
ptolerance = float(argv[3])

p = np.arange(NTABLE, dtype=float) / float(NTABLE)
prior = np.ones(NTABLE) / float(NTABLE)  # uniform prior, could use another if desired

# Tabulate the likelihood function (starts as a list, then converted to an array).
# Handle the first element (p = 0.) without the scipy function, which complains!
if nfails == 0:
    likelihood = [1.]
else:
    likelihood = [0.]
# Calculate the rest of the likelihood using a list comprehension, slightly quicker
restoflike = [scipy.stats.binom.pmf(nfails, ntrials, p[i]) for i in range(1, NTABLE)]
likelihood.extend(restoflike)
likelihood = np.array(likelihood)

# Multiply prior by likelihood and normalize to get the posterior
posterior = prior * likelihood
posterior /= posterior.sum()

# Calculate and print results
print("Maximum-likelihood p estimate = " + str(float(nfails) / float(ntrials)))
print("Bayesian expectation E(p) = " + str((posterior * p).sum()))
print("Prob(p <= ptolerance) = " + str((posterior[p <= ptolerance]).sum()))
print("")

# Multiply by NTABLE so the plotted curve is a properly scaled density in p
plt.plot(p, posterior * float(NTABLE))
plt.xlabel('p')
plt.ylabel('pdf(p)')
plt.show()
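For the run quoted above (0 failures in 13 trials, tolerance 0.1) the invocation is

python pFail.py 0 13 0.1

and the reported numbers should come out close to the closed-form values: a maximum-likelihood estimate of 0.0, E(p) = 1/15 ≈ 0.0667, and Prob(p <= ptolerance) ≈ 0.771.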
Anyway, looks good to me, Mike; we should close this when #426 is merged!)
lol. Thanks Barney.
Hi all,
After a thorough rm -f .scon* and scons -c etc. before running scons, I get intermittent "high-precision failures" (if that makes sense) in test comparisons in test_config_gsobject.py on an older, 32-bit linux machine. This is my issue in which to sort this out (I volunteer): it looks to me like one of those cases where we need to fix a random number seed to make the results strictly repeatable; a sketch of what I mean is below.
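What I have in mind is something like the following (a sketch only, using GalSim's deviate classes with an arbitrary fixed seed; exactly where the tests build their random draws is what we'd need to pin down):

import galsim

# Fix the seed so every run of the test draws identical random numbers;
# the seed value itself is arbitrary, it just has to be constant.
rng = galsim.BaseDeviate(8241573)
gd = galsim.GaussianDeviate(rng, mean=0., sigma=1.)
print(gd())  # same value on every run, on every machine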
Here is some example output from running scons tests six times on the offending 32-bit system:

One final thing is that I plan to continue to ignore the
bin/test_main: error while loading shared libraries: libgalsim.so.0: cannot open shared object
error. This has been happening on this system for a long time and I don't know why; nor, right now, do I think it is right to prioritise fixing it (since it doesn't affect actual GalSim tasks)...