posterior / distributions

Low-level primitives for collapsed Gibbs sampling in python and C++
BSD 3-Clause "New" or "Revised" License

Numpy deprecation test suite failure on Ubuntu 16.04 #15

Closed fsaad closed 6 years ago

fsaad commented 6 years ago

Running make all results in one error and one failure in the distributions test suite:

======================================================================
ERROR: distributions.tests.test_util.test_scores_to_probs
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/venv/local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/posterior/distributions/distributions/tests/test_util.py", line 38, in test_scores_to_probs
    probs = scores_to_probs(scores)
  File "/posterior/distributions/distributions/util.py", line 34, in scores_to_probs
    probs = numpy.exp(scores, out=scores)
TypeError: ufunc 'exp' output (typecode 'd') could not be coerced to provided output parameter (typecode 'l') according to the casting rule ''same_kind''

======================================================================
FAIL: distributions.tests.test_models.test_add_remove('lp.models.niw',)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/venv/local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/posterior/distributions/distributions/tests/test_models.py", line 104, in test_one_model
    test_fun(module, EXAMPLE)
  File "/posterior/distributions/distributions/tests/test_models.py", line 246, in test_add_remove
    err_msg='p(x1,...,xn) != p(x1) p(x2|x1) p(xn|...)')
  File "/posterior/distributions/distributions/tests/util.py", line 120, in assert_close
    assert_less(diff, tol * norm, msg)
AssertionError: p(x1,...,xn) != p(x1) p(x2|x1) p(xn|...) off by 0.163564845877% = 0.00525343418121
-------------------- >> begin captured stdout << ---------------------
example 1/4
example 2/4
example 3/4
p(x1,...,xn) != p(x1) p(x2|x1) p(xn|...)
actual = -1.10329115391
expected = -1.10854458809

--------------------- >> end captured stdout << ----------------------

----------------------------------------------------------------------
Ran 292 tests in 55.957s

FAILED (SKIP=15, errors=1, failures=1)
make: *** [test_cy] Error 1

The numpy error in distributions.tests.test_util.test_scores_to_probs is due to a numpy deprecation of unsafe in-place casting: https://github.com/numpy/numpy/issues/6464
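The error occurs because an integer-typed scores array cannot hold the float output of numpy.exp under the 'same_kind' casting rule. A minimal sketch of the fix, assuming the shape of distributions.util.scores_to_probs (the exact implementation in the repo may differ):

```python
import numpy

def scores_to_probs(scores):
    """Convert unnormalized log scores to normalized probabilities.

    Copying the input to float64 up front means the in-place
    numpy.exp(scores, out=scores) call always has a matching output
    dtype, avoiding the 'same_kind' casting TypeError when the caller
    passes an integer list or array.
    """
    scores = numpy.array(scores, dtype=numpy.float64)  # ensure float dtype
    scores -= scores.max()                  # shift for numerical stability
    probs = numpy.exp(scores, out=scores)   # safe: dtypes now match
    probs /= probs.sum()                    # normalize to sum to 1
    return probs

# Integer input such as [1, 2, 3] previously triggered the TypeError;
# with the cast it works for any numeric input.
probs = scores_to_probs([1, 2, 3])
```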

I'm not sure why the probability decomposition test is failing.

fsaad commented 6 years ago

For the test FAIL: distributions.tests.test_models.test_add_remove('lp.models.niw',), it seems that other models also do not return an exact match between the marginal and the product of conditionals. I'm not sure on what basis the thresholds were chosen, or whether it makes more sense to report an xfail or to adjust the threshold slightly.
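For context, the identity test_add_remove appears to check is the chain-rule decomposition p(x1,...,xn) = p(x1) p(x2|x1) ... p(xn|x1,...,xn-1), which holds exactly for conjugate models where both sides are available in closed form. A small self-contained illustration using a Beta-Bernoulli model (not code from this repo, just a sketch of the identity):

```python
import math

def log_marginal(data, a=1.0, b=1.0):
    """Log marginal likelihood of binary data under a Beta(a, b) prior,
    in closed form: B(a + heads, b + tails) / B(a, b)."""
    h = sum(data)
    t = len(data) - h
    log_beta = lambda x, y: math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    return log_beta(a + h, b + t) - log_beta(a, b)

def log_chain(data, a=1.0, b=1.0):
    """The same quantity via the chain rule: the sum of log posterior
    predictives log p(x_i | x_1, ..., x_{i-1})."""
    total, h = 0.0, 0
    for i, x in enumerate(data):
        p_heads = (a + h) / (a + b + i)  # posterior predictive of heads
        total += math.log(p_heads if x else 1.0 - p_heads)
        h += x
    return total

data = [1, 0, 1, 1, 0]
# For an exact conjugate model the two agree to floating-point precision;
# test_add_remove allows a small tolerance, which lp.models.niw exceeds here.
```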

fritzo commented 6 years ago

Looks like a nuisance random failure to me. I'd recommend either twiddling a random number seed or slightly increasing a threshold.

fsaad commented 6 years ago

It seems that lp.models.niw is already known to be buggy; I should have checked these issues first:

https://github.com/posterior/distributions/issues/6
https://github.com/posterior/loom/issues/4

PR #16 addresses the numpy error.

fsaad commented 6 years ago

Fixed by https://github.com/posterior/distributions/commit/173c55d304ec577a52b13581eab35e5200b23f7f