Open: zaccharieramzi opened this issue 5 years ago
Could you please post the whole call stack so we can evaluate the error in depth? Also, could you show us your conda list (or equivalently pip freeze)?
Sure.
My pip freeze output:
astroid==2.2.5
atomicwrites==1.3.0
attrs==19.1.0
backcall==0.1.0
bleach==3.1.0
certifi==2019.3.9
chardet==3.0.4
coverage==4.5.3
coveralls==1.7.0
cycler==0.10.0
decorator==4.4.0
defusedxml==0.5.0
docopt==0.6.2
entrypoints==0.3
future==0.17.1
idna==2.8
ipykernel==5.1.0
ipython==7.4.0
ipython-genutils==0.2.0
ipywidgets==7.4.2
isort==4.3.16
jedi==0.13.3
Jinja2==2.10
jsonschema==3.0.1
jupyter==1.0.0
jupyter-client==5.2.4
jupyter-console==6.0.0
jupyter-core==4.4.0
kiwisolver==1.0.1
lazy-object-proxy==1.3.1
MarkupSafe==1.1.1
matplotlib==3.0.3
mccabe==0.6.1
mistune==0.8.4
more-itertools==7.0.0
nbconvert==5.4.1
nbformat==4.4.0
notebook==5.7.7
numpy==1.16.2
-e git+git@github.com:zaccharieramzi/odl.git@0c21c8ab59300bbc6e23439b1abf7d4e2a7b8153#egg=odl
packaging==19.0
pandocfilters==1.4.2
parso==0.3.4
pathlib2==2.3.3
pexpect==4.6.0
pickleshare==0.7.5
pluggy==0.9.0
prometheus-client==0.6.0
prompt-toolkit==2.0.9
ptyprocess==0.6.0
py==1.8.0
pyFFTW==0.11.1
Pygments==2.3.1
pylint==2.3.1
pyparsing==2.3.1
pyrsistent==0.14.11
pytest==4.3.1
python-dateutil==2.8.0
PyWavelets==1.0.2
pyzmq==18.0.1
qtconsole==4.4.3
requests==2.21.0
scipy==1.2.1
Send2Trash==1.5.0
six==1.12.0
terminado==0.8.1
testpath==0.4.2
tornado==6.0.2
tqdm==4.31.1
traitlets==4.3.2
typed-ast==1.3.1
urllib3==1.24.1
wcwidth==0.1.7
webencodings==0.5.1
widgetsnbextension==3.4.2
wrapt==1.11.1
And the call stack (only for one failure, otherwise it gets too big):
______________________________________________________________ test_L1_norm[ space=power_space_unif_discr - sigma=10 - tspace_impl='numpy' ] _______________________________________________________________
space = ProductSpace(uniform_discr(0.0, 1.0, 7), 2), sigma = 10.0
    def test_L1_norm(space, sigma):
        """Test the L1-norm."""
        sigma = float(sigma)
        func = odl.solvers.L1Norm(space)
        x = noise_element(space)

        # Test functional evaluation
        expected_result = np.abs(x).inner(space.one())
        assert func(x) == pytest.approx(expected_result)

        # Test gradient - expecting sign function
        expected_result = func.domain.element(np.sign(x))
        assert all_almost_equal(func.gradient(x), expected_result)

        # Test proximal - expecting the following:
        #                      | x_i + sigma, if x_i < -sigma
        #                z_i = { 0,           if -sigma <= x_i <= sigma
        #                      | x_i - sigma, if x_i > sigma
        tmp = np.zeros(space.shape)
        orig = x.asarray()
        tmp[orig > sigma] = orig[orig > sigma] - sigma
        tmp[orig < -sigma] = orig[orig < -sigma] + sigma
        expected_result = space.element(tmp)
        assert all_almost_equal(func.proximal(sigma)(x), expected_result)

        # Test convex conjugate - expecting 0 if |x|_inf <= 1, infty else
        func_cc = func.convex_conj
>       norm_larger_than_one = 1.1 * x / np.max(np.abs(x))
test/solvers/functional/default_functionals_test.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../venv/lib/python3.5/site-packages/numpy/core/fromnumeric.py:2505: in amax
    initial=initial)
../venv/lib/python3.5/site-packages/numpy/core/fromnumeric.py:86: in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
space/pspace.py:1083: in __array_wrap__
    return self.space.element(array)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = ProductSpace(uniform_discr(0.0, 1.0, 7), 2), inp = array(1.3925678263326573), cast = True
    def element(self, inp=None, cast=True):
        """Create an element in the product space.

        Parameters
        ----------
        inp : optional
            If ``inp`` is ``None``, a new element is created from
            scratch by allocation in the spaces. If ``inp`` is
            already an element of this space, it is re-wrapped.
            Otherwise, a new element is created from the
            components by calling the ``element()`` methods
            in the component spaces.
        cast : bool, optional
            If ``True``, casting is allowed. Otherwise, a ``TypeError``
            is raised for input that is not a sequence of elements of
            the spaces that make up this product space.

        Returns
        -------
        element : `ProductSpaceElement`
            The new element

        Examples
        --------
        >>> r2, r3 = odl.rn(2), odl.rn(3)
        >>> vec_2, vec_3 = r2.element(), r3.element()
        >>> r2x3 = ProductSpace(r2, r3)
        >>> vec_2x3 = r2x3.element()
        >>> vec_2.space == vec_2x3[0].space
        True
        >>> vec_3.space == vec_2x3[1].space
        True

        Create an element of the product space

        >>> r2, r3 = odl.rn(2), odl.rn(3)
        >>> prod = ProductSpace(r2, r3)
        >>> x2 = r2.element([1, 2])
        >>> x3 = r3.element([1, 2, 3])
        >>> x = prod.element([x2, x3])
        >>> x
        ProductSpace(rn(2), rn(3)).element([
            [ 1., 2.],
            [ 1., 2., 3.]
        ])
        """
        # If data is given as keyword arg, prefer it over arg list
        if inp is None:
            inp = [space.element() for space in self.spaces]

        if inp in self:
            return inp

>       if len(inp) != len(self):
E       TypeError: len() of unsized object

space/pspace.py:492: TypeError
Thanks for the details!
The reason for these failures seems to be a change in the implementation of NumPy's reduction functions, see this commit: https://github.com/numpy/numpy/commit/b344da928f06d867b0a4f41746c1bcb759a2fd8f
The old code used dedicated implementations for those reductions; the above commit changed that behavior to calling ufunc.reduce of the corresponding ufunc. For instance, np.sum now calls np.add.reduce, and np.max calls np.maximum.reduce.
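As a quick illustration with a plain NumPy array (a sketch, not taken from the report; the sample values are made up):

    import numpy as np

    a = np.array([1.0, 5.0, 3.0])

    # On NumPy >= 1.16 these go through the same ufunc.reduce code path:
    np.max(a)             # dispatches to np.maximum.reduce(a) -> 5.0
    np.maximum.reduce(a)  # -> 5.0
    np.sum(a)             # dispatches to np.add.reduce(a) -> 9.0
    np.add.reduce(a)      # -> 9.0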
What ultimately triggers the error is that ufunc.reduce calls the object's __array_wrap__ again when finished, to give custom classes the opportunity to re-wrap the result. Unfortunately, ProductSpace.element is not suited for scalar input, which is a bug. Before, the reduction would simply return a NumPy scalar, so the problem did not occur.
The change in NumPy was introduced in version 1.16, so any 1.15 version should still work fine.
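For reference, this is roughly the kind of guard that would avoid the crash in the product-space __array_wrap__ (a minimal sketch with a simplified signature, not necessarily the actual fix in ODL):

    # Hypothetical, simplified version of ProductSpaceElement.__array_wrap__:
    def __array_wrap__(self, array):
        if array.ndim == 0:
            # A reduction such as np.max collapsed the result to a scalar;
            # return a plain scalar instead of trying to wrap it in the space.
            return array.item()
        return self.space.element(array)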
I just installed odl from source and tried to run the tests. I get 6 errors, which are best summed up by these lines:
These errors happen for these 2 tests:
I tried to dig in a little, but since I am not yet familiar with the code, it gets complex very quickly for me. I can tell you that the failure happens during the np.max(np.abs(x)) call on this line, for example, but not much more...
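Based on the traceback above, a minimal reproduction would presumably look like this (using the same product space as in the failing test):

    import numpy as np
    import odl

    space = odl.ProductSpace(odl.uniform_discr(0.0, 1.0, 7), 2)
    x = space.one()

    # With NumPy >= 1.16 the reduction ends up in ProductSpace.element with a
    # 0-d array and raises "TypeError: len() of unsized object":
    np.max(np.abs(x))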