scidash / neuronunit

A package for data-driven validation of neuron and ion channel models using SciUnit
http://neuronunit.scidash.org

TimeConstantTest gives different results in Python 2.7 and 3.x #130

Closed: rgerkin closed this issue 6 years ago

rgerkin commented 6 years ago

This is preventing neuronunit/master from passing.

russelljjarvis commented 6 years ago

Hi @rgerkin, is the stack trace from the failure like the output below? If so, this bug is affecting me too; please let me know when it is fixed/patched.


    ...
    /opt/conda/lib/python3.5/site-packages/sciunit-0.1.5.8-py3.5.egg/sciunit/__init__.py in judge(self, model, skip_incapable, stop_on_error, deep_error)
    308 
    309         if deep_error:
--> 310             score = self._judge(model, skip_incapable=skip_incapable)
    311         else:
    312             try:

    /opt/conda/lib/python3.5/site-packages/sciunit-0.1.5.8-py3.5.egg/sciunit/__init__.py in _judge(self, model, skip_incapable)
    256         self.check_capabilities(model, skip_incapable=skip_incapable)
    257         # 2.
--> 258         prediction = self.generate_prediction(model)
    259         self.last_model = model
    260         # 3.

    /home/jovyan/neuronunit/neuronunit/tests/passive.py in generate_prediction(self, model)
    125         """Implementation of sciunit.Test.generate_prediction."""
    126         i,vm = super(TimeConstantTest,self).generate_prediction(model)
--> 127         tau = self.__class__.get_tau(vm, i)
    128         tau = tau.simplified
    129         # Put prediction in a form that compute_score() can use.

    /home/jovyan/neuronunit/neuronunit/tests/passive.py in get_tau(cls, vm, i)
     47         stop = i['duration']+i['delay']-1*pq.ms # 1 ms before pulse end
     48         region = cls.get_segment(vm,start,stop)
---> 49         amplitude,tau,y0 = cls.exponential_fit(region, i['delay'])
     50         return tau
     51 

    /home/jovyan/neuronunit/neuronunit/tests/passive.py in exponential_fit(cls, segment, offset)
     76             return vm_fit.squeeze()
     77 
---> 78         popt, pcov = curve_fit(func, t, vm.squeeze(), p0=guesses) # Estimate starting values for better convergence
     79         #plt.plot(t,vm)
     80         #plt.plot(t,func(t,*popt))

    /opt/conda/lib/python3.5/site-packages/scipy/optimize/minpack.py in curve_fit(f, xdata, ydata, p0, sigma, absolute_sigma, check_finite, bounds, method, jac, **kwargs)
    734         # Remove full_output from kwargs, otherwise we're passing it in twice.
    735         return_full = kwargs.pop('full_output', False)
--> 736         res = leastsq(func, p0, Dfun=jac, full_output=1, **kwargs)
    737         popt, pcov, infodict, errmsg, ier = res
    738         cost = np.sum(infodict['fvec'] ** 2)

    /opt/conda/lib/python3.5/site-packages/scipy/optimize/minpack.py in leastsq(func, x0, args, Dfun, full_output, col_deriv, ftol, xtol, gtol, maxfev, epsfcn, factor, diag)
    375     if not isinstance(args, tuple):
    376         args = (args,)
--> 377     shape, dtype = _check_func('leastsq', 'func', func, x0, args, n)
    378     m = shape[0]
    379     if n > m:

    /opt/conda/lib/python3.5/site-packages/scipy/optimize/minpack.py in _check_func(checker, argname, thefunc, x0, args, numinputs, output_shape)
     24 def _check_func(checker, argname, thefunc, x0, args, numinputs,
     25                 output_shape=None):
---> 26     res = atleast_1d(thefunc(*((x0[:numinputs],) + args)))
     27     if (output_shape is not None) and (shape(res) != output_shape):
     28         if (output_shape[0] != 1):

    /opt/conda/lib/python3.5/site-packages/scipy/optimize/minpack.py in func_wrapped(params)
    452     if transform is None:
    453         def func_wrapped(params):
--> 454             return func(xdata, *params) - ydata
    455     elif transform.ndim == 1:
    456         def func_wrapped(params):

    /home/jovyan/neuronunit/neuronunit/tests/passive.py in func(x, a, b, c)
     73             '''
     74             vm_fit[:offset] = c
---> 75             vm_fit[offset:,0] = a * np.exp(-t[offset:]/b) + c
     76             return vm_fit.squeeze()
     77 
rgerkin commented 6 years ago

No, there is no stack trace, just two different results in the two versions of Python (i.e., two different scores).

russelljjarvis commented 6 years ago

Okay. Even though this might be an unrelated issue, the get_tau.ipynb test now fails:

ipython nbconvert --to python get_tau.ipynb

Produces this file

Workflow: comment out line 13 of the generated script, # get_ipython().magic('matplotlib inline'), then run:

ipython get_tau.py 
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
/home/jovyan/neuronunit/unit_test/get_tau.py in <module>()
     50 tau = 7*pq.ms
     51 
---> 52 vm = make_sweep(total_time,amplitude,offset_time,tau)
     53 
     54 plt.plot(vm.times.rescale('ms'),vm)

/home/jovyan/neuronunit/unit_test/get_tau.py in make_sweep(total_time, amplitude, offset_time, tau)
     39     samples_until_offset = int(samples_until_offset)
     40     exponential = amplitude*np.exp(-(times[samples_until_offset:]-offset_time)/tau)
---> 41     vm[samples_until_offset:,0] += exponential.reshape(-1,1)
     42     return vm
     43 

/opt/conda/lib/python3.5/site-packages/neo-0.4.0-py3.5.egg/neo/core/analogsignal.py in __getitem__(self, i)
    198         Get the item or slice :attr:`i`.
    199         '''
--> 200         obj = super(BaseAnalogSignal, self).__getitem__(i)
    201         if isinstance(obj, BaseAnalogSignal):
    202             # update t_start and sampling_rate

/opt/conda/lib/python3.5/site-packages/quantities-0.11.1-py3.5.egg/quantities/quantity.py in __getitem__(self, key)
    346     @with_doc(np.ndarray.__getitem__)
    347     def __getitem__(self, key):
--> 348         ret = super(Quantity, self).__getitem__(key)
    349         if isinstance(ret, Quantity):
    350             return ret
russelljjarvis commented 6 years ago

The possibly unrelated error I am seeing is just a dimensionality/syntax problem. I suspect it has something to do with the interaction between neo and Python 3.5.

In the get_tau.py test, line 41:

    vm[samples_until_offset:,0] += exponential.reshape(-1,1)

could be:

    vm[samples_until_offset:] += exponential

This yields: Estimated tau = 7.195 ms; Actual tau = 7.0 ms, thus satisfying the get_tau test.
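
For reference, the same sanity check can be reproduced outside neo with plain NumPy/SciPy. This is only a sketch with illustrative names and constants (not the ones in get_tau.py), but it shows the exponential fit recovering a known tau when the sweep is kept 1-D:

    import numpy as np
    from scipy.optimize import curve_fit

    dt = 0.1            # ms per sample
    total_time = 100.0  # ms
    offset_time = 20.0  # ms; the decay starts here
    amplitude = -10.0   # mV
    true_tau = 7.0      # ms

    t = np.arange(0.0, total_time, dt)
    offset = int(offset_time / dt)

    # Build the synthetic sweep as a 1-D array, so no column index is needed.
    vm = np.zeros_like(t)
    vm[offset:] += amplitude * np.exp(-(t[offset:] - offset_time) / true_tau)

    def func(t, a, b, c):
        out = np.full_like(t, c)
        out[offset:] = a * np.exp(-(t[offset:] - offset_time) / b) + c
        return out

    popt, _ = curve_fit(func, t, vm, p0=[amplitude, 5.0, 0.0])
    print("Estimated tau = %.3f ms" % popt[1])  # ~7.0 ms for this noiseless sweep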

Likewise, line 75 of passive.py reads:

            vm_fit[offset:,0] = a * np.exp(-t[offset:]/b) + c

But it could be:

            vm_fit[offset:] = a * np.exp(-t[offset:]/b) + c
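
Which spelling works depends on whether the array being written into is 1-D or an (N, 1) column, which may be exactly what differs between the neo versions involved here. A tiny NumPy illustration (shapes assumed for the example):

    import numpy as np

    flat = np.zeros(5)        # shape (5,), e.g. a squeezed vm
    col = np.zeros((5, 1))    # shape (5, 1), e.g. the array behind an AnalogSignal

    vals = np.ones(3)
    flat[2:] = vals           # OK for a 1-D array
    col[2:, 0] = vals         # OK for a 2-D column
    # flat[2:, 0] = vals      # IndexError: too many indices for array
    # col[2:] = vals          # ValueError: could not broadcast (3,) into (3, 1)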
rgerkin commented 6 years ago

@russelljjarvis Is this still an issue? I think it may have just been a matter of which version of neo was in use.
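
If it does come down to the neo version, a quick environment printout run under each interpreter would confirm which combination produces which score; just a diagnostic sketch:

    import sys
    import neo, numpy, quantities, scipy

    print(sys.version)
    print("neo", neo.__version__,
          "| numpy", numpy.__version__,
          "| scipy", scipy.__version__,
          "| quantities", quantities.__version__)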