scidash / neuronunit

A package for data-driven validation of neuron and ion channel models using SciUnit
http://neuronunit.scidash.org

Add sanity check for NaN values to neuronunit test class #50

Closed russelljjarvis closed 7 years ago

russelljjarvis commented 7 years ago

The exhaustive search exposes parameter values that drive the membrane potential towards +np.inf (actually nan). I was thinking of creating a test subclass that just checks whether current injections will lead to nan values, and then it would return something like test=tests.sufficientData('None') for possibly all of the subtests that involve current injection.

The alternative is to add extra exception-handling tests to each subtest class, but I feel that this is less efficient. The sanity test is a type of test too, although it's preliminary and does not involve comparisons to empirical data.

rgerkin commented 7 years ago

There are several score types that can be used when, for various reasons, there is not going to be a useful score:

sciunit.ErrorScore
sciunit.NoneScore
sciunit.NAScore
sciunit.TBDScore
sciunit.scores.InsufficientDataScore

You can read all the docstrings for those, but I think InsufficientDataScore is probably the one you want. ErrorScore is more for uncaught code execution errors, NoneScore and NAScore relate to capabilities, and TBDScore is for skipped tests. So you would just return an InsufficientDataScore (initialized with None) from your test when you get the inf membrane potential condition.
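A minimal sketch of that pattern, using a stand-in InsufficientDataScore class (the real one lives in sciunit.scores) and only the stdlib, so the shape of the check is clear without depending on the actual neuronunit test API:

```python
import math

class InsufficientDataScore:
    """Stand-in for sciunit.scores.InsufficientDataScore."""
    def __init__(self, score):
        self.score = score

def score_membrane_potential(vm, compute_score):
    """Return InsufficientDataScore(None) when the membrane potential
    trace contains NaN or inf; otherwise defer to the real scoring
    function (compute_score is a placeholder for the test's own logic)."""
    if any(math.isnan(v) or math.isinf(v) for v in vm):
        return InsufficientDataScore(None)
    return compute_score(vm)
```

The sanity check runs once, up front, so every current-injection subtest can share it instead of each one handling nan values separately.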

russelljjarvis commented 7 years ago

Okay, this is great. The neuronunit code is already very mature.

Although previously I described how scoop needs objects to be picklable, and that the neuronunit test class is not picklable, and as a consequence I cannot rewrite the Rheobase test class in a nice object-oriented way (and have it be parallel at the same time). It's still largely true that it would be difficult or impossible to write a parallel version of the rheobase search that fits within the pre-existing OOP framework, but this is because calls to futures.map must all happen under __main__() and also because the functions being mapped by futures.map must be defined at the top level (they can't be nested in anything, including a class definition).

Probably the only reason the neuronunit test suite, or test objects, can't be pickled is because the related data they contain is of the type NEURON Vector (just-in-time compiled C code, or some other non-picklable type). I can and should fix this in the future by casting the time and membrane potential vectors to numpy arrays, using copy.copy to dereference the Python vectors, and then destroying the NEURON vectors that were previously being referred to.

However, that being said, all of the tests besides the rheobase search are relatively computationally light. I have since come to understand that if I do create new types of tests, or test components, I can instantiate them and append them to the list called test_class_params that you defined below. If possible I should try to embody the serial parts of my code inside objects in a similar pattern.

    tests += [nu_tests.RheobaseTest(observation=observation)]
    test_class_params = [(nu_tests.InputResistanceTest,None),
                         (nu_tests.TimeConstantTest,None),
                         (nu_tests.CapacitanceTest,None),
                         (nu_tests.RestingPotentialTest,None),
                         (nu_tests.InjectedCurrentAPWidthTest,None),
                         (nu_tests.InjectedCurrentAPAmplitudeTest,None),
                         (nu_tests.InjectedCurrentAPThresholdTest,None)]
rgerkin commented 7 years ago

You can write a script that tries to recursively pickle the Test, via its __dict__ attribute, and consequently determine which parts cannot be pickled. Let me know if you need any help with that.
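A sketch of that diagnostic: walk an object's __dict__ recursively, try to pickle each attribute, and collect the paths of anything that fails (the function and attribute names here are illustrative, not part of sciunit):

```python
import pickle

def find_unpicklable(obj, path="test", seen=None):
    """Recursively attempt to pickle obj and its __dict__ attributes,
    returning the attribute paths that fail to pickle."""
    if seen is None:
        seen = set()
    if id(obj) in seen:  # guard against reference cycles
        return []
    seen.add(id(obj))
    try:
        pickle.dumps(obj)
        return []  # this whole subtree pickles fine
    except Exception:
        bad = [path]
    # Descend into attributes to localize the failure
    for name, value in getattr(obj, "__dict__", {}).items():
        bad.extend(find_unpicklable(value, f"{path}.{name}", seen))
    return bad
```

Run against a neuronunit Test instance, this should point straight at attributes such as NEURON Vectors that block pickling.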

russelljjarvis commented 7 years ago

I don't think it will be hard to achieve. It's more just finding the time to do it. It's currently not an obstacle to solving the optimization problem, or the exhaustive search; it's just a medium/long-term plan for making my implementation simpler and more maintainable. I will work on it next week and I will let you know if I can't isolate it to the NEURON Vectors in the related_data attribute of the extended score pandas data frame.

rgerkin commented 7 years ago

All of this should be handled in the constructor, which can run a sanity check. I need to add a check_run_params method to the Test.judge method, which can check that the parameters being provided (stimulus parameters, for example) are sane. For example, if run_params had an injected_current key, and that in turn had an ampl value of np.inf, then it would just return an error message which Test.judge would use to return a sciunit.ErrorScore. Maybe these checks should just be associated with Capabilities, so that unreasonable values of injected current would be flagged by the inject_current method.

Or, if we are talking about model parameters themselves, then there should be a check_params method in the model constructor that does not allow models with ridiculous parameters to be constructed.
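A sketch of what such a check might look like. The run_params keys here (injected_current, ampl) are taken from the example above, and BadParameterValue is a stand-in exception, not the actual sciunit class:

```python
import math

class BadParameterValue(ValueError):
    """Stand-in for sciunit's bad-parameter exception."""
    def __init__(self, name, value):
        super().__init__(f"Bad parameter value: {name}={value!r}")

def check_run_params(run_params):
    """Raise if a stimulus parameter is NaN or infinite; do nothing
    when the parameters look sane."""
    current = run_params.get('injected_current', {})
    ampl = current.get('ampl')
    if ampl is not None and not math.isfinite(ampl):
        raise BadParameterValue('ampl', ampl)
```

Test.judge could call this before running the model, converting any raised exception into an ErrorScore rather than letting the simulation diverge.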

rgerkin commented 7 years ago

@russelljjarvis In 46ba224425b2cdd4a71cf2a9a5992983c052b918 I added the following things:

In both cases you can raise a sciunit.BadParameterValueError (which takes two arguments, a parameter name and a value). Since you may want to just skip such a model instead, you can also just create the sciunit.BadParameterValueError (i.e. assign it to a variable but don't raise it) and then return a sciunit.ErrorScore with that exception as the score.
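The create-but-don't-raise pattern could look roughly like this; BadParameterValueError and ErrorScore are stubbed here so the sketch is self-contained (the real classes live in sciunit):

```python
class BadParameterValueError(Exception):
    """Stand-in for sciunit.BadParameterValueError(name, value)."""
    def __init__(self, name, value):
        super().__init__(f"Bad parameter value: {name}={value!r}")
        self.name, self.value = name, value

class ErrorScore:
    """Stand-in for sciunit.ErrorScore, which wraps an exception."""
    def __init__(self, exc):
        self.score = exc

def skip_model(name, value):
    # Create the exception without raising it, then return it wrapped
    # in a score, so the optimizer can move on to the next model.
    err = BadParameterValueError(name, value)
    return ErrorScore(err)
```

This keeps the control flow in the caller's hands: a raise aborts the run, while a returned ErrorScore just marks this one model as bad.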

rgerkin commented 7 years ago

You can raise an exception at any time (for example within one of the model methods). You could check that the vm is reasonable at the end of get_membrane_potential, and if it is not, raise some exception (any one you want, e.g. ValueError) within that method. Then, if you pass the stop_on_error=False keyword argument to judge, the score returned by judge will be either a regular kind of score (if there is no error) or a sciunit.ErrorScore if an exception was raised. Then you also need, within the optimizer, to check which kind of score it is when you extract the sort_key, making all ErrorScores give a sort_key of 0, since they reflect the worst model conditions:

# Run the model, then: 
if isinstance(my_score,sciunit.ErrorScore):
    return 0
else:
    return my_score.sort_key
russelljjarvis commented 7 years ago

I like this proposed solution. I have deleted __sanity_test from tests/__init__.py and added the appropriate code to the NEURON backend's get_membrane_potential.

I have made it so get_membrane_potential calls:


def nan_inf_test(self, mp):
    '''
    Check that a HOC recording vector of membrane potential contains
    no NaNs or infinities, and that it actually perturbs in response
    to the stimulating current injection.
    '''
    import math
    import numpy as np

    # Does the membrane potential even change?
    if np.array(mp).std() <= 0:
        return False

    # Does the membrane potential consist only of
    # experimentally valid values?
    for i in mp:
        assert type(i) == np.float64
        if math.isnan(i) or math.isinf(i):
            return False
    return True

Then get_membrane_potential contains something like:


        if boolean:
            return AnalogSignal(
                fixedSignalcp,
                units=mV,
                sampling_period=dt_py * ms
            )
        else:
            from sciunit import ErrorScore
            return ErrorScore  # or arguably sciunit.NoneScore

I have not written the code to check if the stimulation protocol parameters are also appropriate.

I agree with the block:

# Run the model, then: 
if isinstance(my_score,sciunit.ErrorScore):
    return 0
else:
    return my_score.sort_key

inside the optimizer.

russelljjarvis commented 7 years ago

Actually, it's very slow to constantly check elements of the membrane potential lists inside the backends.py get_membrane_potential method.

I am now calling get_neab.suite.judge as below. I wonder if that will be sufficient.

score = get_neab.suite.judge(model, stop_on_error=False)
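Downstream, the optimizer still needs the ErrorScore-to-0 mapping from the earlier snippet; a self-contained sketch of that extraction, with stub classes standing in for the sciunit score types:

```python
class ErrorScore:
    """Stand-in for sciunit.ErrorScore (produced when judge catches
    an exception under stop_on_error=False)."""
    pass

class GoodScore:
    """Stand-in for a regular sciunit score carrying a sort_key."""
    def __init__(self, sort_key):
        self.sort_key = sort_key

def fitness(score):
    """Map a score to an optimizer fitness, treating errors as worst (0)."""
    if isinstance(score, ErrorScore):
        return 0
    return score.sort_key

# Example: one diverged model among two healthy ones
fitnesses = [fitness(s) for s in [GoodScore(0.8), ErrorScore(), GoodScore(0.5)]]
```

With this, the nan check moves out of the hot simulation loop entirely: diverging models simply score as errors and sink to the bottom of the ranking.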
rgerkin commented 7 years ago

@russelljjarvis I think we have the building blocks in place for this now. Reopen this issue if one of them doesn't work for you.