dopplershift closed this issue 9 years ago
Let's settle the API with 2 state variables and then do the tests? I'll come up with some friction situations that will test things out.
Now that we are getting settled on a structure, we should decide how to do this. We could use model runs from other codes as a comparison, but probably just making sure we are self-consistent is the best plan?
This comes down to: what have you done to convince yourself this is working? Whatever that is, turn it into a test. To me, this would at a minimum be checking the results from a run; you should be able to get away with only a few points.
`numpy.testing` has functions to make it easy to compare floating-point arrays.
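For example, a quick sketch of two of those helpers (the array values here are made up for illustration):

```python
import numpy as np

computed = np.array([0.6, 0.60720485, 0.58848708])
expected = np.array([0.6, 0.60720486, 0.58848709])

# assert_allclose raises AssertionError with an element-by-element report
# if any value differs by more than the tolerance
np.testing.assert_allclose(computed, expected, atol=1e-7)

# assert_almost_equal compares to a given number of decimal places
np.testing.assert_almost_equal(computed, expected, decimal=6)
```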
Ok. I did a few simple test cases. If these look like they are in the right direction, I'll do a bunch more for the other state laws and multi-state problems.
They look like a good start. Personally, I had envisioned running the model with about 6-10 requested output times and, assuming the run was good, keeping those values statically coded for checking.
If you're happy just checking the max and min, then by all means use those. The guiding principle should be: get to the point where you completely trust your tests. If the tests pass, then I can publish using my code.
That's probably a better solution. I was checking the location of the max/min as well to kind of do that, but I think your suggestion is cleaner. I should also add some tests for steady state conditions.
Exactly. Try to automate how you would validate.
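For the steady-state tests, one useful analytic target: rate-and-state friction has a closed-form steady-state value, mu_ss = mu0 + (a - b) ln(v / vref). A sketch using the parameters from the tests in this thread (plain variables here, not the actual `rsf` model attributes):

```python
import numpy as np

# Steady-state friction after a velocity step:
#   mu_ss = mu0 + (a - b) * ln(v / vref)
mu0, a, b = 0.6, 0.005, 0.01
v, vref = 10., 1.

mu_ss = mu0 + (a - b) * np.log(v / vref)

# The late-time friction after the 1 -> 10 velocity step should relax to this
np.testing.assert_almost_equal(mu_ss, 0.58848708, 8)
```

That matches the final values in the hardcoded `mu_true` arrays below, which is a nice self-consistency check that doesn't depend on a stored run.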
On May 30, 2015, at 07:29, John Leeman notifications@github.com wrote:
Hmm. If I run the model with "full" time output, then downsample, I get a test that passes:
```python
class TestDeiterichOneStateVar2(object):
    def setup(self):
        self.model = rsf.Model()
        self.model.mu0 = 0.6
        self.model.a = 0.005
        self.model.k = 1e-3
        self.model.v = 1.
        self.model.vref = 1.
        state1 = rsf.DieterichState(self.model)
        state1.b = 0.01
        state1.Dc = 10.
        self.model.state_relations = [state1]
        self.model.time = np.arange(0, 40.01, 0.01)
        lp_velocity = np.ones_like(self.model.time)
        lp_velocity[10*100:] = 10.
        self.model.loadpoint_velocity = lp_velocity
        self.model.solve()

    def test_friction(self):
        mu_true = np.array([
            0.6,        0.6,        0.6,        0.6,        0.6,
            0.6,        0.6,        0.6,        0.6,        0.6,
            0.6,        0.60720485, 0.60186637, 0.5840091,  0.58563096,
            0.58899543, 0.58998186, 0.58879751, 0.58803412, 0.58825653,
            0.58858313, 0.58860319, 0.58848625, 0.58844431, 0.58847472,
            0.58849913, 0.58849522, 0.58848506, 0.58848348, 0.58848674,
            0.5884883,  0.58848756, 0.58848677, 0.5884868,  0.58848711,
            0.58848718, 0.5884871,  0.58848704, 0.58848706, 0.58848708,
            0.58848708])
        np.testing.assert_almost_equal(self.model.results.friction[::100], mu_true, 8)
```
If I only request answers at those 41 times, then I get a friction of 0.6 at every time, even when setting `hmax` to 0.01 seconds. Hmmm.
```python
class TestDeiterichOneStateVar2(object):
    def setup(self):
        self.model = rsf.Model()
        self.model.mu0 = 0.6
        self.model.a = 0.005
        self.model.k = 1e-3
        self.model.v = 1.
        self.model.vref = 1.
        state1 = rsf.DieterichState(self.model)
        state1.b = 0.01
        state1.Dc = 10.
        self.model.state_relations = [state1]
        self.model.time = np.arange(0, 40.01, 1)
        lp_velocity = np.ones_like(self.model.time)
        lp_velocity[10*100:] = 10.
        self.model.loadpoint_velocity = lp_velocity
        self.model.solve(hmax=0.01)

    def test_friction(self):
        mu_true = np.array([
            0.6,        0.6,        0.6,        0.6,        0.6,
            0.6,        0.6,        0.6,        0.6,        0.6,
            0.6,        0.60720485, 0.60186637, 0.5840091,  0.58563096,
            0.58899543, 0.58998186, 0.58879751, 0.58803412, 0.58825653,
            0.58858313, 0.58860319, 0.58848625, 0.58844431, 0.58847472,
            0.58849913, 0.58849522, 0.58848506, 0.58848348, 0.58848674,
            0.5884883,  0.58848756, 0.58848677, 0.5884868,  0.58848711,
            0.58848718, 0.5884871,  0.58848704, 0.58848706, 0.58848708,
            0.58848708])
        np.testing.assert_almost_equal(self.model.results.friction, mu_true, 8)
```
Now that's weird. Can you tell if your step function is being called more often than the times you give it? I thought `odeint()` was using an adaptive step size.
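One way to check that directly is to count RHS evaluations; a minimal sketch with a toy ODE (not the friction model):

```python
import numpy as np
from scipy.integrate import odeint

calls = []

def rhs(y, t):
    calls.append(t)  # record every time the integrator evaluates the RHS
    return -y

t_out = np.linspace(0., 10., 5)  # only 5 requested output times
y = odeint(rhs, 1., t_out)

# The adaptive LSODA stepper evaluates the RHS many more times than the
# requested output grid, and at times that are not on that grid
print(len(calls), "evaluations for", len(t_out), "requested times")
```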
The step function gets called at lots of weird time increments as `odeint()` does its adaptive sizing, but the problem was PEBKAC that took rested eyes to see. The `lp_velocity[10*100:] = 10.` bit should be `lp_velocity[10:] = 10.` with the new (reduced) time request. DOH. Updating all of the tests to use this kind of format now and adding some tests for the displacement calculation.
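To avoid that class of off-by-a-grid bug, the step index could be derived from the time values instead of hardcoded, so the same setup works for any sampling rate. A sketch (the helper name is made up):

```python
import numpy as np

def step_velocity(time, t_step, v_before=1., v_after=10.):
    """Loadpoint velocity with a step at t_step, independent of sampling."""
    v = np.full_like(time, v_before, dtype=float)
    v[time >= t_step] = v_after
    return v

# Works unchanged for both the fine and the coarse time grids
fine = step_velocity(np.arange(0, 40.01, 0.01), 10.)
coarse = step_velocity(np.arange(0, 40.01, 1), 10.)
assert fine[999] == 1. and fine[1000] == 10.
assert coarse[9] == 1. and coarse[10] == 10.
```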
Always good when things return to making sense.
I've been working on the tests. The one tricky test is for the slider displacement. Since it is calculated from slider velocity and time, taking coarse steps gives a different result. We generally don't really use this output, but maybe it's worth adding a warning somewhere?
Well, from a testing perspective, we just want to make sure we get the answer we expect. Also, since the slider displacement is a repeated integral, I think it's enough to just check the last value (could add midpoint as well).
As far as a warning is concerned, I'd only go that far if you can reliably detect the condition. Maybe a note in the docstring?
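To see the coarse-step sensitivity concretely: trapezoidal integration of the same velocity history sampled at different rates gives different displacements when the velocity varies rapidly. An illustrative sketch with a made-up velocity history (not model output):

```python
import numpy as np

# A hypothetical, rapidly decaying slider velocity (for illustration only)
def v(t):
    return 1. + 9. * np.exp(-t)

t_fine = np.linspace(0., 10., 1001)
t_coarse = np.linspace(0., 10., 11)

# Trapezoidal displacement from each sampling of the same velocity history
d_fine = np.trapz(v(t_fine), t_fine)
d_coarse = np.trapz(v(t_coarse), t_coarse)

# The coarse estimate overshoots the fine one noticeably, which is why
# checking only the last displacement value needs a fixed time grid
print(d_fine, d_coarse)
```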
Need to add some automated tests to help check if things break as new features are added.