Open willu47 opened 1 year ago
Hey @willu47! This is a good point and can indeed be a bit fiddly. As you implied, a separation of the unit tests into `.vars`, `.coeffs` and the `str` representation is sensible. I would also prefer the first option as it yields more stable results. Another form of it could be achieved by manually reconstructing single constraints, like
```python
import numpy as np
from linopy import Model

m = Model()
coords = [range(10)]
x = m.add_variables(coords=coords, name="x")
y = m.add_variables(coords=coords, name="y")
mask = np.array([True, True, True, True, True, False, False, False, False, False])
con = m.add_constraints(x - y.shift(dim_0=1) >= 0, mask=mask)

# check the constraint at `0`
assert all(con.vars[0] == (x[0].label, -1))  # -1 is the fill value from the shift
# check the constraint at `1`
assert all(con.vars[1] == (x[1].label, y[0].label))
# check the constraint at `9`
# the following would be intuitive, but is not the case:
# assert all(con.vars[9] == -1)
# as the mask is applied to the labels only; probably we should change this
# instead it is
assert all(con.vars[9] == (x[9].label, y[8].label))
# and
assert con.labels[9] == -1
```
Could that improve the tests?
Hi @FabianHofmann. Many thanks for your patience with my long and wordy issues! I realise my original question was unclear - I was trying to work out exactly what I want to ask and I hope the following is easier to follow now.
I think there are two main use cases for writing tests. The first is for the development of linopy itself, and for linopy developers I think the option you present is acceptable. However, the second set of users of tests I perceive are users of linopy, like myself, who want to test that their model is correctly generated. For me, the readability of the tests is paramount - I want something as close to the model formulation as possible. For example, when I create a constraint, I see this pretty printed readout:
```
Constraint `CAa1_TotalNewCapacity` (REGION: 1, YEAR: 1, TECHNOLOGY: 2)
----------------------------------------------------------------------
[BB, 2016, gas_import]: 1.0 AccumulatedNewCapacity[BB, gas_import, 2016] - 1.0 NewCapacity[BB, gas_import, 2016] = 0
[BB, 2016, gas_plant]: 1.0 AccumulatedNewCapacity[BB, gas_plant, 2016] - 1.0 NewCapacity[BB, gas_plant, 2016] = 0
```
What I would like is to be able to test the constraint generation like so:
```python
actual = m.add_my_constraint(data, model)  # function which adds the constraint to the model
assert actual['BB', '2016', 'gas_import'] == m.variables['AccumulatedNewCapacity']['BB', 'gas_import', 2016] - m.variables['NewCapacity']['BB', 'gas_import', 2016] == 0
assert actual['BB', '2016', 'gas_plant'] == m.variables['AccumulatedNewCapacity']['BB', 'gas_plant', 2016] - m.variables['NewCapacity']['BB', 'gas_plant', 2016] == 0
```
There are a few roadblocks I perceive at the moment to this exact syntax. The first is that `==` is overridden, but this could be solved with the addition of a `testing.assert_constraint_equal` helper function. The second is that `Constraint` objects are not currently subscriptable. Could `__getitem__` access be provided using the labels of the coordinates, as `m.constraints['CAa1_TotalNewCapacity'].labels[0, 0, 1]` is not as readable?
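To make the idea concrete, a helper like the proposed `assert_constraint_equal` could compare constraints structurally rather than by their string printout. The sketch below is a minimal, hypothetical illustration using plain dictionaries; the `terms`/`sign`/`rhs` layout is an assumption for illustration, not linopy's internal representation. Treating each side as a mapping from variable label to coefficient makes the comparison independent of term order:

```python
# Hypothetical sketch of an order-insensitive constraint comparison helper.
# The name assert_constraint_equal and the dict layout are illustrative,
# not part of linopy's API.

def assert_constraint_equal(actual, expected):
    """Compare two constraints as {variable label: coefficient} mappings,
    so the order of terms does not affect the result."""
    actual_terms = dict(actual["terms"])
    expected_terms = dict(expected["terms"])
    assert actual_terms == expected_terms, (actual_terms, expected_terms)
    assert actual["sign"] == expected["sign"]
    assert actual["rhs"] == expected["rhs"]

con_a = {"terms": [("AccumulatedNewCapacity", 1.0), ("NewCapacity", -1.0)],
         "sign": "=", "rhs": 0}
con_b = {"terms": [("NewCapacity", -1.0), ("AccumulatedNewCapacity", 1.0)],
         "sign": "=", "rhs": 0}
assert_constraint_equal(con_a, con_b)  # passes despite different term order
```

A real helper would of course also iterate over the constraint's coordinates, but the principle is the same.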
Hey @willu47
I added some testing functions. Now the following functionality is supported:
```python
import pandas as pd
import linopy
from linopy.testing import assert_varequal, assert_linequal, assert_conequal

m = linopy.Model()

x = m.add_variables(coords=[pd.RangeIndex(10, name="first")], name="x")
assert_varequal(x, m.variables.x)
# or
assert_varequal(x, m.variables["x"])

y = m.add_variables(coords=[pd.RangeIndex(10, name="first")], name="y")
expr = 2 * x + y
assert_linequal(expr, 2 * x + y)

con = m.add_constraints(expr >= 3, name="con")
assert_conequal(con, m.constraints.con)

# the con object gets labels when added to the model, so we cannot compare the
# original anonymous constraint with the assigned constraint, but we can do this
assert_conequal(expr >= 3, con.lhs >= con.rhs)
```
Hope that helps! Feel free to close the issue or leave open if there are more things to consider.
@FabianHofmann - many thanks for this. I'll try these new testing functions out and provide some feedback (if any) here, so I'll leave the issue open for now.
From a user perspective, what suggestions do you have regarding writing unit tests for models implemented in linopy? Are there any plans to improve the opportunities for comparing constraint and variable objects with pre-generated fixtures?
One key feature I'm looking for in a modelling framework is being able to test individual components, and to test the integration of those components into a complete optimisation model. I think this would make the process of developing and maintaining modelling frameworks considerably more robust and efficient. The task is particularly challenging because of the intertwined nature of constraints and variables in optimisation models: it is very difficult to effectively separate individual components. In practice, this means that testing is only possible with a complete and often complex model. Debugging can be tortuous because errors in constraint definitions, masks, conditional operators and the like do not manifest in an obvious way. There are also degrees of freedom outside the control of the model developer: for example, solutions to an ill-posed optimisation problem can differ between solvers, or be affected by floating-point errors and solver tolerances.
From a readability perspective, it would be great to use in an assertion the printout you get returned from the constraint construction e.g.
```
[SIMPLICITY, HEATER, 2010]: 1.0 AccumulatedNewCapacity[SIMPLICITY, HEATER, 2010] - 1.0 NewCapacity[SIMPLICITY, HEATER, 2010]
```
However, this is a little brittle and could result in tests failing if, for example, the coefficient precision were updated or the ordering of terms in the constraint were changed. From a mathematical perspective, it doesn't matter in which order the terms of the constraint appear, as long as they are all present with the correct sign and coefficient.
Another option is to manually create the `linopy.Constraint` using a completely separate `linopy.Model` instance, and then assert that the actual and expected constraints are equal. Note that `__eq__` is overridden for many linopy objects and so doesn't allow direct comparison. It would also be good to think about how to use mocking effectively (e.g. to avoid creating variables when testing a constraint).
Example
Here's my new constraint, encapsulated in a function I can call at run time. Now I would like to write a unit test so that I can check that the constraint is correctly generated under different conditions. I might like to check corner cases, such as a technology with an `OperationalLife` of 0, or write a regression test to ensure that behaviour doesn't change as dependencies are updated. And here are some options I have explored for testing:
Fixtures
Some fixtures to set up the parameters needed to test the constraint creation:
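As a sketch of what such fixtures might look like (the names `simple_coords` and `operational_life` and the data values are hypothetical, modelled on the OSeMOSYS-style example above; in a real test suite these functions would be decorated with `@pytest.fixture`):

```python
import pandas as pd

# Hypothetical fixture data for testing constraint creation; the coordinate
# and parameter names mirror the example above and are assumptions.

def simple_coords():
    # minimal coordinates: one region, two technologies, one year
    return [
        pd.Index(["SIMPLICITY"], name="REGION"),
        pd.Index(["HEATER", "BOILER"], name="TECHNOLOGY"),
        pd.Index([2010], name="YEAR"),
    ]

def operational_life():
    # includes the corner case of a technology with an OperationalLife of 0
    return pd.Series({"HEATER": 0, "BOILER": 20}, name="OperationalLife")
```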
Tests
And now the test itself:
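One possible shape for such a test, sketched without solving a model: a stub stands in for the real constraint-building function, and the test checks the generated structure order-insensitively. All names here (`build_my_constraint`, the `terms`/`sign`/`rhs` layout) are hypothetical illustrations, not linopy API:

```python
def build_my_constraint(techs):
    # stub standing in for the real constraint builder:
    # AccumulatedNewCapacity - NewCapacity = 0 for each technology
    return {
        t: {
            "terms": {f"AccumulatedNewCapacity[{t}]": 1.0,
                      f"NewCapacity[{t}]": -1.0},
            "sign": "=",
            "rhs": 0,
        }
        for t in techs
    }

def test_constraint_structure():
    actual = build_my_constraint(["HEATER", "BOILER"])
    expected = {
        "terms": {"AccumulatedNewCapacity[HEATER]": 1.0,
                  "NewCapacity[HEATER]": -1.0},
        "sign": "=",
        "rhs": 0,
    }
    # dict comparison is order-insensitive, unlike a string comparison
    assert actual["HEATER"] == expected

test_constraint_structure()
```

The same pattern could be used against a real linopy `Constraint` once structural accessors or an `assert_constraint_equal` helper are available.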