NREL / floris

A controls-oriented engineering wake model.
http://nrel.github.io/floris
BSD 3-Clause "New" or "Revised" License

`test_random_search_layout_opt` is failing locally on my machine but not on github #936

Closed · paulf81 closed this issue 1 month ago

paulf81 commented 1 month ago


I think this must be due to a difference in package versions or OS, but on the current develop branch (as of July 4), running test_random_search_layout_opt produces the error:

/Users/pfleming/Projects/FLORIS/floris/tests/reg_tests/random_search_layout_opt_regression_test.py::test_random_search_layout_opt failed: sample_inputs_fixture = <tests.conftest.SampleInputs object at 0x7fefe38b8640>

    def test_random_search_layout_opt(sample_inputs_fixture):
        """
        The SciPy optimization method optimizes turbine layout using SciPy's minimize method. This test
        compares the optimization results from the SciPy layout optimization for a simple farm with a
        simple wind rose to stored baseline results.
        """
        sample_inputs_fixture.core["wake"]["model_strings"]["velocity_model"] = VELOCITY_MODEL
        sample_inputs_fixture.core["wake"]["model_strings"]["deflection_model"] = DEFLECTION_MODEL

        boundaries = [(0.0, 0.0), (0.0, 1000.0), (1000.0, 1000.0), (1000.0, 0.0), (0.0, 0.0)]

        fmodel = FlorisModel(sample_inputs_fixture.core)
        wd_array = np.arange(0, 360.0, 5.0)
        ws_array = 8.0 * np.ones_like(wd_array)

        wind_rose = WindRose(
            wind_directions=wd_array,
            wind_speeds=ws_array,
            ti_table=0.1,
        )
        D = 126.0 # Rotor diameter for the NREL 5 MW
        fmodel.set(
            layout_x=[0.0, 5 * D, 10 * D],
            layout_y=[0.0, 0.0, 0.0],
            wind_data=wind_rose
        )

        layout_opt = LayoutOptimizationRandomSearch(
            fmodel=fmodel,
            boundaries=boundaries,
            min_dist_D=5,
            seconds_per_iteration=1,
            total_optimization_seconds=1,
            use_dist_based_init=False,
            random_seed=0,
        )
        sol = layout_opt.optimize()
        optimized_aep = sol[0]
        locations_opt = np.array([sol[1], sol[2]])

        if DEBUG:
            print(locations_opt)
            print(optimized_aep)

>       assert_results_arrays(locations_opt, locations_baseline_aep)

tests/reg_tests/random_search_layout_opt_regression_test.py:79: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

test = array([[   0.        ,  440.07873658, 1260.        ],
       [   0.        ,  563.92108396,    0.        ]])
baseline = array([[   0.        ,  619.07183266, 1260.        ],
       [   0.        ,  499.88056089,    0.        ]])

    def assert_results_arrays(test: np.array, baseline: np.array):
        if np.shape(test) != np.shape(baseline):
            raise ValueError("test and baseline results have mismatched shapes.")

        for test_dim0, baseline_dim0 in zip(test, baseline):
            for test_dim1, baseline_dim1 in zip(test_dim0, baseline_dim0):
>               assert np.allclose(test_dim1, baseline_dim1)
E               assert False
E                +  where False = <function allclose at 0x7fefe0dd2fb0>(440.07873657855356, 619.07183266)
E                +    where <function allclose at 0x7fefe0dd2fb0> = np.allclose

tests/conftest.py:28: AssertionError

How to reproduce

Run test_random_search_layout_opt in develop on local machine

System Information

Python 3.10.4 (main, Mar 31 2022, 03:38:35) [Clang 12.0.0 ] on darwin
numpy==1.26.4
pandas==2.1.4
misi9170 commented 1 month ago

Interesting; presumably this was introduced when I updated the reg test comparison values in #934 .

Tests pass for me locally, with the following package versions:

python = 3.10.6
numpy = 1.25.1
pandas = 1.5.0
scipy = 1.11.1

@paulf81 , if you switch the DEBUG flag to True at the top of the file, what gets printed out for locations_opt?

paulf81 commented 1 month ago

Ah, you're right, this appears to be a precision issue. I can only spot it when stepping through assert_results_arrays in the debugger, but, for example, the test now passes if I change the final checks to:

    np.allclose(locations_opt, locations_baseline_aep, atol=1e-1)
    assert np.abs((optimized_aep - baseline_aep)/baseline_aep) < 0.01

But as is, both are just different enough to fail.
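For reference, plugging the numbers from the traceback and DEBUG output above into the relaxed checks illustrates the gap: the default np.allclose tolerances (rtol=1e-05, atol=1e-08) reject the location mismatch, while the relative AEP difference is well under 1%. (Values copied from the logs in this thread; this is just a standalone sketch of the comparison, not the test itself.)

```python
import numpy as np

# Arrays and AEP values reported in the traceback / DEBUG output above
locations_opt = np.array([[0.0, 440.07873658, 1260.0],
                          [0.0, 563.92108396, 0.0]])
locations_baseline = np.array([[0.0, 619.07183266, 1260.0],
                               [0.0, 499.88056089, 0.0]])
optimized_aep = 44841782747.77768
baseline_aep = 44798828639.17205

# Element-wise comparison at np.allclose defaults fails on the locations
locations_match = np.allclose(locations_opt, locations_baseline)
print(locations_match)  # False: entries differ by ~100 m

# ...but the relative AEP difference is under 1%
aep_close = abs((optimized_aep - baseline_aep) / baseline_aep) < 0.01
print(aep_close)  # True: relative difference is ~1e-3
```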

misi9170 commented 1 month ago

OK, I think we could switch to using np.allclose instead of assert_results_arrays() to give us the tolerance options (or, for better future-proofing, add optional atol and rtol keyword arguments to assert_results_arrays())?
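A minimal sketch of the second option, keeping the loop structure shown in the traceback above and defaulting the new keyword arguments to np.allclose's own defaults so existing call sites are unaffected (the signature and defaults here are an assumption, not the merged change):

```python
import numpy as np

def assert_results_arrays(test: np.ndarray, baseline: np.ndarray,
                          atol: float = 1e-8, rtol: float = 1e-5):
    """Compare two 2D result arrays element-wise.

    atol/rtol are passed through to np.allclose; their defaults match
    np.allclose's defaults, so behavior is unchanged unless a caller
    explicitly loosens the tolerance.
    """
    if np.shape(test) != np.shape(baseline):
        raise ValueError("test and baseline results have mismatched shapes.")

    for test_dim0, baseline_dim0 in zip(test, baseline):
        for test_dim1, baseline_dim1 in zip(test_dim0, baseline_dim0):
            assert np.allclose(test_dim1, baseline_dim1, atol=atol, rtol=rtol)
```

A test that needs a looser comparison could then call, e.g., assert_results_arrays(locations_opt, locations_baseline_aep, atol=1e-1) without affecting the stricter regression tests.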

paulf81 commented 1 month ago

Hi @misi9170, it turns out I was somewhat wrong about this: the answers are actually quite different when I don't run in debug mode.

I changed the DEBUG output to:

print(locations_baseline_aep)
print(locations_opt)
print('----------------')
print(baseline_aep)
print(optimized_aep)

If I run the test normally this is:

[[   0.          619.07183266 1260.        ]
 [   0.          499.88056089    0.        ]]
[[   0.          440.07873658 1260.        ]
 [   0.          563.92108396    0.        ]]
----------------
44798828639.17205
44841782747.77768

But if I run it in debug mode:

[[   0.          619.07183266 1260.        ]
 [   0.          499.88056089    0.        ]]
[[   0.          619.07183266 1260.        ]
 [   0.          499.88056089    0.        ]]
----------------
44798828639.17205
44798828639.17205

So I get matching results when I run in debug mode, but quite different ones if I just run:

pytest -rA tests/

misi9170 commented 1 month ago

Closed by #940