NOAA-ORR-ERD / PyGnome

The General NOAA Operational Modeling Environment
https://gnome.orr.noaa.gov/doc/pygnome/index.html

Errors trying to build the db for OilLibrary #64

Open
Brenosalv opened this issue 4 years ago

Brenosalv commented 4 years ago

Hello. Following the tutorial to install the Oil Library, I found some errors while building the database. I don't know why it isn't building properly, and I would like to know how I can solve this issue. I'm attaching the log file. I'll be waiting for a response. Thanks in advance. DB_build_Log.txt
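
For context, a minimal sketch of the build sequence the tutorial of that era described (the exact commands are an assumption and may vary between OilLibrary versions; the database build runs as part of the setup step):

cd OilLibrary
conda install --file conda_requirements.txt   # assumed dependency file, as in the PyGnome repo
python setup.py develop                       # builds the SQLite oil database during setup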

ChrisBarker-NOAA commented 4 years ago

I see a bunch of warnings, but no errors -- that is expected.

Is it working after install?

try:

pytest --pyargs oil_library

in the environment that you installed it in.
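
(A note on the flag: --pyargs tells pytest to interpret the argument as an importable package name rather than a file path, so this runs the test suite that ships with the installed oil_library package from any directory. A sketch, assuming a conda environment named gnome -- the environment name here is only an example:)

conda activate gnome
pytest --pyargs oil_library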

Brenosalv commented 4 years ago

First of all, thank you for your response. Following the pyGNOME installation tutorial, I ran into some problems (see the command sketch below):

1 - When I run py.test in the OilLibrary directory, I believe the tutorial expects only passed items, but there are some skipped ones. Is that OK? [image: When I run py.test in OilLibrary directory.PNG]

2 - I continued with the tutorial and ran py.test in the unit_tests directory; the result is below. Is it OK to continue with the skipped and xfailed items? [image: When I run py.test in unit_tests directory.PNG]

3 - I then ran py.test --runslow and some errors appeared, as you can see below: [image: When I run py.test --runslow.PNG] 9 errors in total: [image: py.test --runslow result.PNG]

Could you help me solve this problem? Thanks in advance.
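
(For reference, the three invocations described above as a command sketch; the paths are assumptions based on the repository layout the tutorial uses:)

# 1 - test the installed oil database package
cd OilLibrary
py.test

# 2 - run the py_gnome unit tests
cd ../py_gnome/tests/unit_tests
py.test

# 3 - the same tests, including those marked "slow"
py.test --runslow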

ChrisBarker-NOAA commented 4 years ago

Yes, you can ignore any warnings, known fails or skipped tests.

As for the --runslow errors -- your attachment didn't end up in the issue, so I can't see what they are. But I confess that we don't run them on our CI, so sometimes they fail without us knowing. You should be fine if most of them pass and you don't get any fatal failures from the regular tests.

I'll go check on the --runslow issue now anyway, though.
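
(A side note for anyone wondering how py.test --runslow works: it is not a built-in pytest flag but one registered by the project's conftest.py. A minimal sketch of the standard pattern from the pytest documentation -- PyGnome's actual conftest may differ:)

import pytest

def pytest_addoption(parser):
    # register the custom command-line flag
    parser.addoption("--runslow", action="store_true", default=False,
                     help="run tests marked as slow")

def pytest_configure(config):
    # registering the mark also silences PytestUnknownMarkWarning
    config.addinivalue_line("markers", "slow: mark test as slow to run")

def pytest_collection_modifyitems(config, items):
    # unless --runslow is given, attach a skip marker to every @pytest.mark.slow test
    if config.getoption("--runslow"):
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)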

Brenosalv commented 4 years ago

I'm sending you the log file; it's attached below. Let me know if it helps you track down the issue. Since these are not just failures or skipped items but errors, I believe it isn't possible to continue with the tutorial after the tests, so I'll wait for your response.

============================= test session starts =============================
platform win32 -- Python 2.7.15, pytest-4.6.4, py-1.9.0, pluggy-0.12.0
rootdir: C:\Users\Breno\PyGnome\py_gnome
plugins: cov-2.10.1, timeout-1.4.2
collected 1302 items

tests\unit_tests\test_array_types.py ....... [ 0%]
tests\unit_tests\test_cy_current_mover.py ...... [ 0%]
tests\unit_tests\test_gnomeobject.py ............. [ 1%]
tests\unit_tests\test_grids.py .. [ 2%]
tests\unit_tests\test_imports.py ....... [ 2%]
tests\unit_tests\test_model.py ..............................x.......... [ 5%]
........... [ 6%]
tests\unit_tests\test_model_multiproc.py sssssssssssssss [ 7%]
tests\unit_tests\test_save_load.py ..................................... [ 10%]
.................................. [ 13%]
tests\unit_tests\test_tideflats.py .... [ 13%]
tests\unit_tests\test_ucode_filenames.py ....... [ 14%]
tests\unit_tests\test_update_from_dict.py .... [ 14%]
tests\unit_tests\test_cy\test_cy_cats_mover.py ...... [ 14%]
tests\unit_tests\test_cy\test_cy_component_mover.py ....... [ 15%]
tests\unit_tests\test_cy\test_cy_currentcycle_mover.py .. [ 15%]
tests\unit_tests\test_cy\test_cy_grid_map.py ... [ 15%]
tests\unit_tests\test_cy\test_cy_gridcurrent_mover.py .......... [ 16%]
tests\unit_tests\test_cy\test_cy_grids.py .. [ 16%]
tests\unit_tests\test_cy\test_cy_gridwind_mover.py ..... [ 17%]
tests\unit_tests\test_cy\test_cy_helpers.py ... [ 17%]
tests\unit_tests\test_cy\test_cy_land_check.py ......................... [ 19%]
......... [ 19%]
tests\unit_tests\test_cy\test_cy_mover.py ..... [ 20%]
tests\unit_tests\test_cy\test_cy_ossm_time.py ................. [ 21%]
tests\unit_tests\test_cy\test_cy_random_mover.py ...... [ 22%]
tests\unit_tests\test_cy\test_cy_random_vertical_mover.py ........ [ 22%]
tests\unit_tests\test_cy\test_cy_rise_velocity_mover.py ... [ 22%]
tests\unit_tests\test_cy\test_cy_shio_time.py ........ [ 23%]
tests\unit_tests\test_cy\test_cy_wind_mover.py ........... [ 24%]
tests\unit_tests\test_environment\test_env_obj_base.py ................. [ 25%]
[ 25%]
tests\unit_tests\test_environment\test_environment.py .................. [ 27%]
................... [ 28%]
tests\unit_tests\test_environment\test_grid.py ...... [ 29%]
tests\unit_tests\test_environment\test_running_average.py ........ [ 29%]
tests\unit_tests\test_environment\test_tide.py ..... [ 30%]
tests\unit_tests\test_environment\test_waves.py ........................ [ 31%]
............................... [ 34%]
tests\unit_tests\test_environment\test_wind.py ......................... [ 36%]
........ [ 36%]
tests\unit_tests\test_maps\test_map.py ................................. [ 39%]
..................xx... [ 41%]
tests\unit_tests\test_maps\test_tideflat_map.py ....... [ 41%]
tests\unit_tests\test_movers\test_cats_mover.py ............ [ 42%]
tests\unit_tests\test_movers\test_component_mover.py ............. [ 43%]
tests\unit_tests\test_movers\test_currentcycle_mover.py ........... [ 44%]
tests\unit_tests\test_movers\test_gridcurrent_mover.py .......... [ 45%]
tests\unit_tests\test_movers\test_gridwind_mover.py ......... [ 45%]
tests\unit_tests\test_movers\test_ice_mover.py ..s. [ 46%]
tests\unit_tests\test_movers\test_ice_wind_mover.py .... [ 46%]
tests\unit_tests\test_movers\test_mover.py .......... [ 47%]
tests\unit_tests\test_movers\test_out_of_time_interval.py . [ 47%]
tests\unit_tests\test_movers\test_py_mover.py .......... [ 48%]
tests\unit_tests\test_movers\test_random_mover.py ................ [ 49%]
tests\unit_tests\test_movers\test_random_vertical_mover.py ..s... [ 49%]
tests\unit_tests\test_movers\test_ship_drift_mover.py ....... [ 50%]
tests\unit_tests\test_movers\test_simple_mover.py ... [ 50%]
tests\unit_tests\test_movers\test_vertical_movers.py .... [ 50%]
tests\unit_tests\test_movers\test_wind_mover.py ..................x..... [ 52%]
.... [ 52%]
tests\unit_tests\test_outputters\test_current_outputter.py .. [ 53%]
tests\unit_tests\test_outputters\test_geojson.py ...... [ 53%]
tests\unit_tests\test_outputters\test_ice_image_outputter.py ...... [ 54%]
tests\unit_tests\test_outputters\test_ice_json_outputter.py .. [ 54%]
tests\unit_tests\test_outputters\test_ice_outputter.py .. [ 54%]
tests\unit_tests\test_outputters\test_json.py . [ 54%]
tests\unit_tests\test_outputters\test_kmz.py ...... [ 54%]
tests\unit_tests\test_outputters\test_netcdf_outputter.py ........FFFFFF [ 55%]
FFF... [ 56%]
tests\unit_tests\test_outputters\test_oil_budget_outputter.py .... [ 56%]
tests\unit_tests\test_outputters\test_outputter.py ....... [ 57%]
tests\unit_tests\test_outputters\test_renderer.py ............. [ 58%]
tests\unit_tests\test_outputters\test_shape.py .... [ 58%]
tests\unit_tests\test_outputters\test_weathering_outputter.py .. [ 58%]
tests\unit_tests\test_persist\test_extend_colander.py ......... [ 59%]
tests\unit_tests\test_persist\test_model_save_load.py ........... [ 60%]
tests\unit_tests\test_persist\test_schema_decorator.py . [ 60%]
tests\unit_tests\test_spill\test_le_data.py .... [ 60%]
tests\unit_tests\test_spill\test_release.py ............ [ 61%]
tests\unit_tests\test_spill\test_release_in_model.py .... [ 61%]
tests\unit_tests\test_spill\test_spill.py ....xxxxx [ 62%]
tests\unit_tests\test_spill\test_substance.py ............. [ 63%]
tests\unit_tests\test_utilities\test_appearance.py .. [ 63%]
tests\unit_tests\test_utilities\test_cache.py ......... [ 64%]
tests\unit_tests\test_utilities\test_colormaps.py ..... [ 64%]
tests\unit_tests\test_utilities\test_get_mem_use.py .X [ 64%]
tests\unit_tests\test_utilities\test_graphs.py ss [ 65%]
tests\unit_tests\test_utilities\test_helpers_convert.py ............ [ 66%]
tests\unit_tests\test_utilities\test_inf_datetime.py ................... [ 67%]
....... [ 68%]
tests\unit_tests\test_utilities\test_map_canvas.py ....... [ 68%]
tests\unit_tests\test_utilities\test_ordered_collection.py ............. [ 69%]
.................... [ 71%]
tests\unit_tests\test_utilities\test_projections.py .................... [ 72%]
....................................................... [ 76%]
tests\unit_tests\test_utilities\test_rand.py ......... [ 77%]
tests\unit_tests\test_utilities\test_remote_data.py .. [ 77%]
tests\unit_tests\test_utilities\test_time_utils.py ..................... [ 79%]
....... [ 79%]
tests\unit_tests\test_utilities\test_timeseries.py ............. [ 80%]
tests\unit_tests\test_utilities\test_transforms.py ....... [ 81%]
tests\unit_tests\test_utilities\test_weathering_algorithms.py ....... [ 81%]
tests\unit_tests\test_weatherers\test_bio_degradation.py sssssssssss [ 82%]
tests\unit_tests\test_weatherers\test_cleanup.py ....................... [ 84%]
......... [ 85%]
tests\unit_tests\test_weatherers\test_dispersion.py ....xxx.s [ 85%]
tests\unit_tests\test_weatherers\test_dissolution.py ..s....xxXxxxxxxxxx [ 87%]
xxxxxx. [ 87%]
tests\unit_tests\test_weatherers\test_emulsification.py .......s [ 88%]
tests\unit_tests\test_weatherers\test_evaporation.py ...s.....s [ 89%]
tests\unit_tests\test_weatherers\test_manual_beaching.py ........s [ 90%]
tests\unit_tests\test_weatherers\test_roc.py .....s................ [ 91%]
tests\unit_tests\test_weatherers\test_spreading.py .........sX [ 92%]
tests\unit_tests\test_weatherers\test_weatherer.py ... [ 92%]
tests\unit_tests\test_weatherers\test_weathering_data.py ........... [ 93%]
tests\unit_tests\test_scripting\test_time_utils.py ............ [ 94%]
tests\unit_tests\test_tamoc\test_tamoc.py ssssss [ 95%]
tests\unit_tests\test_tamoc\test_tamoc_spill.py . [ 95%]
tests\unit_tests\test_utilities\test_file_tools\test_filescanner.py .... [ 95%]
......... [ 96%]
tests\unit_tests\test_utilities\test_file_tools\test_haz_files.py ...... [ 96%]
........ [ 97%]
tests\unit_tests\test_utilities\test_geometry\test_poly_clockwise.py ... [ 97%]
... [ 97%]
tests\unit_tests\test_utilities\test_geometry\test_polygons.py ......... [ 98%]
.......... [ 99%]
tests\unit_tests\test_utilities\test_geometry\test_thin_polygons.py .... [ 99%]
...... [ 99%]
tests\unit_tests\test_utilities\test_save_update\test_save_update.py .. [100%]

================================== FAILURES ===================================
______________________ test_read_standard_arrays[1-True] ______________________

model = <gnome.model.Model object at 0x0000000012E26128>, output_ts_factor = 1
use_time = True

@pytest.mark.slow
@pytest.mark.parametrize(("output_ts_factor", "use_time"),
                         [(1, True), (1, False),
                          (2.4, True), (2.4, False),
                          (3, True), (3, False)])
def test_read_standard_arrays(model, output_ts_factor, use_time):
    """
    tests the data returned by read_data is correct when `which_data` flag is
    'standard'. It is only reading the standard_arrays

    Test will only verify the data when time_stamp of model matches the
    time_stamp of data written out. output_ts_factor means not all data is
    written out.

    The use_time flag says data is read by timestamp. If false, then it is read
    by step number - either way, the result should be the same
    """
    model.rewind()

    # check contents of netcdf File at multiple time steps (should only be 1!)
    o_put = [model.outputters[outputter.id] for outputter in
             model.outputters if isinstance(outputter, NetCDFOutput)][0]
    o_put.output_timestep = timedelta(seconds=model.time_step *
                                      output_ts_factor)
    _run_model(model)

    atol = 1e-5
    rtol = 0

    uncertain = False
    for file_ in (o_put.filename, o_put._u_filename):
        _found_a_matching_time = False

        for idx, step in enumerate(range(0, model.num_time_steps,
                                   int(ceil(output_ts_factor)))):
            scp = model._cache.load_timestep(step)
            curr_time = scp.LE('current_time_stamp', uncertain)
            if use_time:
                (nc_data, weathering_data) = NetCDFOutput.read_data(file_,
                                                                    curr_time)
            else:
                (nc_data, weathering_data) = NetCDFOutput.read_data(file_,
                                                                    index=idx)

            # check time
            if curr_time == nc_data['current_time_stamp'].item():
                _found_a_matching_time = True

                # check standard variables
                assert np.allclose(scp.LE('positions', uncertain),
                                   nc_data['positions'], rtol, atol)
                assert np.all(scp.LE('spill_num', uncertain)[:] ==
                              nc_data['spill_num'])
                assert np.all(scp.LE('status_codes', uncertain)[:] ==
                              nc_data['status_codes'])

                # flag variable is not currently set or checked

                if 'mass' in scp.LE_data:
                    assert np.all(scp.LE('mass', uncertain)[:] ==
                                  nc_data['mass'])

                if 'age' in scp.LE_data:
                    assert np.all(scp.LE('age', uncertain)[:] ==
                                  nc_data['age'])

                if uncertain:
                    sc = scp.items()[1]
                else:
                    sc = scp.items()[0]
>               assert sc.mass_balance == weathering_data

E AssertionError: assert {'amount_rele...ed': 0.0, ...} == {'amount_relea...e=1e+20), ...}
E Omitting 7 identical items, use -vv to show
E Right contains 1 more item:
E {u'non_weathering': masked}
E Use -v to get the full diff

tests\unit_tests\test_outputters\test_netcdf_outputter.py:416: AssertionError
______________________ test_read_standard_arrays[1-False] _____________________

model = <gnome.model.Model object at 0x0000000012E1FD30>, output_ts_factor = 1
use_time = False

@pytest.mark.slow
@pytest.mark.parametrize(("output_ts_factor", "use_time"),
                         [(1, True), (1, False),
                          (2.4, True), (2.4, False),
                          (3, True), (3, False)])
def test_read_standard_arrays(model, output_ts_factor, use_time):
    """
    tests the data returned by read_data is correct when `which_data` flag is
    'standard'. It is only reading the standard_arrays

    Test will only verify the data when time_stamp of model matches the
    time_stamp of data written out. output_ts_factor means not all data is
    written out.

    The use_time flag says data is read by timestamp. If false, then it is read
    by step number - either way, the result should be the same
    """
    model.rewind()

    # check contents of netcdf File at multiple time steps (should only be 1!)
    o_put = [model.outputters[outputter.id] for outputter in
             model.outputters if isinstance(outputter, NetCDFOutput)][0]
    o_put.output_timestep = timedelta(seconds=model.time_step *
                                      output_ts_factor)
    _run_model(model)

    atol = 1e-5
    rtol = 0

    uncertain = False
    for file_ in (o_put.filename, o_put._u_filename):
        _found_a_matching_time = False

        for idx, step in enumerate(range(0, model.num_time_steps,
                                   int(ceil(output_ts_factor)))):
            scp = model._cache.load_timestep(step)
            curr_time = scp.LE('current_time_stamp', uncertain)
            if use_time:
                (nc_data, weathering_data) = NetCDFOutput.read_data(file_,
                                                                    curr_time)
            else:
                (nc_data, weathering_data) = NetCDFOutput.read_data(file_,
                                                                    index=idx)

            # check time
            if curr_time == nc_data['current_time_stamp'].item():
                _found_a_matching_time = True

                # check standard variables
                assert np.allclose(scp.LE('positions', uncertain),
                                   nc_data['positions'], rtol, atol)
                assert np.all(scp.LE('spill_num', uncertain)[:] ==
                              nc_data['spill_num'])
                assert np.all(scp.LE('status_codes', uncertain)[:] ==
                              nc_data['status_codes'])

                # flag variable is not currently set or checked

                if 'mass' in scp.LE_data:
                    assert np.all(scp.LE('mass', uncertain)[:] ==
                                  nc_data['mass'])

                if 'age' in scp.LE_data:
                    assert np.all(scp.LE('age', uncertain)[:] ==
                                  nc_data['age'])

                if uncertain:
                    sc = scp.items()[1]
                else:
                    sc = scp.items()[0]
>               assert sc.mass_balance == weathering_data

E AssertionError: assert {'amount_rele...ed': 0.0, ...} == {'amount_relea...e=1e+20), ...}
E Omitting 7 identical items, use -vv to show
E Right contains 1 more item:
E {u'non_weathering': masked}
E Use -v to get the full diff

tests\unit_tests\test_outputters\test_netcdf_outputter.py:416: AssertionError
_____________________ test_read_standard_arrays[2.4-True] _____________________

model = <gnome.model.Model object at 0x00000000133AF828>, output_ts_factor = 2.4
use_time = True

@pytest.mark.slow
@pytest.mark.parametrize(("output_ts_factor", "use_time"),
                         [(1, True), (1, False),
                          (2.4, True), (2.4, False),
                          (3, True), (3, False)])
def test_read_standard_arrays(model, output_ts_factor, use_time):
    """
    tests the data returned by read_data is correct when `which_data` flag is
    'standard'. It is only reading the standard_arrays

    Test will only verify the data when time_stamp of model matches the
    time_stamp of data written out. output_ts_factor means not all data is
    written out.

    The use_time flag says data is read by timestamp. If false, then it is read
    by step number - either way, the result should be the same
    """
    model.rewind()

    # check contents of netcdf File at multiple time steps (should only be 1!)
    o_put = [model.outputters[outputter.id] for outputter in
             model.outputters if isinstance(outputter, NetCDFOutput)][0]
    o_put.output_timestep = timedelta(seconds=model.time_step *
                                      output_ts_factor)
    _run_model(model)

    atol = 1e-5
    rtol = 0

    uncertain = False
    for file_ in (o_put.filename, o_put._u_filename):
        _found_a_matching_time = False

        for idx, step in enumerate(range(0, model.num_time_steps,
                                   int(ceil(output_ts_factor)))):
            scp = model._cache.load_timestep(step)
            curr_time = scp.LE('current_time_stamp', uncertain)
            if use_time:
                (nc_data, weathering_data) = NetCDFOutput.read_data(file_,
                                                                    curr_time)
            else:
                (nc_data, weathering_data) = NetCDFOutput.read_data(file_,
                                                                    index=idx)

            # check time
            if curr_time == nc_data['current_time_stamp'].item():
                _found_a_matching_time = True

                # check standard variables
                assert np.allclose(scp.LE('positions', uncertain),
                                   nc_data['positions'], rtol, atol)
                assert np.all(scp.LE('spill_num', uncertain)[:] ==
                              nc_data['spill_num'])
                assert np.all(scp.LE('status_codes', uncertain)[:] ==
                              nc_data['status_codes'])

                # flag variable is not currently set or checked

                if 'mass' in scp.LE_data:
                    assert np.all(scp.LE('mass', uncertain)[:] ==
                                  nc_data['mass'])

                if 'age' in scp.LE_data:
                    assert np.all(scp.LE('age', uncertain)[:] ==
                                  nc_data['age'])

                if uncertain:
                    sc = scp.items()[1]
                else:
                    sc = scp.items()[0]
>               assert sc.mass_balance == weathering_data

E AssertionError: assert {'amount_rele...ed': 0.0, ...} == {'amount_relea...e=1e+20), ...}
E Omitting 7 identical items, use -vv to show
E Right contains 1 more item:
E {u'non_weathering': masked}
E Use -v to get the full diff

tests\unit_tests\test_outputters\test_netcdf_outputter.py:416: AssertionError
____________________ test_read_standard_arrays[2.4-False] _____________________

model = <gnome.model.Model object at 0x0000000014222358>, output_ts_factor = 2.4
use_time = False

@pytest.mark.slow
@pytest.mark.parametrize(("output_ts_factor", "use_time"),
                         [(1, True), (1, False),
                          (2.4, True), (2.4, False),
                          (3, True), (3, False)])
def test_read_standard_arrays(model, output_ts_factor, use_time):
    """
    tests the data returned by read_data is correct when `which_data` flag is
    'standard'. It is only reading the standard_arrays

    Test will only verify the data when time_stamp of model matches the
    time_stamp of data written out. output_ts_factor means not all data is
    written out.

    The use_time flag says data is read by timestamp. If false, then it is read
    by step number - either way, the result should be the same
    """
    model.rewind()

    # check contents of netcdf File at multiple time steps (should only be 1!)
    o_put = [model.outputters[outputter.id] for outputter in
             model.outputters if isinstance(outputter, NetCDFOutput)][0]
    o_put.output_timestep = timedelta(seconds=model.time_step *
                                      output_ts_factor)
    _run_model(model)

    atol = 1e-5
    rtol = 0

    uncertain = False
    for file_ in (o_put.filename, o_put._u_filename):
        _found_a_matching_time = False

        for idx, step in enumerate(range(0, model.num_time_steps,
                                   int(ceil(output_ts_factor)))):
            scp = model._cache.load_timestep(step)
            curr_time = scp.LE('current_time_stamp', uncertain)
            if use_time:
                (nc_data, weathering_data) = NetCDFOutput.read_data(file_,
                                                                    curr_time)
            else:
                (nc_data, weathering_data) = NetCDFOutput.read_data(file_,
                                                                    index=idx)

            # check time
            if curr_time == nc_data['current_time_stamp'].item():
                _found_a_matching_time = True

                # check standard variables
                assert np.allclose(scp.LE('positions', uncertain),
                                   nc_data['positions'], rtol, atol)
                assert np.all(scp.LE('spill_num', uncertain)[:] ==
                              nc_data['spill_num'])
                assert np.all(scp.LE('status_codes', uncertain)[:] ==
                              nc_data['status_codes'])

                # flag variable is not currently set or checked

                if 'mass' in scp.LE_data:
                    assert np.all(scp.LE('mass', uncertain)[:] ==
                                  nc_data['mass'])

                if 'age' in scp.LE_data:
                    assert np.all(scp.LE('age', uncertain)[:] ==
                                  nc_data['age'])

                if uncertain:
                    sc = scp.items()[1]
                else:
                    sc = scp.items()[0]
>               assert sc.mass_balance == weathering_data

E AssertionError: assert {'amount_rele...ed': 0.0, ...} == {'amount_relea...e=1e+20), ...}
E Omitting 7 identical items, use -vv to show
E Right contains 1 more item:
E {u'non_weathering': masked}
E Use -v to get the full diff

tests\unit_tests\test_outputters\test_netcdf_outputter.py:416: AssertionError
------------------------------ Captured log call ------------------------------
WARNING gnome.maps.map.MapFromBNA:map.py:314 All particles left the map this timestep.
WARNING gnome.maps.map.MapFromBNA:map.py:314 All particles left the map this timestep.
WARNING gnome.maps.map.MapFromBNA:map.py:314 All particles left the map this timestep.
WARNING gnome.maps.map.MapFromBNA:map.py:314 All particles left the map this timestep.
______________________ test_read_standard_arrays[3-True] ______________________

model = <gnome.model.Model object at 0x0000000010DFA588>, output_ts_factor = 3
use_time = True

@pytest.mark.slow
@pytest.mark.parametrize(("output_ts_factor", "use_time"),
                         [(1, True), (1, False),
                          (2.4, True), (2.4, False),
                          (3, True), (3, False)])
def test_read_standard_arrays(model, output_ts_factor, use_time):
    """
    tests the data returned by read_data is correct when `which_data` flag is
    'standard'. It is only reading the standard_arrays

    Test will only verify the data when time_stamp of model matches the
    time_stamp of data written out. output_ts_factor means not all data is
    written out.

    The use_time flag says data is read by timestamp. If false, then it is read
    by step number - either way, the result should be the same
    """
    model.rewind()

    # check contents of netcdf File at multiple time steps (should only be 1!)
    o_put = [model.outputters[outputter.id] for outputter in
             model.outputters if isinstance(outputter, NetCDFOutput)][0]
    o_put.output_timestep = timedelta(seconds=model.time_step *
                                      output_ts_factor)
    _run_model(model)

    atol = 1e-5
    rtol = 0

    uncertain = False
    for file_ in (o_put.filename, o_put._u_filename):
        _found_a_matching_time = False

        for idx, step in enumerate(range(0, model.num_time_steps,
                                   int(ceil(output_ts_factor)))):
            scp = model._cache.load_timestep(step)
            curr_time = scp.LE('current_time_stamp', uncertain)
            if use_time:
                (nc_data, weathering_data) = NetCDFOutput.read_data(file_,
                                                                    curr_time)
            else:
                (nc_data, weathering_data) = NetCDFOutput.read_data(file_,
                                                                    index=idx)

            # check time
            if curr_time == nc_data['current_time_stamp'].item():
                _found_a_matching_time = True

                # check standard variables
                assert np.allclose(scp.LE('positions', uncertain),
                                   nc_data['positions'], rtol, atol)
                assert np.all(scp.LE('spill_num', uncertain)[:] ==
                              nc_data['spill_num'])
                assert np.all(scp.LE('status_codes', uncertain)[:] ==
                              nc_data['status_codes'])

                # flag variable is not currently set or checked

                if 'mass' in scp.LE_data:
                    assert np.all(scp.LE('mass', uncertain)[:] ==
                                  nc_data['mass'])

                if 'age' in scp.LE_data:
                    assert np.all(scp.LE('age', uncertain)[:] ==
                                  nc_data['age'])

                if uncertain:
                    sc = scp.items()[1]
                else:
                    sc = scp.items()[0]
>               assert sc.mass_balance == weathering_data

E AssertionError: assert {'amount_rele...ed': 0.0, ...} == {'amount_relea...e=1e+20), ...}
E Omitting 7 identical items, use -vv to show
E Right contains 1 more item:
E {u'non_weathering': masked}
E Use -v to get the full diff

tests\unit_tests\test_outputters\test_netcdf_outputter.py:416: AssertionError
_____________________ test_read_standard_arrays[3-False] ______________________

model = <gnome.model.Model object at 0x0000000014227828>, output_ts_factor = 3
use_time = False

@pytest.mark.slow
@pytest.mark.parametrize(("output_ts_factor", "use_time"),
                         [(1, True), (1, False),
                          (2.4, True), (2.4, False),
                          (3, True), (3, False)])
def test_read_standard_arrays(model, output_ts_factor, use_time):
    """
    tests the data returned by read_data is correct when `which_data` flag is
    'standard'. It is only reading the standard_arrays

    Test will only verify the data when time_stamp of model matches the
    time_stamp of data written out. output_ts_factor means not all data is
    written out.

    The use_time flag says data is read by timestamp. If false, then it is read
    by step number - either way, the result should be the same
    """
    model.rewind()

    # check contents of netcdf File at multiple time steps (should only be 1!)
    o_put = [model.outputters[outputter.id] for outputter in
             model.outputters if isinstance(outputter, NetCDFOutput)][0]
    o_put.output_timestep = timedelta(seconds=model.time_step *
                                      output_ts_factor)
    _run_model(model)

    atol = 1e-5
    rtol = 0

    uncertain = False
    for file_ in (o_put.filename, o_put._u_filename):
        _found_a_matching_time = False

        for idx, step in enumerate(range(0, model.num_time_steps,
                                   int(ceil(output_ts_factor)))):
            scp = model._cache.load_timestep(step)
            curr_time = scp.LE('current_time_stamp', uncertain)
            if use_time:
                (nc_data, weathering_data) = NetCDFOutput.read_data(file_,
                                                                    curr_time)
            else:
                (nc_data, weathering_data) = NetCDFOutput.read_data(file_,
                                                                    index=idx)

            # check time
            if curr_time == nc_data['current_time_stamp'].item():
                _found_a_matching_time = True

                # check standard variables
                assert np.allclose(scp.LE('positions', uncertain),
                                   nc_data['positions'], rtol, atol)
                assert np.all(scp.LE('spill_num', uncertain)[:] ==
                              nc_data['spill_num'])
                assert np.all(scp.LE('status_codes', uncertain)[:] ==
                              nc_data['status_codes'])

                # flag variable is not currently set or checked

                if 'mass' in scp.LE_data:
                    assert np.all(scp.LE('mass', uncertain)[:] ==
                                  nc_data['mass'])

                if 'age' in scp.LE_data:
                    assert np.all(scp.LE('age', uncertain)[:] ==
                                  nc_data['age'])

                if uncertain:
                    sc = scp.items()[1]
                else:
                    sc = scp.items()[0]
>               assert sc.mass_balance == weathering_data

E AssertionError: assert {'amount_rele...ed': 0.0, ...} == {'amount_relea...e=1e+20), ...}
E Omitting 7 identical items, use -vv to show
E Right contains 1 more item:
E {u'non_weathering': masked}
E Use -v to get the full diff

tests\unit_tests\test_outputters\test_netcdf_outputter.py:416: AssertionError
------------------------------ Captured log call ------------------------------
WARNING gnome.maps.map.MapFromBNA:map.py:314 All particles left the map this timestep.
WARNING gnome.maps.map.MapFromBNA:map.py:314 All particles left the map this timestep.
WARNING gnome.maps.map.MapFromBNA:map.py:314 All particles left the map this timestep.
WARNING gnome.maps.map.MapFromBNA:map.py:314 All particles left the map this timestep.
__________________________ test_read_all_arrays ___________________________

model = <gnome.model.Model object at 0x0000000013D59518>

@pytest.mark.slow
def test_read_all_arrays(model):
    """
    tests the data returned by read_data is correct
    when `which_data` flag is 'all'.
    """
    model.rewind()

    o_put = [model.outputters[outputter.id]
             for outputter in model.outputters
             if isinstance(outputter, NetCDFOutput)][0]

    o_put.which_data = 'all'

    _run_model(model)

    atol = 1e-5
    rtol = 0

    uncertain = False
    for file_ in (o_put.filename, o_put._u_filename):
        _found_a_matching_time = False
        for step in range(model.num_time_steps):
            scp = model._cache.load_timestep(step)
            curr_time = scp.LE('current_time_stamp', uncertain)

            (nc_data, mb) = NetCDFOutput.read_data(file_, curr_time,
                                                   which_data='all')

            if curr_time == nc_data['current_time_stamp'].item():
                _found_a_matching_time = True
                for key in scp.LE_data:
                    if key == 'current_time_stamp':
                        """ already matched """
                        continue
                    elif key == 'positions':
                        assert np.allclose(scp.LE('positions', uncertain),
                                           nc_data['positions'], rtol, atol)
                    elif key == 'mass_balance':
>                       assert scp.LE(key, uncertain) == mb

E AssertionError: assert {'amount_rele...ed': 0.0, ...} == {'amount_relea...e=1e+20), ...}
E Omitting 7 identical items, use -vv to show
E Right contains 1 more item:
E {u'non_weathering': masked}
E Use -v to get the full diff

tests\unit_tests\test_outputters\test_netcdf_outputter.py:472: AssertionError
________________________ test_write_output_post_run[1] ________________________

model = <gnome.model.Model object at 0x000000000C536240>, output_ts_factor = 1

@pytest.mark.slow
@pytest.mark.parametrize("output_ts_factor", [1, 2])
def test_write_output_post_run(model, output_ts_factor):
    """
    Create netcdf file post run from the cache. Under the hood, it is simply
    calling write_output so no need to check the data is correctly written
    test_write_output_standard already checks data is correctly written.

    Instead, make sure if output_timestep is not same as model.time_step,
    then data is output at correct time stamps
    """
    model.rewind()

    o_put = [model.outputters[outputter.id] for outputter in
             model.outputters if isinstance(outputter, NetCDFOutput)][0]
    o_put.which_data = 'standard'
    o_put.output_timestep = timedelta(seconds=model.time_step * output_ts_factor)

    del model.outputters[o_put.id]  # remove from list of outputters

    _run_model(model)

    # clear out old files...
    o_put.clean_output_files()
    assert not os.path.exists(o_put.filename)

    if o_put._u_filename:
        assert (not os.path.exists(o_put._u_filename))

    # now write netcdf output
    o_put.write_output_post_run(model.start_time,
                                model.num_time_steps,
                                spills=model.spills,
                                cache=model._cache,
                                uncertain=model.uncertain)

    assert os.path.exists(o_put.filename)
    if model.uncertain:
        assert os.path.exists(o_put._u_filename)

    uncertain = False
    for file_ in (o_put.filename, o_put._u_filename):
        ix = 0  # index for grabbing record from NetCDF file
        for step in range(0, model.num_time_steps,
                          int(ceil(output_ts_factor))):
            print "step: {0}".format(step)
            scp = model._cache.load_timestep(step)
            curr_time = scp.LE('current_time_stamp', uncertain)

            (nc_data, mb) = NetCDFOutput.read_data(file_, curr_time)
            assert curr_time == nc_data['current_time_stamp'].item()

            # test to make sure data_by_index is consistent with _cached data
            # This is just to double check that getting the data by curr_time
            # does infact give the next consecutive index
            (data_by_index, mb) = NetCDFOutput.read_data(file_, index=ix)
            assert curr_time == data_by_index['current_time_stamp'].item()
>           assert scp.LE('mass_balance', uncertain) == mb

E AssertionError: assert {'amount_rele...ed': 0.0, ...} == {'amount_relea...e=1e+20), ...}
E Omitting 7 identical items, use -vv to show
E Right contains 1 more item:
E {u'non_weathering': masked}
E Use -v to get the full diff

tests\unit_tests\test_outputters\test_netcdf_outputter.py:543: AssertionError
---------------------------- Captured stdout call -----------------------------
step: 0
________________________ test_write_output_post_run[2] ________________________

model = <gnome.model.Model object at 0x0000000014672DA0>, output_ts_factor = 2

@pytest.mark.slow
@pytest.mark.parametrize("output_ts_factor", [1, 2])
def test_write_output_post_run(model, output_ts_factor):
    """
    Create netcdf file post run from the cache. Under the hood, it is simply
    calling write_output so no need to check the data is correctly written
    test_write_output_standard already checks data is correctly written.

    Instead, make sure if output_timestep is not same as model.time_step,
    then data is output at correct time stamps
    """
    model.rewind()

    o_put = [model.outputters[outputter.id] for outputter in
             model.outputters if isinstance(outputter, NetCDFOutput)][0]
    o_put.which_data = 'standard'
    o_put.output_timestep = timedelta(seconds=model.time_step * output_ts_factor)

    del model.outputters[o_put.id]  # remove from list of outputters

    _run_model(model)

    # clear out old files...
    o_put.clean_output_files()
    assert not os.path.exists(o_put.filename)

    if o_put._u_filename:
        assert (not os.path.exists(o_put._u_filename))

    # now write netcdf output
    o_put.write_output_post_run(model.start_time,
                                model.num_time_steps,
                                spills=model.spills,
                                cache=model._cache,
                                uncertain=model.uncertain)

    assert os.path.exists(o_put.filename)
    if model.uncertain:
        assert os.path.exists(o_put._u_filename)

    uncertain = False
    for file_ in (o_put.filename, o_put._u_filename):
        ix = 0  # index for grabbing record from NetCDF file
        for step in range(0, model.num_time_steps,
                          int(ceil(output_ts_factor))):
            print "step: {0}".format(step)
            scp = model._cache.load_timestep(step)
            curr_time = scp.LE('current_time_stamp', uncertain)

            (nc_data, mb) = NetCDFOutput.read_data(file_, curr_time)
            assert curr_time == nc_data['current_time_stamp'].item()

            # test to make sure data_by_index is consistent with _cached data
            # This is just to double check that getting the data by curr_time
            # does infact give the next consecutive index
            (data_by_index, mb) = NetCDFOutput.read_data(file_, index=ix)
            assert curr_time == data_by_index['current_time_stamp'].item()
>           assert scp.LE('mass_balance', uncertain) == mb

E AssertionError: assert {'amount_rele...ed': 0.0, ...} == {'amount_relea...e=1e+20), ...}
E Omitting 7 identical items, use -vv to show
E Right contains 1 more item:
E {u'non_weathering': masked}
E Use -v to get the full diff

tests\unit_tests\test_outputters\test_netcdf_outputter.py:543: AssertionError
---------------------------- Captured stdout call -----------------------------
step: 0
============================== warnings summary ===============================
C:\Users\Breno\miniconda2\envs\gnome\lib\site-packages\_pytest\mark\structures.py:334
  C:\Users\Breno\miniconda2\envs\gnome\lib\site-packages\_pytest\mark\structures.py:334: PytestUnknownMarkWarning: Unknown pytest.mark.slow - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/latest/mark.html
    PytestUnknownMarkWarning,

tests/unit_tests/test_model.py::test_staggered_spills_weathering[delay0]
  C:\Users\Breno\miniconda2\envs\gnome\lib\site-packages\scipy\optimize\minpack.py:787: OptimizeWarning: Covariance of the parameters could not be estimated
    category=OptimizeWarning)

tests/unit_tests/test_weatherers/test_dispersion.py::test_full_run[ABU SAFAH-288.7-63.076]
  C:\Users\Breno\miniconda2\envs\gnome\lib\site-packages\sqlalchemy\orm\session.py:3234: SADeprecationWarning: SessionExtension is deprecated in favor of the SessionEvents listener interface. The Session.extension parameter will be removed in a future release.
    return self.class_(**local_kw)

tests/unit_tests/test_weatherers/test_dispersion.py::test_full_run[ABU SAFAH-288.7-63.076]
  C:\Users\Breno\miniconda2\envs\gnome\lib\site-packages\sqlalchemy\orm\session.py:863: SADeprecationWarning: SessionExtension.before_commit is deprecated. The SessionExtension class will be removed in a future release. Please transition to the @event interface, using @event.listens_for(Session, 'before_commit').
    SessionExtension._adapt_listener(self, ext)

tests/unit_tests/test_weatherers/test_dispersion.py::test_full_run[ABU SAFAH-288.7-63.076]
  C:\Users\Breno\miniconda2\envs\gnome\lib\site-packages\sqlalchemy\orm\session.py:863: SADeprecationWarning: SessionExtension.after_flush is deprecated. The SessionExtension class will be removed in a future release. Please transition to the @event interface, using @event.listens_for(Session, 'after_flush').
    SessionExtension._adapt_listener(self, ext)

tests/unit_tests/test_weatherers/test_dispersion.py::test_full_run[ABU SAFAH-288.7-63.076]
  C:\Users\Breno\miniconda2\envs\gnome\lib\site-packages\sqlalchemy\orm\session.py:863: SADeprecationWarning: SessionExtension.after_begin is deprecated. The SessionExtension class will be removed in a future release. Please transition to the @event interface, using @event.listens_for(Session, 'after_begin').
    SessionExtension._adapt_listener(self, ext)

tests/unit_tests/test_weatherers/test_dispersion.py::test_full_run[ABU SAFAH-288.7-63.076]
  C:\Users\Breno\miniconda2\envs\gnome\lib\site-packages\sqlalchemy\orm\session.py:863: SADeprecationWarning: SessionExtension.after_attach is deprecated. The SessionExtension class will be removed in a future release. Please transition to the @event interface, using @event.listens_for(Session, 'after_attach').
    SessionExtension._adapt_listener(self, ext)

tests/unit_tests/test_weatherers/test_dispersion.py::test_full_run[ABU SAFAH-288.7-63.076]
  C:\Users\Breno\miniconda2\envs\gnome\lib\site-packages\sqlalchemy\orm\session.py:863: SADeprecationWarning: SessionExtension.after_bulk_update is deprecated. The SessionExtension class will be removed in a future release. Please transition to the @event interface, using @event.listens_for(Session, 'after_bulk_update').
    SessionExtension._adapt_listener(self, ext)

tests/unit_tests/test_weatherers/test_dispersion.py::test_full_run[ABU SAFAH-288.7-63.076]
  C:\Users\Breno\miniconda2\envs\gnome\lib\site-packages\sqlalchemy\orm\session.py:863: SADeprecationWarning: SessionExtension.after_bulk_delete is deprecated. The SessionExtension class will be removed in a future release. Please transition to the @event interface, using @event.listens_for(Session, 'after_bulk_delete').
    SessionExtension._adapt_listener(self, ext)

tests/unit_tests/test_weatherers/test_dissolution.py::test_dissolution_k_ow[oil_bahia-311.15-3-1.03156e+12-True]
  c:\users\breno\pygnome\py_gnome\gnome\weatherers\dissolution.py:388: RuntimeWarning: invalid value encountered in true_divide
    N_drop_a = ((C_dis / K_ow).T * (k_w / 3600.0)).T

tests/unit_tests/test_weatherers/test_dissolution.py::test_dissolution_k_ow[oil_bahia-311.15-3-1.03156e+12-True]
  c:\users\breno\pygnome\py_gnome\gnome\weatherers\dissolution.py:456: RuntimeWarning: divide by zero encountered in true_divide
    (c_oil / k_ow))

tests/unit_tests/test_weatherers/test_dissolution.py::test_dissolution_k_ow[oil_bahia-311.15-3-1.03156e+12-True]
  c:\users\breno\pygnome\py_gnome\gnome\weatherers\dissolution.py:463: RuntimeWarning: invalid value encountered in multiply
    return np.nan_to_num(N_s * arom_mask)

tests/unit_tests/test_weatherers/test_dissolution.py::test_full_run[oil_ans_mp-288.7-55.34]
  c:\users\breno\pygnome\py_gnome\gnome\weatherers\dissolution.py:456: RuntimeWarning: invalid value encountered in true_divide
    (c_oil / k_ow))

tests/unit_tests/test_weatherers/test_evaporation.py::TestDecayConst::test_evap_decay_const_vary_numLE[end_time_delay0]
  c:\users\breno\pygnome\py_gnome\gnome\weatherers\spreading.py:602: RuntimeWarning: divide by zero encountered in power
    (thickness * rel_buoy * gravity)) ** (-0.3333333333333333)

tests/unit_tests/test_weatherers/test_evaporation.py::test_full_run[oil_6-333.0]
  c:\users\breno\pygnome\py_gnome\gnome\weatherers\spreading.py:602: RuntimeWarning: invalid value encountered in power
    (thickness * rel_buoy * gravity)) ** (-0.3333333333333333)

-- Docs: https://docs.pytest.org/en/latest/warnings.html
= 9 failed, 1217 passed, 44 skipped, 29 xfailed, 3 xpassed, 15 warnings in 219.15 seconds =
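
(For anyone debugging the same failure: the diff above says the dictionary read back from the NetCDF file contains one extra key, u'non_weathering', whose value is numpy's masked constant, so a plain == comparison against the cached mass_balance dictionary can never pass. A minimal standalone sketch of that mismatch -- illustrative only, not PyGnome code:)

import numpy as np

# what the model cache holds (values truncated in the pytest diff above)
cached = {u'amount_released': 100.0, u'evaporated': 0.0}

# what NetCDFOutput.read_data returns: the same items plus one masked entry
from_file = dict(cached)
from_file[u'non_weathering'] = np.ma.masked

print(cached == from_file)  # False -- the extra masked key breaks dict equality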

Brenosalv commented 4 years ago

Hello, have you analysed the errors in the log.txt I sent? Let me know if I can continue the installation even with those errors. I don't know whether they are fatal errors or not.

ChrisBarker-NOAA commented 4 years ago

You're fine -- I'm afraid we're not good about keeping the "runslow" tests up to date. Sorry about that.

-CHB

Brenosalv commented 4 years ago

Ok, no problem. pyGNOME worked well even with some test failures. Thank you a lot for your support.
