GenericMappingTools / pygmt

A Python interface for the Generic Mapping Tools.
https://www.pygmt.org
BSD 3-Clause "New" or "Revised" License

Numerous fails in pygmt.test() #367

Closed: MarkWieczorek closed this issue 5 years ago

MarkWieczorek commented 5 years ago

gmt version = 6.0.0
platform = macOS

In attempting to run the pygmt.test() suite, numerous tests fail. The failures shown below are image-comparison failures: either the RMS difference between the generated figure and the baseline exceeds the tolerance of 2, or the two images differ in size by one pixel. The entire output is reproduced after the short reproduction sketch below.
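
As the tracebacks show, pytest-mpl saves the figure produced by each test and then calls matplotlib.testing.compare.compare_images against the baseline PNG. A single case can be re-checked by hand outside the suite. The following is a minimal sketch, not pygmt's own test code; the two paths are placeholders for the "Expected" and "Actual" files listed in a failure (the temporary directories are cleaned up, so copy the files somewhere first):

```python
# Minimal sketch: re-run one pytest-mpl image comparison by hand.
# The paths are placeholders; substitute the "Expected" and "Actual"
# paths printed for a failing test below.
from matplotlib.testing.compare import compare_images

baseline = "baseline-test_basemap.png"   # copy of the baseline image
generated = "test_basemap.png"           # copy of the image the test produced

# compare_images returns None when the RMS difference is within the
# tolerance (the pygmt tests use tol=2) and an error message otherwise.
# A mismatch in image dimensions raises ImageComparisonFailure instead,
# which is what happens in several of the failures below.
result = compare_images(baseline, generated, tol=2)
print(result if result is not None else "images match within tolerance")
```

The entire output of pygmt.test() follows.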

➜  ~ ipython
Python 3.7.5 (default, Nov  4 2019, 12:58:53)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.9.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import pygmt

In [2]: pygmt.test()
Loaded libgmt:
  binary dir: /usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/Resources/Python.app/Contents/MacOS
  cores: 24
  grid layout: rows
  library path: /usr/local/Cellar/gmt/6.0.0_1/lib/libgmt.dylib
  padding: 2
  plugin dir: /usr/local/Cellar/gmt/6.0.0_1/lib/gmt/plugins
  share dir: /usr/local/Cellar/gmt/6.0.0_1/share/gmt
  version: 6.0.0
=================================================== test session starts ===================================================
platform darwin -- Python 3.7.5, pytest-5.2.2, py-1.8.0, pluggy-0.13.0 -- /usr/local/opt/python/bin/python3.7
cachedir: .pytest_cache
Matplotlib: 3.1.1
Freetype: 2.6.1
rootdir: /Users/lunokhod
plugins: mpl-0.10
collected 200 items

SphericalHarmonics/pygmt/pygmt/base_plotting.py::pygmt.base_plotting.BasePlotting._preprocess PASSED                [  0%]
SphericalHarmonics/pygmt/pygmt/figure.py::pygmt.figure.Figure PASSED                                                [  1%]
SphericalHarmonics/pygmt/pygmt/clib/conversion.py::pygmt.clib.conversion._as_array PASSED                           [  1%]
SphericalHarmonics/pygmt/pygmt/clib/conversion.py::pygmt.clib.conversion.as_c_contiguous PASSED                     [  2%]
SphericalHarmonics/pygmt/pygmt/clib/conversion.py::pygmt.clib.conversion.dataarray_to_matrix PASSED                 [  2%]
SphericalHarmonics/pygmt/pygmt/clib/conversion.py::pygmt.clib.conversion.kwargs_to_ctypes_array PASSED              [  3%]
SphericalHarmonics/pygmt/pygmt/clib/conversion.py::pygmt.clib.conversion.vectors_to_arrays PASSED                   [  3%]
SphericalHarmonics/pygmt/pygmt/clib/session.py::pygmt.clib.session.Session PASSED                                   [  4%]
SphericalHarmonics/pygmt/pygmt/clib/session.py::pygmt.clib.session.Session._check_dtype_and_dim PASSED              [  4%]
SphericalHarmonics/pygmt/pygmt/clib/session.py::pygmt.clib.session.Session.extract_region PASSED                    [  5%]
SphericalHarmonics/pygmt/pygmt/clib/session.py::pygmt.clib.session.Session.get_libgmt_func PASSED                   [  5%]
SphericalHarmonics/pygmt/pygmt/clib/session.py::pygmt.clib.session.Session.open_virtual_file PASSED                 [  6%]
SphericalHarmonics/pygmt/pygmt/clib/session.py::pygmt.clib.session.Session.virtualfile_from_grid PASSED             [  6%]
SphericalHarmonics/pygmt/pygmt/clib/session.py::pygmt.clib.session.Session.virtualfile_from_matrix PASSED           [  7%]
SphericalHarmonics/pygmt/pygmt/clib/session.py::pygmt.clib.session.Session.virtualfile_from_vectors PASSED          [  7%]
SphericalHarmonics/pygmt/pygmt/datasets/earth_relief.py::pygmt.datasets.earth_relief._is_valid_resolution PASSED    [  8%]
SphericalHarmonics/pygmt/pygmt/datasets/earth_relief.py::pygmt.datasets.earth_relief._shape_from_resolution PASSED  [  8%]
SphericalHarmonics/pygmt/pygmt/helpers/decorators.py::pygmt.helpers.decorators.fmt_docstring PASSED                 [  9%]
SphericalHarmonics/pygmt/pygmt/helpers/decorators.py::pygmt.helpers.decorators.kwargs_to_strings PASSED             [  9%]
SphericalHarmonics/pygmt/pygmt/helpers/decorators.py::pygmt.helpers.decorators.use_alias PASSED                     [ 10%]
SphericalHarmonics/pygmt/pygmt/helpers/tempfile.py::pygmt.helpers.tempfile.GMTTempFile PASSED                       [ 10%]
SphericalHarmonics/pygmt/pygmt/helpers/utils.py::pygmt.helpers.utils.build_arg_string PASSED                        [ 11%]
SphericalHarmonics/pygmt/pygmt/helpers/utils.py::pygmt.helpers.utils.data_kind PASSED                               [ 11%]
SphericalHarmonics/pygmt/pygmt/helpers/utils.py::pygmt.helpers.utils.dummy_context PASSED                           [ 12%]
SphericalHarmonics/pygmt/pygmt/helpers/utils.py::pygmt.helpers.utils.is_nonstr_iter PASSED                          [ 12%]
SphericalHarmonics/pygmt/pygmt/tests/test_basemap.py::test_basemap_required_args PASSED                             [ 13%]
SphericalHarmonics/pygmt/pygmt/tests/test_basemap.py::test_basemap FAILED                                           [ 13%]
SphericalHarmonics/pygmt/pygmt/tests/test_basemap.py::test_basemap_list_region PASSED                               [ 14%]
SphericalHarmonics/pygmt/pygmt/tests/test_basemap.py::test_basemap_loglog FAILED                                    [ 14%]
SphericalHarmonics/pygmt/pygmt/tests/test_basemap.py::test_basemap_power_axis FAILED                                [ 15%]
SphericalHarmonics/pygmt/pygmt/tests/test_basemap.py::test_basemap_polar FAILED                                     [ 15%]
SphericalHarmonics/pygmt/pygmt/tests/test_basemap.py::test_basemap_winkel_tripel FAILED                             [ 16%]
SphericalHarmonics/pygmt/pygmt/tests/test_basemap.py::test_basemap_aliases FAILED                                   [ 16%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_load_libgmt PASSED                                          [ 17%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_load_libgmt_fail PASSED                                     [ 17%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_get_clib_path PASSED                                        [ 18%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_check_libgmt PASSED                                         [ 18%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_clib_name PASSED                                            [ 19%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_getitem PASSED                                              [ 19%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_create_destroy_session PASSED                               [ 20%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_create_session_fails PASSED                                 [ 20%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_destroy_session_fails PASSED                                [ 21%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_call_module PASSED                                          [ 21%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_call_module_invalid_arguments PASSED                        [ 22%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_call_module_invalid_name PASSED                             [ 22%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_call_module_error_message PASSED                            [ 23%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_method_no_session PASSED                                    [ 23%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_parse_constant_single PASSED                                [ 24%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_parse_constant_composite PASSED                             [ 24%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_parse_constant_fails PASSED                                 [ 25%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_create_data_dataset PASSED                                  [ 25%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_create_data_grid_dim PASSED                                 [ 26%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_create_data_grid_range PASSED                               [ 26%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_create_data_fails PASSED                                    [ 27%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_put_vector PASSED                                           [ 27%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_put_vector_invalid_dtype PASSED                             [ 28%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_put_vector_wrong_column PASSED                              [ 28%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_put_vector_2d_fails PASSED                                  [ 29%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_put_matrix PASSED                                           [ 29%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_put_matrix_fails PASSED                                     [ 30%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_put_matrix_grid PASSED                                      [ 30%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_virtual_file PASSED                                         [ 31%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_virtual_file_fails PASSED                                   [ 31%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_virtual_file_bad_direction PASSED                           [ 32%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_virtualfile_from_vectors PASSED                             [ 32%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_virtualfile_from_vectors_transpose PASSED                   [ 33%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_virtualfile_from_vectors_diff_size PASSED                   [ 33%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_virtualfile_from_matrix PASSED                              [ 34%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_virtualfile_from_matrix_slice PASSED                        [ 34%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_virtualfile_from_vectors_pandas PASSED                      [ 35%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_virtualfile_from_vectors_arraylike PASSED                   [ 35%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_extract_region_fails PASSED                                 [ 36%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_extract_region_two_figures PASSED                           [ 36%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_write_data_fails PASSED                                     [ 37%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_dataarray_to_matrix_dims_fails PASSED                       [ 37%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_dataarray_to_matrix_inc_fails PASSED                        [ 38%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_get_default PASSED                                          [ 38%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_get_default_fails PASSED                                    [ 39%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_info_dict PASSED                                            [ 39%]
SphericalHarmonics/pygmt/pygmt/tests/test_clib.py::test_fails_for_wrong_version PASSED                              [ 40%]
SphericalHarmonics/pygmt/pygmt/tests/test_coast.py::test_coast FAILED                                               [ 40%]
SphericalHarmonics/pygmt/pygmt/tests/test_coast.py::test_coast_iceland FAILED                                       [ 41%]
SphericalHarmonics/pygmt/pygmt/tests/test_coast.py::test_coast_aliases FAILED                                       [ 41%]
SphericalHarmonics/pygmt/pygmt/tests/test_coast.py::test_coast_world_mercator FAILED                                [ 42%]
SphericalHarmonics/pygmt/pygmt/tests/test_colorbar.py::test_colorbar_using_paper_coordinates PASSED                 [ 42%]
SphericalHarmonics/pygmt/pygmt/tests/test_colorbar.py::test_colorbar_using_paper_coordinates_horizontal FAILED      [ 43%]
SphericalHarmonics/pygmt/pygmt/tests/test_colorbar.py::test_colorbar_positioned_using_map_coordinates FAILED        [ 43%]
SphericalHarmonics/pygmt/pygmt/tests/test_colorbar.py::test_colorbar_positioned_using_justification_code FAILED     [ 44%]
SphericalHarmonics/pygmt/pygmt/tests/test_colorbar.py::test_colorbar_positioned_using_normalized_coords FAILED      [ 44%]
SphericalHarmonics/pygmt/pygmt/tests/test_colorbar.py::test_colorbar_box FAILED                                     [ 45%]
SphericalHarmonics/pygmt/pygmt/tests/test_colorbar.py::test_colorbar_box_with_pen FAILED                            [ 45%]
SphericalHarmonics/pygmt/pygmt/tests/test_colorbar.py::test_colorbar_box_with_fill FAILED                           [ 46%]
SphericalHarmonics/pygmt/pygmt/tests/test_colorbar.py::test_colorbar_box_with_clearance PASSED                      [ 46%]
SphericalHarmonics/pygmt/pygmt/tests/test_colorbar.py::test_colorbar_box_with_secondary_border FAILED               [ 47%]
SphericalHarmonics/pygmt/pygmt/tests/test_colorbar.py::test_colorbar_box_with_rounded_corners FAILED                [ 47%]
SphericalHarmonics/pygmt/pygmt/tests/test_colorbar.py::test_colorbar_box_with_offset_background FAILED              [ 48%]
SphericalHarmonics/pygmt/pygmt/tests/test_colorbar.py::test_colorbar_truncated_to_zlow_zhigh FAILED                 [ 48%]
SphericalHarmonics/pygmt/pygmt/tests/test_colorbar.py::test_colorbar_scaled_z_values FAILED                         [ 49%]
SphericalHarmonics/pygmt/pygmt/tests/test_contour.py::test_contour_fail_no_data PASSED                              [ 49%]
SphericalHarmonics/pygmt/pygmt/tests/test_contour.py::test_contour_vec FAILED                                       [ 50%]
SphericalHarmonics/pygmt/pygmt/tests/test_contour.py::test_contour_matrix FAILED                                    [ 50%]
SphericalHarmonics/pygmt/pygmt/tests/test_contour.py::test_contour_from_file PASSED                                 [ 51%]
SphericalHarmonics/pygmt/pygmt/tests/test_datasets.py::test_japan_quakes PASSED                                     [ 51%]
SphericalHarmonics/pygmt/pygmt/tests/test_datasets.py::test_sample_bathymetry PASSED                                [ 52%]
SphericalHarmonics/pygmt/pygmt/tests/test_datasets.py::test_usgs_quakes PASSED                                      [ 52%]
SphericalHarmonics/pygmt/pygmt/tests/test_datasets.py::test_earth_relief_fails PASSED                               [ 53%]
SphericalHarmonics/pygmt/pygmt/tests/test_datasets.py::test_earth_relief_60 PASSED                                  [ 53%]
SphericalHarmonics/pygmt/pygmt/tests/test_datasets.py::test_earth_relief_30 PASSED                                  [ 54%]
SphericalHarmonics/pygmt/pygmt/tests/test_figure.py::test_figure_region PASSED                                      [ 54%]
SphericalHarmonics/pygmt/pygmt/tests/test_figure.py::test_figure_region_multiple PASSED                             [ 55%]
SphericalHarmonics/pygmt/pygmt/tests/test_figure.py::test_figure_region_country_codes PASSED                        [ 55%]
SphericalHarmonics/pygmt/pygmt/tests/test_figure.py::test_figure_savefig_exists PASSED                              [ 56%]
SphericalHarmonics/pygmt/pygmt/tests/test_figure.py::test_figure_savefig_transparent PASSED                         [ 56%]
SphericalHarmonics/pygmt/pygmt/tests/test_figure.py::test_figure_savefig PASSED                                     [ 57%]
SphericalHarmonics/pygmt/pygmt/tests/test_figure.py::test_figure_show PASSED                                        [ 57%]
SphericalHarmonics/pygmt/pygmt/tests/test_figure.py::test_shift_origin PASSED                                       [ 58%]
SphericalHarmonics/pygmt/pygmt/tests/test_grdcontour.py::test_grdcontour FAILED                                     [ 58%]
SphericalHarmonics/pygmt/pygmt/tests/test_grdcontour.py::test_grdcontour_labels FAILED                              [ 59%]
SphericalHarmonics/pygmt/pygmt/tests/test_grdcontour.py::test_grdcontour_slice FAILED                               [ 59%]
SphericalHarmonics/pygmt/pygmt/tests/test_grdcontour.py::test_grdcontour_file PASSED                                [ 60%]
SphericalHarmonics/pygmt/pygmt/tests/test_grdcontour.py::test_grdcontour_interval_file_full_opts PASSED             [ 60%]
SphericalHarmonics/pygmt/pygmt/tests/test_grdcontour.py::test_grdcontour_fails PASSED                               [ 61%]
SphericalHarmonics/pygmt/pygmt/tests/test_grdimage.py::test_grdimage FAILED                                         [ 61%]
SphericalHarmonics/pygmt/pygmt/tests/test_grdimage.py::test_grdimage_slice FAILED                                   [ 62%]
SphericalHarmonics/pygmt/pygmt/tests/test_grdimage.py::test_grdimage_file PASSED                                    [ 62%]
SphericalHarmonics/pygmt/pygmt/tests/test_grdimage.py::test_grdimage_fails PASSED                                   [ 63%]
SphericalHarmonics/pygmt/pygmt/tests/test_helpers.py::test_unique_name PASSED                                       [ 63%]
SphericalHarmonics/pygmt/pygmt/tests/test_helpers.py::test_kwargs_to_strings_fails PASSED                           [ 64%]
SphericalHarmonics/pygmt/pygmt/tests/test_helpers.py::test_kwargs_to_strings_no_bools PASSED                        [ 64%]
SphericalHarmonics/pygmt/pygmt/tests/test_helpers.py::test_gmttempfile PASSED                                       [ 65%]
SphericalHarmonics/pygmt/pygmt/tests/test_helpers.py::test_gmttempfile_unique PASSED                                [ 65%]
SphericalHarmonics/pygmt/pygmt/tests/test_helpers.py::test_gmttempfile_prefix_suffix PASSED                         [ 66%]
SphericalHarmonics/pygmt/pygmt/tests/test_helpers.py::test_gmttempfile_read PASSED                                  [ 66%]
SphericalHarmonics/pygmt/pygmt/tests/test_image.py::test_image FAILED                                               [ 67%]
SphericalHarmonics/pygmt/pygmt/tests/test_info.py::test_info PASSED                                                 [ 67%]
SphericalHarmonics/pygmt/pygmt/tests/test_info.py::test_info_c PASSED                                               [ 68%]
SphericalHarmonics/pygmt/pygmt/tests/test_info.py::test_info_i PASSED                                               [ 68%]
SphericalHarmonics/pygmt/pygmt/tests/test_info.py::test_info_c_i PASSED                                             [ 69%]
SphericalHarmonics/pygmt/pygmt/tests/test_info.py::test_info_t PASSED                                               [ 69%]
SphericalHarmonics/pygmt/pygmt/tests/test_info.py::test_info_fails PASSED                                           [ 70%]
SphericalHarmonics/pygmt/pygmt/tests/test_info.py::test_grdinfo PASSED                                              [ 70%]
SphericalHarmonics/pygmt/pygmt/tests/test_info.py::test_grdinfo_file PASSED                                         [ 71%]
SphericalHarmonics/pygmt/pygmt/tests/test_info.py::test_grdinfo_fails PASSED                                        [ 71%]
SphericalHarmonics/pygmt/pygmt/tests/test_legend.py::test_legend_position PASSED                                    [ 72%]
SphericalHarmonics/pygmt/pygmt/tests/test_legend.py::test_legend_entries FAILED                                     [ 72%]
SphericalHarmonics/pygmt/pygmt/tests/test_legend.py::test_legend_specfile FAILED                                    [ 73%]
SphericalHarmonics/pygmt/pygmt/tests/test_legend.py::test_legend_fails PASSED                                       [ 73%]
SphericalHarmonics/pygmt/pygmt/tests/test_logo.py::test_logo FAILED                                                 [ 74%]
SphericalHarmonics/pygmt/pygmt/tests/test_logo.py::test_logo_on_a_map FAILED                                        [ 74%]
SphericalHarmonics/pygmt/pygmt/tests/test_logo.py::test_logo_fails PASSED                                           [ 75%]
SphericalHarmonics/pygmt/pygmt/tests/test_makecpt.py::test_makecpt_to_plot_points PASSED                            [ 75%]
SphericalHarmonics/pygmt/pygmt/tests/test_makecpt.py::test_makecpt_to_plot_grid PASSED                              [ 76%]
SphericalHarmonics/pygmt/pygmt/tests/test_makecpt.py::test_makecpt_to_plot_grid_scaled_with_series PASSED           [ 76%]
SphericalHarmonics/pygmt/pygmt/tests/test_makecpt.py::test_makecpt_output_to_cpt_file PASSED                        [ 77%]
SphericalHarmonics/pygmt/pygmt/tests/test_makecpt.py::test_makecpt_blank_output PASSED                              [ 77%]
SphericalHarmonics/pygmt/pygmt/tests/test_makecpt.py::test_makecpt_invalid_output PASSED                            [ 78%]
SphericalHarmonics/pygmt/pygmt/tests/test_makecpt.py::test_makecpt_truncated_to_zlow_zhigh PASSED                   [ 78%]
SphericalHarmonics/pygmt/pygmt/tests/test_makecpt.py::test_makecpt_truncated_at_zlow_only PASSED                    [ 79%]
SphericalHarmonics/pygmt/pygmt/tests/test_makecpt.py::test_makecpt_truncated_at_zhigh_only PASSED                   [ 79%]
SphericalHarmonics/pygmt/pygmt/tests/test_makecpt.py::test_makecpt_reverse_color_only PASSED                        [ 80%]
SphericalHarmonics/pygmt/pygmt/tests/test_makecpt.py::test_makecpt_reverse_zsign_only PASSED                        [ 80%]
SphericalHarmonics/pygmt/pygmt/tests/test_makecpt.py::test_makecpt_reverse_color_and_zsign PASSED                   [ 81%]
SphericalHarmonics/pygmt/pygmt/tests/test_makecpt.py::test_makecpt_continuous PASSED                                [ 81%]
SphericalHarmonics/pygmt/pygmt/tests/test_plot.py::test_plot_red_circles PASSED                                     [ 82%]
SphericalHarmonics/pygmt/pygmt/tests/test_plot.py::test_plot_fail_no_data PASSED                                    [ 82%]
SphericalHarmonics/pygmt/pygmt/tests/test_plot.py::test_plot_fail_size_color PASSED                                 [ 83%]
SphericalHarmonics/pygmt/pygmt/tests/test_plot.py::test_plot_projection FAILED                                      [ 83%]
SphericalHarmonics/pygmt/pygmt/tests/test_plot.py::test_plot_colors PASSED                                          [ 84%]
SphericalHarmonics/pygmt/pygmt/tests/test_plot.py::test_plot_sizes PASSED                                           [ 84%]
SphericalHarmonics/pygmt/pygmt/tests/test_plot.py::test_plot_colors_sizes PASSED                                    [ 85%]
SphericalHarmonics/pygmt/pygmt/tests/test_plot.py::test_plot_colors_sizes_proj FAILED                               [ 85%]
SphericalHarmonics/pygmt/pygmt/tests/test_plot.py::test_plot_matrix FAILED                                          [ 86%]
SphericalHarmonics/pygmt/pygmt/tests/test_plot.py::test_plot_matrix_color PASSED                                    [ 86%]
SphericalHarmonics/pygmt/pygmt/tests/test_plot.py::test_plot_from_file PASSED                                       [ 87%]
SphericalHarmonics/pygmt/pygmt/tests/test_plot.py::test_plot_vectors PASSED                                         [ 87%]
SphericalHarmonics/pygmt/pygmt/tests/test_psconvert.py::test_psconvert PASSED                                       [ 88%]
SphericalHarmonics/pygmt/pygmt/tests/test_psconvert.py::test_psconvert_twice PASSED                                 [ 88%]
SphericalHarmonics/pygmt/pygmt/tests/test_psconvert.py::test_psconvert_int_options PASSED                           [ 89%]
SphericalHarmonics/pygmt/pygmt/tests/test_psconvert.py::test_psconvert_aliases PASSED                               [ 89%]
SphericalHarmonics/pygmt/pygmt/tests/test_session_management.py::test_begin_end PASSED                              [ 90%]
SphericalHarmonics/pygmt/pygmt/tests/test_sphinx_gallery.py::test_pygmtscraper SKIPPED                              [ 90%]
SphericalHarmonics/pygmt/pygmt/tests/test_surface.py::test_surface_input_file PASSED                                [ 91%]
SphericalHarmonics/pygmt/pygmt/tests/test_surface.py::test_surface_input_data_array PASSED                          [ 91%]
SphericalHarmonics/pygmt/pygmt/tests/test_surface.py::test_surface_input_xyz PASSED                                 [ 92%]
SphericalHarmonics/pygmt/pygmt/tests/test_surface.py::test_surface_input_xy_no_z PASSED                             [ 92%]
SphericalHarmonics/pygmt/pygmt/tests/test_surface.py::test_surface_wrong_kind_of_input PASSED                       [ 93%]
SphericalHarmonics/pygmt/pygmt/tests/test_surface.py::test_surface_with_outfile_param PASSED                        [ 93%]
SphericalHarmonics/pygmt/pygmt/tests/test_surface.py::test_surface_short_aliases PASSED                             [ 94%]
SphericalHarmonics/pygmt/pygmt/tests/test_text.py::test_text_single_line_of_text FAILED                             [ 94%]
SphericalHarmonics/pygmt/pygmt/tests/test_text.py::test_text_multiple_lines_of_text FAILED                          [ 95%]
SphericalHarmonics/pygmt/pygmt/tests/test_text.py::test_text_without_text_input PASSED                              [ 95%]
SphericalHarmonics/pygmt/pygmt/tests/test_text.py::test_text_input_single_filename FAILED                           [ 96%]
SphericalHarmonics/pygmt/pygmt/tests/test_text.py::test_text_input_multiple_filenames FAILED                        [ 96%]
SphericalHarmonics/pygmt/pygmt/tests/test_text.py::test_text_nonexistent_filename PASSED                            [ 97%]
SphericalHarmonics/pygmt/pygmt/tests/test_text.py::test_text_angle_30 PASSED                                        [ 97%]
SphericalHarmonics/pygmt/pygmt/tests/test_text.py::test_text_font_bold FAILED                                       [ 98%]
SphericalHarmonics/pygmt/pygmt/tests/test_text.py::test_text_justify_bottom_right_and_top_left FAILED               [ 98%]
SphericalHarmonics/pygmt/pygmt/tests/test_text.py::test_text_justify_parsed_from_textfile PASSED                    [ 99%]
SphericalHarmonics/pygmt/pygmt/tests/test_which.py::test_which PASSED                                               [ 99%]
SphericalHarmonics/pygmt/pygmt/tests/test_which.py::test_which_fails PASSED                                         [100%]

======================================================== FAILURES =========================================================
______________________________________________________ test_basemap _______________________________________________________
Error: Image files did not match.
  RMS Value: 4.964099248492272
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpnxj57uci/baseline-test_basemap.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpnxj57uci/test_basemap.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpnxj57uci/test_basemap-failed-diff.png
  Tolerance:
    2
___________________________________________________ test_basemap_loglog ___________________________________________________
Error: Image files did not match.
  RMS Value: 5.846989285755394
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpt7dgtqrc/baseline-test_basemap_loglog.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpt7dgtqrc/test_basemap_loglog.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpt7dgtqrc/test_basemap_loglog-failed-diff.png
  Tolerance:
    2
_________________________________________________ test_basemap_power_axis _________________________________________________
Error: Image files did not match.
  RMS Value: 10.20590511959233
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp99oojcsq/baseline-test_basemap_power_axis.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp99oojcsq/test_basemap_power_axis.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp99oojcsq/test_basemap_power_axis-failed-diff.png
  Tolerance:
    2
___________________________________________________ test_basemap_polar ____________________________________________________

args = (), kwargs = {}, baseline_dir = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x139d05ad0>, filename = 'test_basemap_polar.png'
result_dir = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp1sze83w2'
test_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp1sze83w2/test_basemap_polar.png'
baseline_image_ref = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline/test_basemap_polar.png'
baseline_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp1sze83w2/baseline-test_basemap_polar.png'

    @wraps(item.function)
    def item_function_wrapper(*args, **kwargs):

        baseline_dir = compare.kwargs.get('baseline_dir', None)
        if baseline_dir is None:
            if self.baseline_dir is None:
                baseline_dir = os.path.join(os.path.dirname(item.fspath.strpath), 'baseline')
            else:
                baseline_dir = self.baseline_dir
            baseline_remote = False
        else:
            baseline_remote = baseline_dir.startswith(('http://', 'https://'))
            if not baseline_remote:
                baseline_dir = os.path.join(os.path.dirname(item.fspath.strpath), baseline_dir)

        with plt.style.context(style, after_reset=True), switch_backend(backend):

            # Run test and get figure object
            if inspect.ismethod(original):  # method
                # In some cases, for example if setup_method is used,
                # original appears to belong to an instance of the test
                # class that is not the same as args[0], and args[0] is the
                # one that has the correct attributes set up from setup_method
                # so we ignore original.__self__ and use args[0] instead.
                fig = original.__func__(*args, **kwargs)
            else:  # function
                fig = original(*args, **kwargs)

            if remove_text:
                remove_ticks_and_titles(fig)

            # Find test name to use as plot name
            filename = compare.kwargs.get('filename', None)
            if filename is None:
                filename = item.name + '.png'
                filename = filename.replace('[', '_').replace(']', '_')
                filename = filename.replace('/', '_')
                filename = filename.replace('_.png', '.png')

            # What we do now depends on whether we are generating the
            # reference images or simply running the test.
            if self.generate_dir is None:

                # Save the figure
                result_dir = tempfile.mkdtemp(dir=self.results_dir)
                test_image = os.path.abspath(os.path.join(result_dir, filename))

                fig.savefig(test_image, **savefig_kwargs)
                close_mpl_figure(fig)

                # Find path to baseline image
                if baseline_remote:
                    baseline_image_ref = _download_file(baseline_dir, filename)
                else:
                    baseline_image_ref = os.path.abspath(os.path.join(os.path.dirname(item.fspath.strpath), baseline_dir, filename))

                if not os.path.exists(baseline_image_ref):
                    pytest.fail("Image file not found for comparison test in: "
                                "\n\t{baseline_dir}"
                                "\n(This is expected for new tests.)\nGenerated Image: "
                                "\n\t{test}".format(baseline_dir=baseline_dir, test=test_image), pytrace=False)

                # distutils may put the baseline images in non-accessible places,
                # copy to our tmpdir to be sure to keep them in case of failure
                baseline_image = os.path.abspath(os.path.join(result_dir, 'baseline-' + filename))
                shutil.copyfile(baseline_image_ref, baseline_image)

>               msg = compare_images(baseline_image, test_image, tol=tolerance)

/usr/local/lib/python3.7/site-packages/pytest_mpl/plugin.py:275:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.7/site-packages/matplotlib/testing/compare.py:458: in compare_images
    rms = calculate_rms(expected_image, actual_image)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

expected_image = array([[[255, 255, 255],
        [255, 255, 255],
        [255, 255, 255],
        ...,
        [255, 255, 255],
     ...[255, 255, 255],
        ...,
        [255, 255, 255],
        [255, 255, 255],
        [255, 255, 255]]], dtype=int16)
actual_image = array([[[255, 255, 255],
        [255, 255, 255],
        [255, 255, 255],
        ...,
        [255, 255, 255],
     ...[255, 255, 255],
        ...,
        [255, 255, 255],
        [255, 255, 255],
        [255, 255, 255]]], dtype=int16)

    def calculate_rms(expected_image, actual_image):
        "Calculate the per-pixel errors, then compute the root mean square error."
        if expected_image.shape != actual_image.shape:
            raise ImageComparisonFailure(
                "Image sizes do not match expected size: {} "
>               "actual size {}".format(expected_image.shape, actual_image.shape))
E           matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (1822, 1958, 3) actual size (1822, 1957, 3)

/usr/local/lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
_______________________________________________ test_basemap_winkel_tripel ________________________________________________

args = (), kwargs = {}, baseline_dir = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x13907d110>, filename = 'test_basemap_winkel_tripel.png'
result_dir = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpmyfozaf2'
test_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpmyfozaf2/test_basemap_winkel_tripel.png'
baseline_image_ref = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline/test_basemap_winkel_tripel.png'
baseline_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpmyfozaf2/baseline-test_basemap_winkel_tripel.png'

    @wraps(item.function)
    def item_function_wrapper(*args, **kwargs):

        baseline_dir = compare.kwargs.get('baseline_dir', None)
        if baseline_dir is None:
            if self.baseline_dir is None:
                baseline_dir = os.path.join(os.path.dirname(item.fspath.strpath), 'baseline')
            else:
                baseline_dir = self.baseline_dir
            baseline_remote = False
        else:
            baseline_remote = baseline_dir.startswith(('http://', 'https://'))
            if not baseline_remote:
                baseline_dir = os.path.join(os.path.dirname(item.fspath.strpath), baseline_dir)

        with plt.style.context(style, after_reset=True), switch_backend(backend):

            # Run test and get figure object
            if inspect.ismethod(original):  # method
                # In some cases, for example if setup_method is used,
                # original appears to belong to an instance of the test
                # class that is not the same as args[0], and args[0] is the
                # one that has the correct attributes set up from setup_method
                # so we ignore original.__self__ and use args[0] instead.
                fig = original.__func__(*args, **kwargs)
            else:  # function
                fig = original(*args, **kwargs)

            if remove_text:
                remove_ticks_and_titles(fig)

            # Find test name to use as plot name
            filename = compare.kwargs.get('filename', None)
            if filename is None:
                filename = item.name + '.png'
                filename = filename.replace('[', '_').replace(']', '_')
                filename = filename.replace('/', '_')
                filename = filename.replace('_.png', '.png')

            # What we do now depends on whether we are generating the
            # reference images or simply running the test.
            if self.generate_dir is None:

                # Save the figure
                result_dir = tempfile.mkdtemp(dir=self.results_dir)
                test_image = os.path.abspath(os.path.join(result_dir, filename))

                fig.savefig(test_image, **savefig_kwargs)
                close_mpl_figure(fig)

                # Find path to baseline image
                if baseline_remote:
                    baseline_image_ref = _download_file(baseline_dir, filename)
                else:
                    baseline_image_ref = os.path.abspath(os.path.join(os.path.dirname(item.fspath.strpath), baseline_dir, filename))

                if not os.path.exists(baseline_image_ref):
                    pytest.fail("Image file not found for comparison test in: "
                                "\n\t{baseline_dir}"
                                "\n(This is expected for new tests.)\nGenerated Image: "
                                "\n\t{test}".format(baseline_dir=baseline_dir, test=test_image), pytrace=False)

                # distutils may put the baseline images in non-accessible places,
                # copy to our tmpdir to be sure to keep them in case of failure
                baseline_image = os.path.abspath(os.path.join(result_dir, 'baseline-' + filename))
                shutil.copyfile(baseline_image_ref, baseline_image)

>               msg = compare_images(baseline_image, test_image, tol=tolerance)

/usr/local/lib/python3.7/site-packages/pytest_mpl/plugin.py:275:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.7/site-packages/matplotlib/testing/compare.py:458: in compare_images
    rms = calculate_rms(expected_image, actual_image)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

expected_image = array([[[255, 255, 255],
        [255, 255, 255],
        [255, 255, 255],
        ...,
        [255, 255, 255],
     ...[255, 255, 255],
        ...,
        [255, 255, 255],
        [255, 255, 255],
        [255, 255, 255]]], dtype=int16)
actual_image = array([[[255, 255, 255],
        [255, 255, 255],
        [255, 255, 255],
        ...,
        [255, 255, 255],
     ...[255, 255, 255],
        ...,
        [255, 255, 255],
        [255, 255, 255],
        [255, 255, 255]]], dtype=int16)

    def calculate_rms(expected_image, actual_image):
        "Calculate the per-pixel errors, then compute the root mean square error."
        if expected_image.shape != actual_image.shape:
            raise ImageComparisonFailure(
                "Image sizes do not match expected size: {} "
>               "actual size {}".format(expected_image.shape, actual_image.shape))
E           matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (1958, 3128, 3) actual size (1959, 3128, 3)

/usr/local/lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
__________________________________________________ test_basemap_aliases ___________________________________________________
Error: Image files did not match.
  RMS Value: 5.625639683362874
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmph6lpu88b/baseline-test_basemap_aliases.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmph6lpu88b/test_basemap_aliases.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmph6lpu88b/test_basemap_aliases-failed-diff.png
  Tolerance:
    2
_______________________________________________________ test_coast ________________________________________________________
Error: Image files did not match.
  RMS Value: 8.704740946599626
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp7mcbmlv4/baseline-test_coast.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp7mcbmlv4/test_coast.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp7mcbmlv4/test_coast-failed-diff.png
  Tolerance:
    2
___________________________________________________ test_coast_iceland ____________________________________________________
Error: Image files did not match.
  RMS Value: 7.312640236814033
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpevmvdfof/baseline-test_coast_iceland.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpevmvdfof/test_coast_iceland.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpevmvdfof/test_coast_iceland-failed-diff.png
  Tolerance:
    2
___________________________________________________ test_coast_aliases ____________________________________________________
Error: Image files did not match.
  RMS Value: 10.629549626739427
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp9kk4bur4/baseline-test_coast_aliases.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp9kk4bur4/test_coast_aliases.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp9kk4bur4/test_coast_aliases-failed-diff.png
  Tolerance:
    2
________________________________________________ test_coast_world_mercator ________________________________________________

args = (), kwargs = {}, baseline_dir = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x139278f90>, filename = 'test_coast_world_mercator.png'
result_dir = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpu5uk1zcp'
test_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpu5uk1zcp/test_coast_world_mercator.png'
baseline_image_ref = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline/test_coast_world_mercator.png'
baseline_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpu5uk1zcp/baseline-test_coast_world_mercator.png'

    @wraps(item.function)
    def item_function_wrapper(*args, **kwargs):

        baseline_dir = compare.kwargs.get('baseline_dir', None)
        if baseline_dir is None:
            if self.baseline_dir is None:
                baseline_dir = os.path.join(os.path.dirname(item.fspath.strpath), 'baseline')
            else:
                baseline_dir = self.baseline_dir
            baseline_remote = False
        else:
            baseline_remote = baseline_dir.startswith(('http://', 'https://'))
            if not baseline_remote:
                baseline_dir = os.path.join(os.path.dirname(item.fspath.strpath), baseline_dir)

        with plt.style.context(style, after_reset=True), switch_backend(backend):

            # Run test and get figure object
            if inspect.ismethod(original):  # method
                # In some cases, for example if setup_method is used,
                # original appears to belong to an instance of the test
                # class that is not the same as args[0], and args[0] is the
                # one that has the correct attributes set up from setup_method
                # so we ignore original.__self__ and use args[0] instead.
                fig = original.__func__(*args, **kwargs)
            else:  # function
                fig = original(*args, **kwargs)

            if remove_text:
                remove_ticks_and_titles(fig)

            # Find test name to use as plot name
            filename = compare.kwargs.get('filename', None)
            if filename is None:
                filename = item.name + '.png'
                filename = filename.replace('[', '_').replace(']', '_')
                filename = filename.replace('/', '_')
                filename = filename.replace('_.png', '.png')

            # What we do now depends on whether we are generating the
            # reference images or simply running the test.
            if self.generate_dir is None:

                # Save the figure
                result_dir = tempfile.mkdtemp(dir=self.results_dir)
                test_image = os.path.abspath(os.path.join(result_dir, filename))

                fig.savefig(test_image, **savefig_kwargs)
                close_mpl_figure(fig)

                # Find path to baseline image
                if baseline_remote:
                    baseline_image_ref = _download_file(baseline_dir, filename)
                else:
                    baseline_image_ref = os.path.abspath(os.path.join(os.path.dirname(item.fspath.strpath), baseline_dir, filename))

                if not os.path.exists(baseline_image_ref):
                    pytest.fail("Image file not found for comparison test in: "
                                "\n\t{baseline_dir}"
                                "\n(This is expected for new tests.)\nGenerated Image: "
                                "\n\t{test}".format(baseline_dir=baseline_dir, test=test_image), pytrace=False)

                # distutils may put the baseline images in non-accessible places,
                # copy to our tmpdir to be sure to keep them in case of failure
                baseline_image = os.path.abspath(os.path.join(result_dir, 'baseline-' + filename))
                shutil.copyfile(baseline_image_ref, baseline_image)

>               msg = compare_images(baseline_image, test_image, tol=tolerance)

/usr/local/lib/python3.7/site-packages/pytest_mpl/plugin.py:275:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.7/site-packages/matplotlib/testing/compare.py:458: in compare_images
    rms = calculate_rms(expected_image, actual_image)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

expected_image = array([[[255, 255, 255],
        [255, 255, 255],
        [255, 255, 255],
        ...,
        [255, 255, 255],
     ...[255, 255, 255],
        ...,
        [255, 255, 255],
        [255, 255, 255],
        [255, 255, 255]]], dtype=int16)
actual_image = array([[[255, 255, 255],
        [255, 255, 255],
        [255, 255, 255],
        ...,
        [255, 255, 255],
     ...[255, 255, 255],
        ...,
        [255, 255, 255],
        [255, 255, 255],
        [255, 255, 255]]], dtype=int16)

    def calculate_rms(expected_image, actual_image):
        "Calculate the per-pixel errors, then compute the root mean square error."
        if expected_image.shape != actual_image.shape:
            raise ImageComparisonFailure(
                "Image sizes do not match expected size: {} "
>               "actual size {}".format(expected_image.shape, actual_image.shape))
E           matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (2477, 3289, 3) actual size (2478, 3289, 3)

/usr/local/lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
____________________________________ test_colorbar_using_paper_coordinates_horizontal _____________________________________
Error: Image files did not match.
  RMS Value: 19.42427219463139
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp_iyb27to/baseline-test_colorbar_using_paper_coordinates_horizontal.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp_iyb27to/test_colorbar_using_paper_coordinates_horizontal.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp_iyb27to/test_colorbar_using_paper_coordinates_horizontal-failed-diff.png
  Tolerance:
    2
_____________________________________ test_colorbar_positioned_using_map_coordinates ______________________________________

args = (), kwargs = {}, baseline_dir = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x1390c0150>
filename = 'test_colorbar_positioned_using_map_coordinates.png'
result_dir = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpw9kotjdr'
test_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpw9kotjdr/test_colorbar_positioned_using_map_coordinates.png'
baseline_image_ref = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline/test_colorbar_positioned_using_map_coordinates.png'
baseline_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpw9kotjdr/baseline-test_colorbar_positioned_using_map_coordinates.png'

    @wraps(item.function)
    def item_function_wrapper(*args, **kwargs):

        baseline_dir = compare.kwargs.get('baseline_dir', None)
        if baseline_dir is None:
            if self.baseline_dir is None:
                baseline_dir = os.path.join(os.path.dirname(item.fspath.strpath), 'baseline')
            else:
                baseline_dir = self.baseline_dir
            baseline_remote = False
        else:
            baseline_remote = baseline_dir.startswith(('http://', 'https://'))
            if not baseline_remote:
                baseline_dir = os.path.join(os.path.dirname(item.fspath.strpath), baseline_dir)

        with plt.style.context(style, after_reset=True), switch_backend(backend):

            # Run test and get figure object
            if inspect.ismethod(original):  # method
                # In some cases, for example if setup_method is used,
                # original appears to belong to an instance of the test
                # class that is not the same as args[0], and args[0] is the
                # one that has the correct attributes set up from setup_method
                # so we ignore original.__self__ and use args[0] instead.
                fig = original.__func__(*args, **kwargs)
            else:  # function
                fig = original(*args, **kwargs)

            if remove_text:
                remove_ticks_and_titles(fig)

            # Find test name to use as plot name
            filename = compare.kwargs.get('filename', None)
            if filename is None:
                filename = item.name + '.png'
                filename = filename.replace('[', '_').replace(']', '_')
                filename = filename.replace('/', '_')
                filename = filename.replace('_.png', '.png')

            # What we do now depends on whether we are generating the
            # reference images or simply running the test.
            if self.generate_dir is None:

                # Save the figure
                result_dir = tempfile.mkdtemp(dir=self.results_dir)
                test_image = os.path.abspath(os.path.join(result_dir, filename))

                fig.savefig(test_image, **savefig_kwargs)
                close_mpl_figure(fig)

                # Find path to baseline image
                if baseline_remote:
                    baseline_image_ref = _download_file(baseline_dir, filename)
                else:
                    baseline_image_ref = os.path.abspath(os.path.join(os.path.dirname(item.fspath.strpath), baseline_dir, filename))

                if not os.path.exists(baseline_image_ref):
                    pytest.fail("Image file not found for comparison test in: "
                                "\n\t{baseline_dir}"
                                "\n(This is expected for new tests.)\nGenerated Image: "
                                "\n\t{test}".format(baseline_dir=baseline_dir, test=test_image), pytrace=False)

                # distutils may put the baseline images in non-accessible places,
                # copy to our tmpdir to be sure to keep them in case of failure
                baseline_image = os.path.abspath(os.path.join(result_dir, 'baseline-' + filename))
                shutil.copyfile(baseline_image_ref, baseline_image)

>               msg = compare_images(baseline_image, test_image, tol=tolerance)

/usr/local/lib/python3.7/site-packages/pytest_mpl/plugin.py:275:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.7/site-packages/matplotlib/testing/compare.py:458: in compare_images
    rms = calculate_rms(expected_image, actual_image)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

expected_image = array([[[255, 255, 255],
        [255, 255, 255],
        [255, 255, 255],
        ...,
        [255, 255, 255],
     ...[255, 255, 255],
        ...,
        [255, 255, 255],
        [255, 255, 255],
        [255, 255, 255]]], dtype=int16)
actual_image = array([[[255, 255, 255],
        [255, 255, 255],
        [255, 255, 255],
        ...,
        [255, 255, 255],
     ...[255, 255, 255],
        ...,
        [255, 255, 255],
        [255, 255, 255],
        [255, 255, 255]]], dtype=int16)

    def calculate_rms(expected_image, actual_image):
        "Calculate the per-pixel errors, then compute the root mean square error."
        if expected_image.shape != actual_image.shape:
            raise ImageComparisonFailure(
                "Image sizes do not match expected size: {} "
>               "actual size {}".format(expected_image.shape, actual_image.shape))
E           matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (631, 779, 3) actual size (631, 778, 3)

/usr/local/lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
____________________________________ test_colorbar_positioned_using_justification_code ____________________________________

args = (), kwargs = {}, baseline_dir = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x1390eac90>
filename = 'test_colorbar_positioned_using_justification_code.png'
result_dir = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpi2vp2uu_'
test_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpi2vp2uu_/test_colorbar_positioned_using_justification_code.png'
baseline_image_ref = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline/test_colorbar_positioned_using_justification_code.png'
baseline_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpi2vp2uu_/baseline-test_colorbar_positioned_using_justification_code.png'

E           matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (631, 779, 3) actual size (631, 778, 3)

/usr/local/lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
____________________________________ test_colorbar_positioned_using_normalized_coords _____________________________________

args = (), kwargs = {}, baseline_dir = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x17cb4f850>
filename = 'test_colorbar_positioned_using_normalized_coords.png'
result_dir = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp32t1f4qp'
test_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp32t1f4qp/test_colorbar_positioned_using_normalized_coords.png'
baseline_image_ref = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline/test_colorbar_positioned_using_normalized_coords.png'
baseline_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp32t1f4qp/baseline-test_colorbar_positioned_using_normalized_coords.png'

E           matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (631, 779, 3) actual size (631, 778, 3)

/usr/local/lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
____________________________________________________ test_colorbar_box ____________________________________________________
Error: Image files did not match.
  RMS Value: 5.838918358686898
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmptqq9lt8e/baseline-test_colorbar_box.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmptqq9lt8e/test_colorbar_box.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmptqq9lt8e/test_colorbar_box-failed-diff.png
  Tolerance:
    2
_______________________________________________ test_colorbar_box_with_pen ________________________________________________
Error: Image files did not match.
  RMS Value: 5.838918358686898
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpncwdcwn1/baseline-test_colorbar_box_with_pen.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpncwdcwn1/test_colorbar_box_with_pen.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpncwdcwn1/test_colorbar_box_with_pen-failed-diff.png
  Tolerance:
    2
_______________________________________________ test_colorbar_box_with_fill _______________________________________________
Error: Image files did not match.
  RMS Value: 8.940114781542432
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmptsx68ioj/baseline-test_colorbar_box_with_fill.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmptsx68ioj/test_colorbar_box_with_fill.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmptsx68ioj/test_colorbar_box_with_fill-failed-diff.png
  Tolerance:
    2
_________________________________________ test_colorbar_box_with_secondary_border _________________________________________
Error: Image files did not match.
  RMS Value: 5.84166772588697
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp7a3fb1aw/baseline-test_colorbar_box_with_secondary_border.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp7a3fb1aw/test_colorbar_box_with_secondary_border.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp7a3fb1aw/test_colorbar_box_with_secondary_border-failed-diff.png
  Tolerance:
    2
_________________________________________ test_colorbar_box_with_rounded_corners __________________________________________
Error: Image files did not match.
  RMS Value: 16.25882027928954
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmphcgd36ms/baseline-test_colorbar_box_with_rounded_corners.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmphcgd36ms/test_colorbar_box_with_rounded_corners.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmphcgd36ms/test_colorbar_box_with_rounded_corners-failed-diff.png
  Tolerance:
    2
________________________________________ test_colorbar_box_with_offset_background _________________________________________

args = (), kwargs = {}, baseline_dir = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x139d14f10>
filename = 'test_colorbar_box_with_offset_background.png'
result_dir = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpq0iqco_2'
test_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpq0iqco_2/test_colorbar_box_with_offset_background.png'
baseline_image_ref = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline/test_colorbar_box_with_offset_background.png'
baseline_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpq0iqco_2/baseline-test_colorbar_box_with_offset_background.png'

E           matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (192, 170, 3) actual size (193, 170, 3)

/usr/local/lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
__________________________________________ test_colorbar_truncated_to_zlow_zhigh __________________________________________
Error: Image files did not match.
  RMS Value: 2.6233023707661216
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp_8hz3opf/baseline-test_colorbar_truncated_to_zlow_zhigh.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp_8hz3opf/test_colorbar_truncated_to_zlow_zhigh.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp_8hz3opf/test_colorbar_truncated_to_zlow_zhigh-failed-diff.png
  Tolerance:
    2
______________________________________________ test_colorbar_scaled_z_values ______________________________________________
Error: Image files did not match.
  RMS Value: 2.8287796959086546
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpl49xzz6l/baseline-test_colorbar_scaled_z_values.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpl49xzz6l/test_colorbar_scaled_z_values.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpl49xzz6l/test_colorbar_scaled_z_values-failed-diff.png
  Tolerance:
    2
____________________________________________________ test_contour_vec _____________________________________________________
Error: Image files did not match.
  RMS Value: 5.291137851938542
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpigj_6rs5/baseline-test_contour_vec.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpigj_6rs5/test_contour_vec.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpigj_6rs5/test_contour_vec-failed-diff.png
  Tolerance:
    2
___________________________________________________ test_contour_matrix ___________________________________________________
Error: Image files did not match.
  RMS Value: 5.9356649049332555
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpbiabobv8/baseline-test_contour_matrix.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpbiabobv8/test_contour_matrix.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpbiabobv8/test_contour_matrix-failed-diff.png
  Tolerance:
    2
_____________________________________________________ test_grdcontour _____________________________________________________
Error: Image files did not match.
  RMS Value: 14.462664619795119
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmppxa2mqxe/baseline-test_grdcontour.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmppxa2mqxe/test_grdcontour.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmppxa2mqxe/test_grdcontour-failed-diff.png
  Tolerance:
    2
_________________________________________________ test_grdcontour_labels __________________________________________________
Error: Image files did not match.
  RMS Value: 17.560924700702785
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp5xc7ion9/baseline-test_grdcontour_labels.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp5xc7ion9/test_grdcontour_labels.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp5xc7ion9/test_grdcontour_labels-failed-diff.png
  Tolerance:
    2
__________________________________________________ test_grdcontour_slice __________________________________________________

args = (), kwargs = {}, baseline_dir = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x17cb938d0>, filename = 'test_grdcontour_slice.png'
result_dir = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpcfwz2co1'
test_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpcfwz2co1/test_grdcontour_slice.png'
baseline_image_ref = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline/test_grdcontour_slice.png'
baseline_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpcfwz2co1/baseline-test_grdcontour_slice.png'

E           matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (313, 1801, 3) actual size (313, 1800, 3)

/usr/local/lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
______________________________________________________ test_grdimage ______________________________________________________
Error: Image files did not match.
  RMS Value: 116.17132352906128
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpztqtvpp7/baseline-test_grdimage.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpztqtvpp7/test_grdimage.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpztqtvpp7/test_grdimage-failed-diff.png
  Tolerance:
    2
___________________________________________________ test_grdimage_slice ___________________________________________________
Error: Image files did not match.
  RMS Value: 124.08359053217427
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp9a0j3nfg/baseline-test_grdimage_slice.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp9a0j3nfg/test_grdimage_slice.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp9a0j3nfg/test_grdimage_slice-failed-diff.png
  Tolerance:
    2
_______________________________________________________ test_image ________________________________________________________
Error: Image files did not match.
  RMS Value: 10.05261203177259
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpbzdas77o/baseline-test_image.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpbzdas77o/test_image.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpbzdas77o/test_image-failed-diff.png
  Tolerance:
    2
___________________________________________________ test_legend_entries ___________________________________________________
Error: Image files did not match.
  RMS Value: 2.225655725930977
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpoa22izko/baseline-test_legend_entries.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpoa22izko/test_legend_entries.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpoa22izko/test_legend_entries-failed-diff.png
  Tolerance:
    2
__________________________________________________ test_legend_specfile ___________________________________________________
Error: Image files did not match.
  RMS Value: 8.411212531618183
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp7kv5rw3c/baseline-test_legend_specfile.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp7kv5rw3c/test_legend_specfile.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp7kv5rw3c/test_legend_specfile-failed-diff.png
  Tolerance:
    2
________________________________________________________ test_logo ________________________________________________________

args = (), kwargs = {}, baseline_dir = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x188a7cc50>, filename = 'test_logo.png'
result_dir = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp4kxudl4z'
test_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp4kxudl4z/test_logo.png'
baseline_image_ref = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline/test_logo.png'
baseline_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp4kxudl4z/baseline-test_logo.png'

E           matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (304, 600, 3) actual size (288, 600, 3)

/usr/local/lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
___________________________________________________ test_logo_on_a_map ____________________________________________________
Error: Image files did not match.
  RMS Value: 12.156992646205023
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpie5xhah9/baseline-test_logo_on_a_map.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpie5xhah9/test_logo_on_a_map.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpie5xhah9/test_logo_on_a_map-failed-diff.png
  Tolerance:
    2
__________________________________________________ test_plot_projection ___________________________________________________
Error: Image files did not match.
  RMS Value: 11.060085334359124
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpm_tqq6ja/baseline-test_plot_projection.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpm_tqq6ja/test_plot_projection.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpm_tqq6ja/test_plot_projection-failed-diff.png
  Tolerance:
    2
_______________________________________________ test_plot_colors_sizes_proj _______________________________________________
Error: Image files did not match.
  RMS Value: 9.358189453913978
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp5dkrfmpa/baseline-test_plot_colors_sizes_proj.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp5dkrfmpa/test_plot_colors_sizes_proj.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp5dkrfmpa/test_plot_colors_sizes_proj-failed-diff.png
  Tolerance:
    2
____________________________________________________ test_plot_matrix _____________________________________________________
Error: Image files did not match.
  RMS Value: 9.62815950893391
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpg1irqgoz/baseline-test_plot_matrix.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpg1irqgoz/test_plot_matrix.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpg1irqgoz/test_plot_matrix-failed-diff.png
  Tolerance:
    2
______________________________________________ test_text_single_line_of_text ______________________________________________

args = (), kwargs = {'projection': 'x4i', 'region': [0, 5, 0, 2.5]}
baseline_dir = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline', baseline_remote = False
fig = <pygmt.figure.Figure object at 0x17cbcd810>, filename = 'test_text_single_line_of_text.png'
result_dir = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpgztm09ei'
test_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpgztm09ei/test_text_single_line_of_text.png'
baseline_image_ref = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline/test_text_single_line_of_text.png'
baseline_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpgztm09ei/baseline-test_text_single_line_of_text.png'

E           matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (52, 426, 3) actual size (55, 426, 3)

/usr/local/lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
____________________________________________ test_text_multiple_lines_of_text _____________________________________________

args = (), kwargs = {'projection': 'x4i', 'region': [0, 5, 0, 2.5]}
baseline_dir = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline', baseline_remote = False
fig = <pygmt.figure.Figure object at 0x13919e1d0>, filename = 'test_text_multiple_lines_of_text.png'
result_dir = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpw_t0h5n5'
test_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpw_t0h5n5/test_text_multiple_lines_of_text.png'
baseline_image_ref = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline/test_text_multiple_lines_of_text.png'
baseline_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpw_t0h5n5/baseline-test_text_multiple_lines_of_text.png'

E           matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (412, 977, 3) actual size (415, 977, 3)

/usr/local/lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
_____________________________________________ test_text_input_single_filename _____________________________________________
Error: Image files did not match.
  RMS Value: 10.659328202724806
  Expected:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp72ixj8ao/baseline-test_text_input_single_filename.png
  Actual:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp72ixj8ao/test_text_input_single_filename.png
  Difference:
    /var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp72ixj8ao/test_text_input_single_filename-failed-diff.png
  Tolerance:
    2
___________________________________________ test_text_input_multiple_filenames ____________________________________________

args = (), kwargs = {}, baseline_dir = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x188a82590>
filename = 'test_text_input_multiple_filenames.png'
result_dir = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp53guso9t'
test_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp53guso9t/test_text_input_multiple_filenames.png'
baseline_image_ref = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline/test_text_input_multiple_filenames.png'
baseline_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmp53guso9t/baseline-test_text_input_multiple_filenames.png'

E           matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (1541, 1601, 3) actual size (1527, 1601, 3)

/usr/local/lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
___________________________________________________ test_text_font_bold ___________________________________________________

args = (), kwargs = {'projection': 'x4i', 'region': [0, 5, 0, 2.5]}
baseline_dir = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline', baseline_remote = False
fig = <pygmt.figure.Figure object at 0x188af0650>, filename = 'test_text_font_bold.png'
result_dir = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpn5r02kqc'
test_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpn5r02kqc/test_text_font_bold.png'
baseline_image_ref = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline/test_text_font_bold.png'
baseline_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpn5r02kqc/baseline-test_text_font_bold.png'

E           matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (52, 263, 3) actual size (55, 263, 3)

/usr/local/lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
_______________________________________ test_text_justify_bottom_right_and_top_left _______________________________________

args = (), kwargs = {'projection': 'x4i', 'region': [0, 5, 0, 2.5]}
baseline_dir = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline', baseline_remote = False
fig = <pygmt.figure.Figure object at 0x13914bc50>, filename = 'test_text_justify_bottom_right_and_top_left.png'
result_dir = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpdmmjhztp'
test_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpdmmjhztp/test_text_justify_bottom_right_and_top_left.png'
baseline_image_ref = '/Users/lunokhod/SphericalHarmonics/pygmt/pygmt/tests/baseline/test_text_justify_bottom_right_and_top_left.png'
baseline_image = '/var/folders/9v/k2l3cpqs1wn56ktkr4l5zncr0000gn/T/tmpdmmjhztp/baseline-test_text_justify_bottom_right_and_top_left.png'

    @wraps(item.function)
    def item_function_wrapper(*args, **kwargs):

        baseline_dir = compare.kwargs.get('baseline_dir', None)
        if baseline_dir is None:
            if self.baseline_dir is None:
                baseline_dir = os.path.join(os.path.dirname(item.fspath.strpath), 'baseline')
            else:
                baseline_dir = self.baseline_dir
            baseline_remote = False
        else:
            baseline_remote = baseline_dir.startswith(('http://', 'https://'))
            if not baseline_remote:
                baseline_dir = os.path.join(os.path.dirname(item.fspath.strpath), baseline_dir)

        with plt.style.context(style, after_reset=True), switch_backend(backend):

            # Run test and get figure object
            if inspect.ismethod(original):  # method
                # In some cases, for example if setup_method is used,
                # original appears to belong to an instance of the test
                # class that is not the same as args[0], and args[0] is the
                # one that has the correct attributes set up from setup_method
                # so we ignore original.__self__ and use args[0] instead.
                fig = original.__func__(*args, **kwargs)
            else:  # function
                fig = original(*args, **kwargs)

            if remove_text:
                remove_ticks_and_titles(fig)

            # Find test name to use as plot name
            filename = compare.kwargs.get('filename', None)
            if filename is None:
                filename = item.name + '.png'
                filename = filename.replace('[', '_').replace(']', '_')
                filename = filename.replace('/', '_')
                filename = filename.replace('_.png', '.png')

            # What we do now depends on whether we are generating the
            # reference images or simply running the test.
            if self.generate_dir is None:

                # Save the figure
                result_dir = tempfile.mkdtemp(dir=self.results_dir)
                test_image = os.path.abspath(os.path.join(result_dir, filename))

                fig.savefig(test_image, **savefig_kwargs)
                close_mpl_figure(fig)

                # Find path to baseline image
                if baseline_remote:
                    baseline_image_ref = _download_file(baseline_dir, filename)
                else:
                    baseline_image_ref = os.path.abspath(os.path.join(os.path.dirname(item.fspath.strpath), baseline_dir, filename))

                if not os.path.exists(baseline_image_ref):
                    pytest.fail("Image file not found for comparison test in: "
                                "\n\t{baseline_dir}"
                                "\n(This is expected for new tests.)\nGenerated Image: "
                                "\n\t{test}".format(baseline_dir=baseline_dir, test=test_image), pytrace=False)

                # distutils may put the baseline images in non-accessible places,
                # copy to our tmpdir to be sure to keep them in case of failure
                baseline_image = os.path.abspath(os.path.join(result_dir, 'baseline-' + filename))
                shutil.copyfile(baseline_image_ref, baseline_image)

>               msg = compare_images(baseline_image, test_image, tol=tolerance)

/usr/local/lib/python3.7/site-packages/pytest_mpl/plugin.py:275:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.7/site-packages/matplotlib/testing/compare.py:458: in compare_images
    rms = calculate_rms(expected_image, actual_image)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

expected_image = array([[[255, 255, 255],
        [255, 255, 255],
        [255, 255, 255],
        ...,
        [255, 255, 255],
     ...[255, 255, 255],
        ...,
        [255, 255, 255],
        [255, 255, 255],
        [255, 255, 255]]], dtype=int16)
actual_image = array([[[255, 255, 255],
        [255, 255, 255],
        [255, 255, 255],
        ...,
        [255, 255, 255],
     ...[255, 255, 255],
        ...,
        [255, 255, 255],
        [255, 255, 255],
        [255, 255, 255]]], dtype=int16)

    def calculate_rms(expected_image, actual_image):
        "Calculate the per-pixel errors, then compute the root mean square error."
        if expected_image.shape != actual_image.shape:
            raise ImageComparisonFailure(
                "Image sizes do not match expected size: {} "
>               "actual size {}".format(expected_image.shape, actual_image.shape))
E           matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (89, 969, 3) actual size (102, 968, 3)

/usr/local/lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
==================================================== warnings summary =====================================================
/usr/local/lib/python3.7/site-packages/_pytest/mark/structures.py:325
  /usr/local/lib/python3.7/site-packages/_pytest/mark/structures.py:325: PytestUnknownMarkWarning: Unknown pytest.mark.mpl_image_compare - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/latest/mark.html
    PytestUnknownMarkWarning,

-- Docs: https://docs.pytest.org/en/latest/warnings.html
============================ 43 failed, 156 passed, 1 skipped, 1 warnings in 65.67s (0:01:05) =============================
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-2-3b88ed9e84c1> in <module>
----> 1 pygmt.test()

~/SphericalHarmonics/pygmt/pygmt/__init__.py in test(doctest, verbose, coverage, figures)
     97     args.append(package)
     98     status = pytest.main(args)
---> 99     assert status == 0, "Some tests have failed."

AssertionError: Some tests have failed.
weiji14 commented 5 years ago

The failures are due to the generated images being slightly different from the baseline images, which guards us against possible regressions/changes to libraries upstream. It's not a major issue, but I thought I had fixed them recently in #352, though I was using Linux and maybe there are some differences between Linux and macOS? There have been some recent updates to the earth relief grids, so you might need to clear the cache using gmt clear data, which should hopefully fix the grdimage errors (but it'll take some time to redownload the files).

I see you're using GMT 6.0.0, but could you try to find out which version of pygmt you've installed? Do something like pip list | grep pygmt, or git describe in the local pygmt repository.
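
For reference, a minimal check from within Python itself (this assumes pygmt exposes a __version__ attribute, which installs from recent commits should):

import pygmt

# Print the installed pygmt version; for a development checkout the string
# usually also records which commit the install was built from.
print(pygmt.__version__)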

MarkWieczorek commented 5 years ago

I tried gmt clear data and reran the tests, but got the same result.

I'm working off of the last commit on master.

weiji14 commented 5 years ago

Could you try to find a good example of the failed images under /var/folders, and post the expected, actual, and difference PNG files? I just want to see what the actual difference is.
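
If it helps, a rough sketch for locating them (this just globs the system temporary directory, which on macOS resolves to a path under /var/folders; the *-failed-diff.png suffix is added by matplotlib's compare_images):

import glob
import os
import tempfile

# pytest-mpl saves each test's images in its own temporary directory, so a
# recursive search of the system temp location should turn up the expected,
# actual, and difference PNGs for the failed tests.
pattern = os.path.join(tempfile.gettempdir(), "**", "*failed-diff.png")
for path in glob.glob(pattern, recursive=True):
    print(path)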

MarkWieczorek commented 5 years ago

Here is test_grdimage-failed-diff.png

[image: test_grdimage-failed-diff.png]

MarkWieczorek commented 5 years ago

And here is test_basemap-failed-diff.png

[image: test_basemap-failed-diff.png]

weiji14 commented 5 years ago

Ok, we're hitting similar test failures in #293 after merging in changes from master, but I don't quite understand why the other PRs that are up to date with master don't have failing tests. Edit: I've looked more closely at the errors and they are totally unrelated. I can't quite make sense of the grdimage difference though, as it seems that only the landmasses are affected... Do you have a non-grid example? E.g. from text.

MarkWieczorek commented 5 years ago

[image: test_text_input_single_filename-failed-diff.png]

weiji14 commented 5 years ago

Could you post your Ghostscript version using gs --version? This reminds me of a similar issue back in #315. Mine is 9.27. Edit: sorry, maybe use conda list | grep ghostscript instead, which gives me 9.22.

MarkWieczorek commented 5 years ago
> which gs
/usr/local/bin/gs
> ls -al /usr/local/bin/gs
lrwxr-xr-x  1 lunokhod  admin  33 Nov  4 13:05 /usr/local/bin/gs@ -> ../Cellar/ghostscript/9.50/bin/gs
> gs --version
9.50
weiji14 commented 5 years ago

Version 9.50, aye. Right, I think they're trying to get that to work upstream at https://github.com/GenericMappingTools/gmt/pull/1993...

seisman commented 5 years ago

GenericMappingTools/gmt#1993 is trying to include gs 9.50 in the macOS bundle. It's not related to this issue.

PaulWessel commented 5 years ago

FYI, the earth_relief_xxy grids have all changed (except 01s and 03s), so if your original baseline plots predate the new releases, new ones will be needed.
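
If the baselines do need refreshing, here is a rough sketch of how that could be done with pytest-mpl's generate mode (the paths and the test module name are illustrative, assuming the command is run from the repository root):

import pytest

# With --mpl-generate-path, pytest-mpl writes fresh reference images instead
# of comparing against the existing ones; review the new PNGs before
# committing them as baselines.
pytest.main([
    "--mpl-generate-path=pygmt/tests/baseline",
    "pygmt/tests/test_grdimage.py",
])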

weiji14 commented 5 years ago

@MarkWieczorek, could you try again with pygmt.test(figures=False)? If it all passes, then I think we'll close this issue, as the image differences are rather minor and can safely be ignored.

MarkWieczorek commented 5 years ago

Ignoring the figures, I now only get one failure:

======================================================== FAILURES ========================================================
_____________________________________________ test_call_module_error_message _____________________________________________

    def test_call_module_error_message():
        "Check is the GMT error message was captured."
        with clib.Session() as lib:
            try:
>               lib.call_module("info", "bogus-data.bla")

pygmt/tests/test_clib.py:210:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <pygmt.clib.session.Session object at 0x15807b390>, module = 'info', args = 'bogus-data.bla'

    def call_module(self, module, args):
        """
        Call a GMT module with the given arguments.

        Makes a call to ``GMT_Call_Module`` from the C API using mode
        ``GMT_MODULE_CMD`` (arguments passed as a single string).

        Most interactions with the C API are done through this function.

        Parameters
        ----------
        module : str
            Module name (``'coast'``, ``'basemap'``, etc).
        args : str
            String with the command line arguments that will be passed to the
            module (for example, ``'-R0/5/0/10 -JM'``).

        Raises
        ------
        GMTCLibError
            If the returned status code of the function is non-zero.

        """
        c_call_module = self.get_libgmt_func(
            "GMT_Call_Module",
            argtypes=[ctp.c_void_p, ctp.c_char_p, ctp.c_int, ctp.c_void_p],
            restype=ctp.c_int,
        )

        mode = self["GMT_MODULE_CMD"]
        status = c_call_module(
            self.session_pointer, module.encode(), mode, args.encode()
        )
        if status != 0:
            raise GMTCLibError(
                "Module '{}' failed with status code {}:\n{}".format(
>                   module, status, self._error_message
                )
            )
E           pygmt.exceptions.GMTCLibError: Module 'info' failed with status code 71:
E           pygmt-session [ERROR]: GMT_COMPATIBILITY: Expects values from 6 to 6; reset to 6.
E           gmtinfo [ERROR]: Error for input file: No such file (bogus-data.bla)

pygmt/clib/session.py:489: GMTCLibError

During handling of the above exception, another exception occurred:

    def test_call_module_error_message():
        "Check is the GMT error message was captured."
        with clib.Session() as lib:
            try:
                lib.call_module("info", "bogus-data.bla")
            except GMTCLibError as error:
                msg = "\n".join(
                    [
                        "Module 'info' failed with status code 71:",
                        "gmtinfo [ERROR]: Error for input file: No such file (bogus-data.bla)",
                    ]
                )
>               assert str(error) == msg
E               assert "Module 'info...gus-data.bla)" == "Module 'info...gus-data.bla)"
E                   Module 'info' failed with status code 71:
E                 - pygmt-session [ERROR]: GMT_COMPATIBILITY: Expects values from 6 to 6; reset to 6.
E                   gmtinfo [ERROR]: Error for input file: No such file (bogus-data.bla)

pygmt/tests/test_clib.py:218: AssertionError
-------------------------------------------------- Captured stderr call --------------------------------------------------
pygmt-session [ERROR]: GMT_COMPATIBILITY: Expects values from 6 to 6; reset to 6.
gmtinfo [ERROR]: Error for input file: No such file (bogus-data.bla)
weiji14 commented 5 years ago

Ok, I think that's related to your issue in #365.

MarkWieczorek commented 5 years ago

If I set GMT_COMPATIBILITY = 6 in the .gmt/gmt.conf file, this does indeed pass.
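
In case it helps others, a small sketch of that workaround from Python (the ~/.gmt/gmt.conf location is assumed from the comment above; adjust the path if your configuration lives elsewhere):

from pathlib import Path

# Append the compatibility setting to the user-level GMT configuration file,
# creating the ~/.gmt directory first if it does not exist yet.
conf = Path.home() / ".gmt" / "gmt.conf"
conf.parent.mkdir(exist_ok=True)
with conf.open("a") as f:
    f.write("GMT_COMPATIBILITY = 6\n")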

weiji14 commented 5 years ago

Happy to close this then?

MarkWieczorek commented 5 years ago

Sure, but perhaps you should make figures=False the default until this is worked out.
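
For what it's worth, a minimal sketch of what that default change might look like, assuming test() simply forwards a --mpl flag to pytest-mpl when figure comparisons are wanted (the argument handling here is illustrative, not pygmt's exact code):

import os

import pytest


def test(doctest=True, verbose=True, coverage=False, figures=False):
    "Run the pygmt test suite; image comparisons are now opt-in via figures=True."
    package = os.path.dirname(__file__)
    args = []
    if verbose:
        args.append("-v")
    if doctest:
        args.append("--doctest-modules")
    if coverage:
        args.append("--cov=" + package)
    if figures:
        # pytest-mpl only runs the image comparisons when --mpl is passed.
        args.append("--mpl")
    args.append(package)
    status = pytest.main(args)
    assert status == 0, "Some tests have failed."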