Closed — tjcrone closed this issue 4 years ago.

Hello. I am getting some sort of segmentation-type error when running the points example:

I'm on a Pangeo system, and the output of conda list is:

Any thoughts on what might be causing this? Thank you for any help you can provide!
I can't reproduce your corruption locally. You can add V='d' to basemap and plot to print more debug messages, e.g.:
fig.basemap(region=region, projection="X6i", frame=True, V='d')
fig.plot(x, y, style="i0.5c", color="black", V='d')
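For reference, a self-contained version of that snippet; the x/y values and the savefig call are placeholder assumptions (the original data is not shown in the thread), while the region matches the one in the debug log below:

import numpy as np
import pygmt

# Placeholder data standing in for the original points example;
# the region matches the debug log (150/240/-10/60).
x = np.array([160.0, 200.0, 230.0])
y = np.array([0.0, 25.0, 50.0])
region = [150, 240, -10, 60]

fig = pygmt.Figure()
fig.basemap(region=region, projection="X6i", frame=True, V="d")
fig.plot(x, y, style="i0.5c", color="black", V="d")
fig.savefig("points.png")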
Thank you for the help @seisman. Here is the output with V='d':
plot [DEBUG]: Look for file 150/240/-10/60 in /home/jovyan/.gmt
plot [DEBUG]: Look for file 150/240/-10/60 in /home/jovyan/.gmt/cache
plot [DEBUG]: Look for file 150/240/-10/60 in /home/jovyan/.gmt/server
plot [DEBUG]: Look for file 150/240/-10/60 in /home/jovyan/.gmt/server/srtm1
plot [DEBUG]: Look for file 150/240/-10/60 in /home/jovyan/.gmt/server/srtm3
plot [DEBUG]: Got regular w/e/s/n for region (150/240/-10/60)
plot [INFORMATION]: Processing input table data
plot [DEBUG]: Operation will require 2 input columns [n_cols_start = 2]
plot [DEBUG]: Projected values in meters: 150 240 -10 60
plot [INFORMATION]: Map scale is 0.00590551 km per cm or 1:590.551.
plot [DEBUG]: Running in PS mode modern
plot [DEBUG]: Use PS filename /home/jovyan/.gmt/sessions/gmt6.4949/gmt_1.ps-
plot [DEBUG]: Append to hidden PS file /home/jovyan/.gmt/sessions/gmt6.4949/gmt_1.ps-
plot [DEBUG]: Got session name as pygmt-session and default graphics formats as pdf
plot [DEBUG]: api_init_import: Passed family = Data Table and geometry = Point
psxy (api_init_import): tried to free unallocated memory
plot [DEBUG]: api_init_import: Added 1 new sources
plot [DEBUG]: GMT_Init_IO: Returned first Input object ID = 0
plot [DEBUG]: GMT_Begin_IO: Mode value 1 not considered (ignored)
plot [DEBUG]: GMT_Begin_IO: Initialize record-by-record access for Input
plot [DEBUG]: api_next_io_source: Selected object 0
plot [INFORMATION]: Reading Data Table from Input memory location via vector
plot [DEBUG]: GMT_Begin_IO: Input resource access is now enabled [record-by-record]
plot [DEBUG]: GMT_End_IO: Input resource access is now disabled
plot [DEBUG]: Current size of half-baked PS file /home/jovyan/.gmt/sessions/gmt6.4949/gmt_1.ps- = 26061.
psxy (gmt_freepen): tried to free unallocated memory
psxy (gmt_freepen): tried to free unallocated memory
psxy (gmtlib_free_tmp_arrays): tried to free unallocated memory
I ran pygmt.test() and got a core dump on open_virtual_file:
(notebook) jovyan@jupyter-tjcrone:~$ python
Python 3.7.6 | packaged by conda-forge | (default, Jan 7 2020, 22:33:48)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygmt
>>> pygmt.test()
Loaded libgmt:
binary dir: /srv/conda/envs/notebook/bin
cores: 2
grid layout: rows
library path: /srv/conda/envs/notebook/lib/libgmt.so
padding: 2
plugin dir: /srv/conda/envs/notebook/lib/gmt/plugins
share dir: /srv/conda/envs/notebook/share/gmt
version: 6.0.0
================================================== test session starts ===================================================
platform linux -- Python 3.7.6, pytest-5.3.5, py-1.8.1, pluggy-0.12.0 -- /srv/conda/envs/notebook/bin/python
cachedir: .pytest_cache
Matplotlib: 3.1.3
Freetype: 2.10.0
rootdir: /home/jovyan
plugins: mpl-0.11
collected 212 items
base_plotting.py::pygmt.base_plotting.BasePlotting._preprocess <- ../../srv/conda/envs/notebook/lib/python3.7/site-packages/pygmt/base_plotting.py PASSED [ 0%]
figure.py::pygmt.figure.Figure <- ../../srv/conda/envs/notebook/lib/python3.7/site-packages/pygmt/figure.py PASSED [ 0%]
clib/conversion.py::pygmt.clib.conversion._as_array <- ../../srv/conda/envs/notebook/lib/python3.7/site-packages/pygmt/clib/conversion.py PASSED [ 1%]
clib/conversion.py::pygmt.clib.conversion.as_c_contiguous <- ../../srv/conda/envs/notebook/lib/python3.7/site-packages/pygmt/clib/conversion.py PASSED [ 1%]
clib/conversion.py::pygmt.clib.conversion.dataarray_to_matrix <- ../../srv/conda/envs/notebook/lib/python3.7/site-packages/pygmt/clib/conversion.py PASSED [ 2%]
clib/conversion.py::pygmt.clib.conversion.kwargs_to_ctypes_array <- ../../srv/conda/envs/notebook/lib/python3.7/site-packages/pygmt/clib/conversion.py PASSED [ 2%]
clib/conversion.py::pygmt.clib.conversion.vectors_to_arrays <- ../../srv/conda/envs/notebook/lib/python3.7/site-packages/pygmt/clib/conversion.py PASSED [ 3%]
clib/session.py::pygmt.clib.session.Session <- ../../srv/conda/envs/notebook/lib/python3.7/site-packages/pygmt/clib/session.py PASSED [ 3%]
clib/session.py::pygmt.clib.session.Session._check_dtype_and_dim <- ../../srv/conda/envs/notebook/lib/python3.7/site-packages/pygmt/clib/session.py PASSED [ 4%]
clib/session.py::pygmt.clib.session.Session.extract_region <- ../../srv/conda/envs/notebook/lib/python3.7/site-packages/pygmt/clib/session.py PASSED [ 4%]
clib/session.py::pygmt.clib.session.Session.get_libgmt_func <- ../../srv/conda/envs/notebook/lib/python3.7/site-packages/pygmt/clib/session.py PASSED [ 5%]
clib/session.py::pygmt.clib.session.Session.open_virtual_file <- ../../srv/conda/envs/notebook/lib/python3.7/site-packages/pygmt/clib/session.py Fatal Python error: Aborted
Current thread 0x00007fa769b6b740 (most recent call first):
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/pygmt/clib/session.py", line 403 in destroy
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/pygmt/clib/session.py", line 196 in __exit__
File "<doctest pygmt.clib.session.Session.open_virtual_file[5]>", line 19 in <module>
File "/srv/conda/envs/notebook/lib/python3.7/doctest.py", line 1329 in __run
File "/srv/conda/envs/notebook/lib/python3.7/doctest.py", line 1475 in run
File "/srv/conda/envs/notebook/lib/python3.7/doctest.py", line 1836 in run
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/_pytest/doctest.py", line 235 in runtest
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/_pytest/runner.py", line 134 in pytest_runtest_call
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/pluggy/manager.py", line 81 in <lambda>
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/pluggy/manager.py", line 87 in _hookexec
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/pluggy/hooks.py", line 289 in __call__
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/_pytest/runner.py", line 210 in <lambda>
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/_pytest/runner.py", line 237 in from_call
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/_pytest/runner.py", line 210 in call_runtest_hook
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/_pytest/runner.py", line 185 in call_and_report
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/_pytest/runner.py", line 99 in runtestprotocol
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/_pytest/runner.py", line 84 in pytest_runtest_protocol
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/pluggy/manager.py", line 81 in <lambda>
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/pluggy/manager.py", line 87 in _hookexec
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/pluggy/hooks.py", line 289 in __call__
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/_pytest/main.py", line 271 in pytest_runtestloop
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/pluggy/manager.py", line 81 in <lambda>
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/pluggy/manager.py", line 87 in _hookexec
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/pluggy/hooks.py", line 289 in __call__
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/_pytest/main.py", line 247 in _main
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/_pytest/main.py", line 197 in wrap_session
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/_pytest/main.py", line 240 in pytest_cmdline_main
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/pluggy/manager.py", line 81 in <lambda>
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/pluggy/manager.py", line 87 in _hookexec
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/pluggy/hooks.py", line 289 in __call__
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/_pytest/config/__init__.py", line 93 in main
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/pygmt/__init__.py", line 98 in test
File "<stdin>", line 1 in <module>
Aborted (core dumped)
I searched the Google machine for gmt "tried to free" memory and got quite a few hits. It seems like a common variety of bug across various GMT modules over the years. I wonder why I am getting it when you are not, @seisman. Is there an easy way to make sure I am using the same version of the code that you are using?
Something else I suspect is the Linux kernel we are on, or the distribution, or some other library. Any suggestions you have would be greatly appreciated! Thanks.
Hard to say why it crashes for you. pygmt is tested nightly on macOS and Linux, and we don't see any crashes. The pygmt.test() output says Fatal Python error: Aborted. Is it possible that something is wrong on the Python side (i.e., another broken package)?
Well, I'm pretty sure this is a segmentation fault in one of the modules. I think the key clue is:
psxy (gmt_freepen): tried to free unallocated memory
psxy (gmt_freepen): tried to free unallocated memory
psxy (gmtlib_free_tmp_arrays): tried to free unallocated memory
Would be great to know if there is a difference between the GMT library I have and the one you have. How can we check this?
Those are GMT debug messages; they don't necessarily mean GMT's memory is corrupted.
You can check the detailed version information with:
import pygmt
pygmt.print_clib_info()
Thanks! What is your output from that command?
@leouieda, any thoughts on this? It seems clear that psxy is trying to free unallocated memory, leading to a double free, and Python does not like that. The Python interpreter rarely crashes out like this, but it can when a C library does something bad with malloc. I wonder if the version of Python we are on is less lenient about this sort of thing than the one you are using for your tests. I am hoping to determine which versions (i.e., builds) of the GMT library and of Python you are using. Can you help with this? Thanks!
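To make the mechanism concrete, here is a minimal sketch (a hypothetical illustration, not pygmt or GMT code) of how a double free inside a loaded C library takes the whole interpreter down with "Aborted (core dumped)" instead of a Python exception:

import ctypes

# WARNING: running this crashes the Python process on purpose.
# glibc detects the double free and raises SIGABRT, killing the
# interpreter outright -- the same failure mode as the psxy crash.
libc = ctypes.CDLL(None)               # handle to the already-loaded libc (Linux)
libc.malloc.restype = ctypes.c_void_p  # keep the 64-bit pointer intact
libc.free.argtypes = [ctypes.c_void_p]

buf = libc.malloc(16)
libc.free(buf)
libc.free(buf)  # double free -> SIGABRT, process aborts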
This is my output:
Loaded libgmt:
binary dir: /Users/seisman/.anaconda/envs/pygmt/bin
cores: 8
grid layout: rows
library path: /Users/seisman/.anaconda/envs/pygmt/lib/libgmt.dylib
padding: 2
plugin dir: /Users/seisman/.anaconda/envs/pygmt/lib/gmt/plugins
share dir: /Users/seisman/.anaconda/envs/pygmt/share/gmt
version: 6.0.0
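To compare the Python builds as well (the other half of the question above), a trivial sketch:

import sys

# Prints the interpreter's full build string, matching the banner shown
# earlier in the thread, e.g. "3.7.6 | packaged by conda-forge | ...".
print(sys.version)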
@tjcrone ouch. I love how interfacing with C code keeps coming back to bite us :-) The most frustrating part is that these crashes seem to be somewhat random.
I wonder if the version of Python we are on is not as lenient about stuff like this as the one you are using for your tests
That's probably not it since we test with the latest Python 3.7 from conda-forge.
I'm guessing you're running this on a Pangeo JupyterHub, right? Any idea what the underlying system is?
Thanks for the follow-up @leouieda. The underlying system of Pangeo is currently Ubuntu Bionic, but I don't think it's the kernel or the distribution. I think I might have narrowed it down to an installation of PyAV, which also uses Cython and may install incompatible libraries. I'll keep you posted if/when I find a fix. Thanks!
In case anyone is wondering, it was actually the pyinterp package that somehow includes something that makes gmt=6.0.0 segfault. I never got to the bottom of the issue; I just removed pyinterp from the OOI Pangeo notebook image, and it seems to be working okay. Thanks everyone for the help!
Thank you for posting, Tim! That's a strange bug and probably something we didn't specify well enough in the GMT build.