Open JervenBolleman opened 10 years ago
The function sequence in the gdb backtrace does not appear consistent, which is typically indicative of the binary used to read the core with gdb not being the one that created it. Can you check to ensure they are the same?
Sorry, I just realized the core dump got erased. We did, however, get a different core dump:
GNU gdb (GDB) Red Hat Enterprise Linux (7.2-60.el6_4.1)
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /scratch/uuw_sparql/triples/main/2014_06/virtuoso-t...done.
[New Thread 38148]
[New Thread 38153]
[New Thread 38157]
[New Thread 38155]
[New Thread 38216]
[New Thread 38149]
[New Thread 38150]
[New Thread 38132]
[New Thread 38213]
[New Thread 38217]
[New Thread 38146]
[New Thread 38158]
[New Thread 38154]
[New Thread 38152]
[New Thread 38212]
[New Thread 38151]
[New Thread 38215]
[New Thread 38144]
[New Thread 38147]
[New Thread 38214]
[New Thread 38219]
[New Thread 38218]
[New Thread 38145]
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Core was generated by `virtuoso-t -f +wait +configfile /scratch/uuw_sparql/tmp/virtuoso-config-expasy4'.
Program terminated with signal 11, Segmentation fault.
#0 0x0000000000afb0c9 in mp_box_dv_uname_nchars (mp=0xb8e9e9, buf=0xd3594019780 <Address 0xd3594019780 out of bounds>, buf_len=12120752) at Dkpool.c:556
556 Dkpool.c: No such file or directory.
in Dkpool.c
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.132.el6_5.1.x86_64
(gdb) bt
#0 0x0000000000afb0c9 in mp_box_dv_uname_nchars (mp=0xb8e9e9, buf=0xd3594019780 <Address 0xd3594019780 out of bounds>, buf_len=12120752) at Dkpool.c:556
#1 0x00000000004b460b in ceic_split_registered (ceic=0x19926bc8, rd=0x7f5883cdb5b0, buf=0x100000000, splits=0x1591000000142, n_splits=521, inx=9703) at colins.c:3312
#2 0x00000000004b632b in ceic_no_split (ceic=0x0, buf=0x0, action=0x7f5594002020) at colins.c:3707
#3 0x00000000004ba34a in itc_col_vec_insert (itc=0x7f588c118190, ins=0x7f5844e4f490) at colins.c:4664
#4 0x0000000000756d63 in rd_vec_cast (itc=0x7f5844e4f940, rd=0x1e14da5d00000000, col=0x0, icol=5, ins_mp=0x7f577d164600) at vecins.c:666
#5 0x00000000006c6ddf in table_source_input (ts=0x7f5763007618, inst=0x7f588c1183a0, state=0x7f588003fd90) at sqlrun.c:1869
#6 0x00000000006c7169 in table_source_input (ts=0x480000001000801, inst=0x0, state=0x0) at sqlrun.c:1915
#7 0x00000000006c17b4 in qst_swap_or_get_copy (state=0x28001006c982f, sl=0x7f588003d8b0, v=0x7f56dd96e688) at sqlrun.c:315
#8 0x00000000006cdaa5 in qn_anytime_state (qn=0x0, inst=0x0) at sqlrun.c:3641
#9 0x000000000078bc80 in sqlg_rdf_inf_1 (tb_dfe=0x7f5868000908, ts=0x7f588c119c60, q_head=0x7f56dd96e688, inxop_inx=32598) at rdfinf.c:2382
#10 0x000000000078bcca in sqlg_rdf_inf_1 (tb_dfe=0x0, ts=0x0, q_head=0x7f5844e4f730, inxop_inx=20) at rdfinf.c:2382
#11 0x0000000000458cd3 in aqt_other_aq (aqt=0x458cd3) at aqueue.c:93
#12 0x0000000000b05307 in PrpcSelfSignalInit (addr=0x458cd3 "H\205\300t\rH\213E\310H\211E\360", <incomplete sequence \351\201>) at Dkernel.c:3406
#13 0x0000003c4d0079d1 in ?? ()
#14 0x00007f588c11a700 in ?? ()
#15 0x0000000000000000 in ?? ()
OK, but is this backtrace from another occurrence where a query was being executed concurrently?
Does this only occur with the build from the https://github.com/openlink/virtuoso-opensource/commit/fc46405790e4f0aef26be3109ade96cde78face7 tree indicated?
For this backtrace, can you try using the "up" command to move back through the frames until the "qi" variable is in scope, and then use the following command to print the query being executed at this point:
print qi->qi_query->qr_text
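For example, the session would look roughly like this (the frame number is illustrative; "qi" is only in scope in frames whose function takes or declares a query instance):

```gdb
# list all frames
(gdb) bt
# move one frame outward; repeat until a frame that declares "qi" appears
(gdb) up
# or jump straight to a frame by number (5 here is illustrative)
(gdb) frame 5
# print the SQL text of the query being executed
(gdb) print qi->qi_query->qr_text
```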
The second backtrace is without a concurrent query being executed; it happened during a plain data load. I can go up the trace, but in this second version I don't see the "qi" variable. Should I send the backtrace and binary separately via mail so you can have a look yourself?
You would need to provide the Virtuoso binary used as well. I imagine the core would be quite large, so you would need to gzip it and upload it to an FTP server or other download location, as it would be too large to send by mail, I would expect.
The timestamp of the virtuoso executable changed, but the contents remained the same between the segfaulting process and the gdb run.
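For reference, one way to confirm that two binaries are byte-identical despite differing timestamps is to compare them with `cmp` or a checksum; a minimal sketch (the file names below are placeholders created for demonstration, not the actual paths from this report):

```shell
# Create two byte-identical files with different timestamps
# (stand-ins for the crashing binary and the one handed to gdb).
cp /bin/sh binary-at-crash
cp /bin/sh binary-for-gdb
touch binary-for-gdb   # timestamps now differ, contents do not

# cmp -s exits 0 only when the contents are byte-for-byte identical
if cmp -s binary-at-crash binary-for-gdb; then
    echo "contents identical"
else
    echo "contents differ"
fi

# A checksum comparison gives the same answer
md5sum binary-at-crash binary-for-gdb
```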