SemanticBeeng opened this issue 6 years ago
I suspect you never called Jep.close().
It is called under finally; see the second println.
My guess is that jep gets finalized after the thread terminated, and its use of isValidThread causes the crash: threadstate_getframe+0x0.
Does something like that make sense?
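For context, the use pattern is roughly the following. This is a minimal sketch reconstructed from the spec logs below, not the exact test code; the println placement and fixture shape are assumptions:

import jep.Jep

// Scala sketch: create a Jep instance per spec and close it under finally.
val jep = new Jep()
println(s"spec start run $jep on thread ${Thread.currentThread.getName}")
try {
  // ... evaluate the Python script here ...
} finally {
  jep.close() // close under finally, as described above
  println(s"spec end end $jep on thread ${Thread.currentThread.getName}")
}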
I'm not sure what you mean; isValidThread cannot be the cause of the crash since it makes no native calls, so it cannot call into libpython where the crash occurred.
What is the result of your printlns? Is close() getting called on the same thread as creation? Do you see any of the warnings that are here or here?
If you can include the java frames and native frames from the hs_err_pid file that might be helpful.
Yes, same thread.
First run in sbt:
> test-only com.knowscieng.datalayer.AnalysisOrchestrationSpec
spec start run jep.Jep@65da5011 on thread specs2-3
>>>>>>> Extracted feature 100
1
spec end end jep.Jep@65da5011 on thread specs2-3
spec start run jep.Jep@2f95c790 on thread specs2-3
>>>>>>> Extracted feature 100
1
spec end end jep.Jep@2f95c790 on thread specs2-3
[info] AnalysisOrchestrationSpec
[info]
[info] orchestration spec should
[info] + extract some feature
[info] + run time analysis
[info]
[info]
[info] Total for specification AnalysisOrchestrationSpec
[info] Finished in 602 ms
[info] 2 examples, 0 failure, 0 error
[info]
[info] Passed: Total 2, Failed 0, Errors 0, Passed 2
Second run:
> test-only com.knowscieng.datalayer.AnalysisOrchestrationSpec
[info] [sbt-avro4s] Generating sources from [/development/projects/04_clients/knowscieng.io/foundation/domain/shared/.jvm/src/main/resources/avro]
[info] [sbt-avro4s] Generating sources from [/development/projects/04_clients/knowscieng.io/foundation/data-layer/src/main/resources/avro]
[info] --------------------------------------------------------------
[info] [sbt-avro4s] Found 0 schemas
[info] --------------------------------------------------------------
[info] [sbt-avro4s] Found 0 schemas
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007fe44bbf74e0, pid=17270, tid=0x00007fe3d02fe700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_131-b11) (build 1.8.0_131-b11)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.131-b11 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libpython3.6m.so+0x19d4e0] threadstate_getframe+0x0
I put another print before new Jep() and it is the only thing printed in the second run. So it looks like new Jep() causes the crash, a theory supported by the frames below, right? See initializePython.
From the PID log file (see more info after):
--------------- T H R E A D ---------------
Current thread (0x00007f6ed80f3800): JavaThread "Thread-2" daemon [_thread_in_native, id=14291, stack(0x00007f6ee4888000,0x00007f6ee4989000)]
siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x0000000000000018
Registers:
RAX=0x00007f6f602164e0, RBX=0x00007f6ee5a56d18, RCX=0x00007f6ee49a87a8, RDX=0x0000000000000073
RSP=0x00007f6ee49876c8, RBP=0x0000000000000000, RSI=0x0000000000000003, RDI=0x0000000000000000
R8 =0x00007f6ee49a87ab, R9 =0x00007f6ee5a56d4b, R10=0x0000000000000000, R11=0x00007f6f5f7ef5a0
R12=0x0000000000000000, R13=0x00007f6ee4d487e8, R14=0x00007f6ee5a56d18, R15=0x00007f6ed80f3800
RIP=0x00007f6f602164e0, EFLAGS=0x0000000000010202, CSGSFS=0x0000000000000033, ERR=0x0000000000000004
TRAPNO=0x000000000000000e
Top of Stack: (sp=0x00007f6ee49876c8)
0x00007f6ee49876c8: 00007f6f601dea69 0000000000000003
0x00007f6ee49876d8: 00007f6f60203872 000000000000007f
0x00007f6ee49876e8: 0000000000000000 00007f6ed80c3f20
0x00007f6ee49876f8: 00007f6ed80f39f8 00007f6ee5a56d18
0x00007f6ee4987708: 0000000000000000 0000000000000000
0x00007f6ee4987718: 00007f6ee4d487e8 00007f6ee4987830
0x00007f6ee4987728: 00007f6f60203b2a 0000000000000000
0x00007f6ee4987738: 0000000000000000 00007f6ed80f39f8
0x00007f6ee4987748: 00007f6ee4998af9 0000000000000000
0x00007f6ee4987758: 00007f6ee4d487e8 00007f6ed80f3800
0x00007f6ee4987768: 00007f6ee4d52138 00007f6ed80f3800
0x00007f6ee4987778: 00007f6f5e5f605f 00007f6ed80f3800
0x00007f6ee4987788: 00007f6ee4d487e8 00007f6ee4987810
0x00007f6ee4987798: 0000000000000000 00007f6ee4d487e8
0x00007f6ee49877a8: 00007f6ee4987830 00007f6ed80f3800
0x00007f6ee49877b8: 00007f6f49017a34 00007f6ed80f3800
0x00007f6ee49877c8: 00007f6f49017782 00007f6ee49877d0
0x00007f6ee49877d8: 00007f6ee4d487e8 00007f6ee4987830
0x00007f6ee49877e8: 00007f6ee4d4ed00 0000000000000000
0x00007f6ee49877f8: 00007f6ee4d487e8 0000000000000000
0x00007f6ee4987808: 00007f6ee4987830 00007f6ee4987878
0x00007f6ee4987818: 00007f6f490082bd 000000008ac8a9b8
0x00007f6ee4987828: 00007f6f49011538 0000000000000000
0x00007f6ee4987838: 00007f6ee4987838 00007f6ee4d4b8a1
0x00007f6ee4987848: 00007f6ee4987888 00007f6ee4d4ed00
0x00007f6ee4987858: 0000000000000000 00007f6ee4d4b8b0
0x00007f6ee4987868: 00007f6ee4987830 00007f6ee4987888
0x00007f6ee4987878: 00007f6ee49878d0 00007f6f490082bd
0x00007f6ee4987888: 0000000000000000 00007f6ee4987890
0x00007f6ee4987898: 00007f6ee4d51d33 00007f6ee4987910
0x00007f6ee49878a8: 00007f6ee4d51f48 0000000000000000
0x00007f6ee49878b8: 00007f6ee4d51e50 00007f6ee4987888
Instructions: (pc=0x00007f6f602164e0)
0x00007f6f602164c0: d9 7c 24 fe 0f b7 44 24 fe c3 66 0f 1f 44 00 00
0x00007f6f602164d0: 66 89 7c 24 fc d9 6c 24 fc c3 66 0f 1f 44 00 00
0x00007f6f602164e0: 48 8b 47 18 c3 66 66 2e 0f 1f 84 00 00 00 00 00
0x00007f6f602164f0: 41 54 bf 78 00 00 00 45 31 e4 55 53 e8 4f fd eb
Register to memory mapping:
RAX=0x00007f6f602164e0: <offset 0x19d4e0> in /development/bin/python/conda3/envs/knowscieng-dev-ds/lib/libpython3.6m.so at 0x00007f6f60079000
RBX=0x00007f6ee5a56d18 is an unknown value
RCX=0x00007f6ee49a87a8: <offset 0x1f7a8> in /tmp/sbt_5651c2ea/sbt_6005b2b/libjep.so at 0x00007f6ee4989000
RDX=0x0000000000000073 is an unknown value
RSP=0x00007f6ee49876c8 is pointing into the stack for thread: 0x00007f6ed80f3800
RBP=0x0000000000000000 is an unknown value
RSI=0x0000000000000003 is an unknown value
RDI=0x0000000000000000 is an unknown value
R8 =0x00007f6ee49a87ab: <offset 0x1f7ab> in /tmp/sbt_5651c2ea/sbt_6005b2b/libjep.so at 0x00007f6ee4989000
R9 =0x00007f6ee5a56d4b is an unknown value
R10=0x0000000000000000 is an unknown value
R11=0x00007f6f5f7ef5a0: <offset 0x1775a0> in /lib/x86_64-linux-gnu/libc.so.6 at 0x00007f6f5f678000
R12=0x0000000000000000 is an unknown value
R13={method} {0x00007f6ee4d487e8} 'initializePython' '([Ljava/lang/String;)V' in 'jep/Jep'
R14=0x00007f6ee5a56d18 is an unknown value
R15=0x00007f6ed80f3800 is a thread
Stack: [0x00007f6ee4888000,0x00007f6ee4989000], sp=0x00007f6ee49876c8, free space=1021k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C [libpython3.6m.so+0x19d4e0] threadstate_getframe+0x0
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j jep.Jep.initializePython([Ljava/lang/String;)V+0
j jep.Jep.access$100([Ljava/lang/String;)V+1
j jep.Jep$TopInterpreter$1.run()V+3
j java.lang.Thread.run()V+11
v ~StubRoutines::call_stub
--------------- P R O C E S S ---------------
Java Threads: ( => current thread )
=>0x00007f6ed80f3800 JavaThread "Thread-2" daemon [_thread_in_native, id=14291, stack(0x00007f6ee4888000,0x00007f6ee4989000)]
0x00007f6ec4006000 JavaThread "specs2.user-4" [_thread_blocked, id=14289, stack(0x00007f6ee4bb4000,0x00007f6ee4cb5000)]
0x00007f6ed80f1000 JavaThread "specs2-4" [_thread_blocked, id=14287, stack(0x00007f6ee60e4000,0x00007f6ee61e5000)]
0x00007f6edc001800 JavaThread "specs2.user-3" [_thread_blocked, id=14286, stack(0x00007f6ee66e8000,0x00007f6ee67e9000)]
0x00007f6ed0004800 JavaThread "specs2.user-2" [_thread_blocked, id=14285, stack(0x00007f6ee64e6000,0x00007f6ee65e7000)]
0x00007f6ed80f0000 JavaThread "specs2.user-1" [_thread_blocked, id=14284, stack(0x00007f6ee65e7000,0x00007f6ee66e8000)]
0x00007f6ed4003800 JavaThread "specs2-3" [_thread_blocked, id=14283, stack(0x00007f6ee61e5000,0x00007f6ee62e6000)]
0x00007f6ecc004800 JavaThread "specs2-2" [_thread_blocked, id=14282, stack(0x00007f6ee5fe3000,0x00007f6ee60e4000)]
0x00007f6efc9f0000 JavaThread "specs2-1" [_thread_blocked, id=14281, stack(0x00007f6ee5de1000,0x00007f6ee5ee2000)]
0x00007f6eec360000 JavaThread "pool-5-thread-7" [_thread_blocked, id=14280, stack(0x00007f6ee5ee2000,0x00007f6ee5fe3000)]
0x00007f6ee8005800 JavaThread "pool-5-thread-6" [_thread_blocked, id=14279, stack(0x00007f6f25189000,0x00007f6f2528a000)]
0x00007f6ee0bd3800 JavaThread "pool-5-thread-5" [_thread_blocked, id=14278, stack(0x00007f6f24f87000,0x00007f6f25088000)]
0x00007f6f59181000 JavaThread "pool-5-thread-4" [_thread_blocked, id=14277, stack(0x00007f6f24a84000,0x00007f6f24b85000)]
0x00007f6f59185800 JavaThread "pool-5-thread-3" [_thread_blocked, id=14276, stack(0x00007f6f24b85000,0x00007f6f24c86000)]
0x00007f6f59185000 JavaThread "pool-5-thread-2" [_thread_blocked, id=14275, stack(0x00007f6f24e86000,0x00007f6f24f87000)]
0x00007f6f593c5000 JavaThread "pool-5-thread-1" [_thread_blocked, id=14274, stack(0x00007f6f25088000,0x00007f6f25189000)]
0x00007f6edc01c800 JavaThread "Thread-1" daemon [_thread_blocked, id=14176, stack(0x00007f6ee5ab5000,0x00007f6ee5bb6000)]
0x00007f6f582c4800 JavaThread "process reaper" daemon [_thread_blocked, id=14130, stack(0x00007f6f240b9000,0x00007f6f240f2000)]
0x00007f6f58846000 JavaThread "process reaper" daemon [_thread_blocked, id=14094, stack(0x00007f6f269be000,0x00007f6f269f7000)]
0x00007f6f581d8800 JavaThread "Service Thread" daemon [_thread_blocked, id=14091, stack(0x00007f6f27621000,0x00007f6f27722000)]
0x00007f6f581cd800 JavaThread "C1 CompilerThread2" daemon [_thread_blocked, id=14090, stack(0x00007f6f27722000,0x00007f6f27823000)]
0x00007f6f581cb800 JavaThread "C2 CompilerThread1" daemon [_thread_in_native, id=14089, stack(0x00007f6f27823000,0x00007f6f27924000)]
0x00007f6f581c9000 JavaThread "C2 CompilerThread0" daemon [_thread_blocked, id=14088, stack(0x00007f6f27924000,0x00007f6f27a25000)]
0x00007f6f581c7000 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=14087, stack(0x00007f6f27a25000,0x00007f6f27b26000)]
0x00007f6f581c5800 JavaThread "Surrogate Locker Thread (Concurrent GC)" daemon [_thread_blocked, id=14086, stack(0x00007f6f27b26000,0x00007f6f27c27000)]
0x00007f6f58192000 JavaThread "Finalizer" daemon [_thread_blocked, id=14085, stack(0x00007f6f27eff000,0x00007f6f28000000)]
0x00007f6f5818d800 JavaThread "Reference Handler" daemon [_thread_blocked, id=14084, stack(0x00007f6f38092000,0x00007f6f38193000)]
0x00007f6f58009800 JavaThread "main" [_thread_blocked, id=14077, stack(0x00007f6f606a9000,0x00007f6f607aa000)]
Other Threads:
0x00007f6f58186000 VMThread [stack: 0x00007f6f38193000,0x00007f6f38294000] [id=14083]
0x00007f6f581db800 WatcherThread [stack: 0x00007f6f27520000,0x00007f6f27621000] [id=14092]
VM state:not at safepoint (normal execution)
VM Mutex/Monitor currently owned by a thread: None
Heap:
par new generation total 306688K, used 212433K [0x0000000080000000, 0x0000000094cc0000, 0x0000000094cc0000)
eden space 272640K, 65% used [0x0000000080000000, 0x000000008ae34430, 0x0000000090a40000)
from space 34048K, 100% used [0x0000000092b80000, 0x0000000094cc0000, 0x0000000094cc0000)
to space 34048K, 0% used [0x0000000090a40000, 0x0000000090a40000, 0x0000000092b80000)
concurrent mark-sweep generation total 183552K, used 124028K [0x0000000094cc0000, 0x00000000a0000000, 0x0000000100000000)
Metaspace used 73155K, capacity 74274K, committed 74516K, reserved 1110016K
class space used 14077K, capacity 14343K, committed 14416K, reserved 1048576K
Card table byte_map: [0x00007f6f5c517000,0x00007f6f5c918000] byte_map_base: 0x00007f6f5c117000
Marking Bits: (CMSBitMap*) 0x00007f6f58061a88
Bits: [0x00007f6f3a532000, 0x00007f6f3bfff000)
Mod Union Table: (CMSBitMap*) 0x00007f6f58061b48
Bits: [0x00007f6f6063d000, 0x00007f6f606a8340)
Polling page: 0x00007f6f607d8000
CodeCache: size=245760Kb used=20929Kb max_used=20939Kb free=224830Kb
bounds [0x00007f6f49000000, 0x00007f6f4a4a0000, 0x00007f6f58000000]
total_blobs=8487 nmethods=8003 adapters=394
compilation: enabled
Compilation events (10 events):
Event: 35.839 Thread 0x00007f6f581cd800 nmethod 9847 0x00007f6f4a490010 code [0x00007f6f4a490180, 0x00007f6f4a4902d0]
Event: 35.840 Thread 0x00007f6f581cd800 9859 1 org.specs2.main.Arguments::execute (5 bytes)
Event: 35.840 Thread 0x00007f6f581cd800 nmethod 9859 0x00007f6f4a489050 code [0x00007f6f4a4891a0, 0x00007f6f4a4892b0]
Event: 35.840 Thread 0x00007f6f581cd800 9844 1 org.specs2.control.eff.UnionAppendL::t (5 bytes)
Event: 35.840 Thread 0x00007f6f581cd800 nmethod 9844 0x00007f6f4a488d90 code [0x00007f6f4a488ee0, 0x00007f6f4a488ff0]
Event: 35.843 Thread 0x00007f6f581c9000 nmethod 9858 0x00007f6f4a499f10 code [0x00007f6f4a49a160, 0x00007f6f4a49abc8]
Event: 35.852 Thread 0x00007f6f581cd800 9860 3 sbt.classpath.NativeCopyLoader$$anonfun$3::apply (12 bytes)
Event: 35.853 Thread 0x00007f6f581cd800 nmethod 9860 0x00007f6f4a499590 code [0x00007f6f4a499740, 0x00007f6f4a499cf8]
Event: 35.853 Thread 0x00007f6f581cd800 9861 3 sbt.classpath.NativeCopyLoader$$anonfun$3::apply (34 bytes)
Event: 35.853 Thread 0x00007f6f581cd800 nmethod 9861 0x00007f6f4a499010 code [0x00007f6f4a4991a0, 0x00007f6f4a499478]
GC Heap History (10 events):
Event: 9.105 GC heap before
{Heap before GC invocations=4 (full 3):
par new generation total 306688K, used 284122K [0x0000000080000000, 0x0000000094cc0000, 0x0000000094cc0000)
eden space 272640K, 100% used [0x0000000080000000, 0x0000000090a40000, 0x0000000090a40000)
from space 34048K, 33% used [0x0000000090a40000, 0x0000000091576be8, 0x0000000092b80000)
to space 34048K, 0% used [0x0000000092b80000, 0x0000000092b80000, 0x0000000094cc0000)
concurrent mark-sweep generation total 183552K, used 27416K [0x0000000094cc0000, 0x00000000a0000000, 0x0000000100000000)
Metaspace used 50077K, capacity 50714K, committed 50836K, reserved 1089536K
class space used 9672K, capacity 9911K, committed 9936K, reserved 1048576K
Event: 9.167 GC heap after
Heap after GC invocations=5 (full 3):
par new generation total 306688K, used 34048K [0x0000000080000000, 0x0000000094cc0000, 0x0000000094cc0000)
eden space 272640K, 0% used [0x0000000080000000, 0x0000000080000000, 0x0000000090a40000)
from space 34048K, 100% used [0x0000000092b80000, 0x0000000094cc0000, 0x0000000094cc0000)
to space 34048K, 0% used [0x0000000090a40000, 0x0000000090a40000, 0x0000000092b80000)
concurrent mark-sweep generation total 183552K, used 37869K [0x0000000094cc0000, 0x00000000a0000000, 0x0000000100000000)
Metaspace used 50077K, capacity 50714K, committed 50836K, reserved 1089536K
class space used 9672K, capacity 9911K, committed 9936K, reserved 1048576K
}
Event: 10.393 GC heap before
{Heap before GC invocations=5 (full 3):
par new generation total 306688K, used 306688K [0x0000000080000000, 0x0000000094cc0000, 0x0000000094cc0000)
eden space 272640K, 100% used [0x0000000080000000, 0x0000000090a40000, 0x0000000090a40000)
from space 34048K, 100% used [0x0000000092b80000, 0x0000000094cc0000, 0x0000000094cc0000)
to space 34048K, 0% used [0x0000000090a40000, 0x0000000090a40000, 0x0000000092b80000)
concurrent mark-sweep generation total 183552K, used 37869K [0x0000000094cc0000, 0x00000000a0000000, 0x0000000100000000)
Metaspace used 50934K, capacity 51620K, committed 51860K, reserved 1091584K
class space used 9863K, capacity 10104K, committed 10192K, reserved 1048576K
Event: 10.467 GC heap after
Heap after GC invocations=6 (full 3):
par new generation total 306688K, used 26914K [0x0000000080000000, 0x0000000094cc0000, 0x0000000094cc0000)
eden space 272640K, 0% used [0x0000000080000000, 0x0000000080000000, 0x0000000090a40000)
from space 34048K, 79% used [0x0000000090a40000, 0x0000000092488b60, 0x0000000092b80000)
to space 34048K, 0% used [0x0000000092b80000, 0x0000000092b80000, 0x0000000094cc0000)
concurrent mark-sweep generation total 183552K, used 72776K [0x0000000094cc0000, 0x00000000a0000000, 0x0000000100000000)
Metaspace used 50934K, capacity 51620K, committed 51860K, reserved 1091584K
class space used 9863K, capacity 10104K, committed 10192K, reserved 1048576K
}
Event: 11.206 GC heap before
{Heap before GC invocations=6 (full 4):
par new generation total 306688K, used 299554K [0x0000000080000000, 0x0000000094cc0000, 0x0000000094cc0000)
eden space 272640K, 100% used [0x0000000080000000, 0x0000000090a40000, 0x0000000090a40000)
from space 34048K, 79% used [0x0000000090a40000, 0x0000000092488b60, 0x0000000092b80000)
to space 34048K, 0% used [0x0000000092b80000, 0x0000000092b80000, 0x0000000094cc0000)
concurrent mark-sweep generation total 183552K, used 72776K [0x0000000094cc0000, 0x00000000a0000000, 0x0000000100000000)
Metaspace used 51368K, capacity 52036K, committed 52244K, reserved 1091584K
class space used 9987K, capacity 10200K, committed 10320K, reserved 1048576K
Event: 11.222 GC heap after
Heap after GC invocations=7 (full 4):
par new generation total 306688K, used 22199K [0x0000000080000000, 0x0000000094cc0000, 0x0000000094cc0000)
eden space 272640K, 0% used [0x0000000080000000, 0x0000000080000000, 0x0000000090a40000)
from space 34048K, 65% used [0x0000000092b80000, 0x000000009412dca8, 0x0000000094cc0000)
to space 34048K, 0% used [0x0000000090a40000, 0x0000000090a40000, 0x0000000092b80000)
concurrent mark-sweep generation total 183552K, used 72776K [0x0000000094cc0000, 0x00000000a0000000, 0x0000000100000000)
Metaspace used 51368K, capacity 52036K, committed 52244K, reserved 1091584K
class space used 9987K, capacity 10200K, committed 10320K, reserved 1048576K
}
Event: 16.962 GC heap before
{Heap before GC invocations=7 (full 4):
par new generation total 306688K, used 294839K [0x0000000080000000, 0x0000000094cc0000, 0x0000000094cc0000)
eden space 272640K, 100% used [0x0000000080000000, 0x0000000090a40000, 0x0000000090a40000)
from space 34048K, 65% used [0x0000000092b80000, 0x000000009412dca8, 0x0000000094cc0000)
to space 34048K, 0% used [0x0000000090a40000, 0x0000000090a40000, 0x0000000092b80000)
concurrent mark-sweep generation total 183552K, used 67233K [0x0000000094cc0000, 0x00000000a0000000, 0x0000000100000000)
Metaspace used 54423K, capacity 55214K, committed 55444K, reserved 1093632K
class space used 10618K, capacity 10867K, committed 10960K, reserved 1048576K
Event: 17.024 GC heap after
Heap after GC invocations=8 (full 4):
par new generation total 306688K, used 34048K [0x0000000080000000, 0x0000000094cc0000, 0x0000000094cc0000)
eden space 272640K, 0% used [0x0000000080000000, 0x0000000080000000, 0x0000000090a40000)
from space 34048K, 100% used [0x0000000090a40000, 0x0000000092b80000, 0x0000000092b80000)
to space 34048K, 0% used [0x0000000092b80000, 0x0000000092b80000, 0x0000000094cc0000)
concurrent mark-sweep generation total 183552K, used 89961K [0x0000000094cc0000, 0x00000000a0000000, 0x0000000100000000)
Metaspace used 54423K, capacity 55214K, committed 55444K, reserved 1093632K
class space used 10618K, capacity 10867K, committed 10960K, reserved 1048576K
}
Event: 34.376 GC heap before
{Heap before GC invocations=8 (full 4):
par new generation total 306688K, used 306688K [0x0000000080000000, 0x0000000094cc0000, 0x0000000094cc0000)
eden space 272640K, 100% used [0x0000000080000000, 0x0000000090a40000, 0x0000000090a40000)
from space 34048K, 100% used [0x0000000090a40000, 0x0000000092b80000, 0x0000000092b80000)
to space 34048K, 0% used [0x0000000092b80000, 0x0000000092b80000, 0x0000000094cc0000)
concurrent mark-sweep generation total 183552K, used 89961K [0x0000000094cc0000, 0x00000000a0000000, 0x0000000100000000)
Metaspace used 66873K, capacity 67810K, committed 67988K, reserved 1103872K
class space used 12991K, capacity 13255K, committed 13264K, reserved 1048576K
Event: 34.448 GC heap after
Heap after GC invocations=9 (full 4):
par new generation total 306688K, used 34048K [0x0000000080000000, 0x0000000094cc0000, 0x0000000094cc0000)
eden space 272640K, 0% used [0x0000000080000000, 0x0000000080000000, 0x0000000090a40000)
from space 34048K, 100% used [0x0000000092b80000, 0x0000000094cc0000, 0x0000000094cc0000)
to space 34048K, 0% used [0x0000000090a40000, 0x0000000090a40000, 0x0000000092b80000)
concurrent mark-sweep generation total 183552K, used 124028K [0x0000000094cc0000, 0x00000000a0000000, 0x0000000100000000)
Metaspace used 66873K, capacity 67810K, committed 67988K, reserved 1103872K
class space used 12991K, capacity 13255K, committed 13264K, reserved 1048576K
}
Deoptimization events (10 events):
Event: 34.585 Thread 0x00007f6eec360000 Uncommon trap: reason=array_check action=maybe_recompile pc=0x00007f6f4a231ef4 method=scala.collection.mutable.ArrayBuilder$ofRef.$plus$eq(Ljava/lang/Object;)Lscala/collection/mutable/ArrayBuilder$ofRef; @ 19
Event: 34.648 Thread 0x00007f6eec360000 Uncommon trap: reason=class_check action=maybe_recompile pc=0x00007f6f4a0fabe0 method=scala.collection.mutable.HashTable$class.scala$collection$mutable$HashTable$$addEntry0(Lscala/collection/mutable/HashTable;Lscala/collection/mutable/HashEntry;I)V @ 62
Event: 34.648 Thread 0x00007f6eec360000 Uncommon trap: reason=class_check action=maybe_recompile pc=0x00007f6f49653950 method=scala.collection.mutable.HashTable$class.resize(Lscala/collection/mutable/HashTable;I)V @ 1
Event: 34.653 Thread 0x00007f6eec360000 Uncommon trap: reason=class_check action=maybe_recompile pc=0x00007f6f4a15b064 method=java.io.BufferedInputStream.read([BII)I @ 101
Event: 34.789 Thread 0x00007f6ee8005800 Uncommon trap: reason=bimorphic action=maybe_recompile pc=0x00007f6f4957db08 method=scala.collection.generic.GenericCompanion.apply(Lscala/collection/Seq;)Lscala/collection/GenTraversable; @ 23
Event: 34.790 Thread 0x00007f6ee8005800 Uncommon trap: reason=bimorphic action=maybe_recompile pc=0x00007f6f4957db08 method=scala.collection.generic.GenericCompanion.apply(Lscala/collection/Seq;)Lscala/collection/GenTraversable; @ 23
Event: 34.790 Thread 0x00007f6ee8005800 Uncommon trap: reason=bimorphic action=maybe_recompile pc=0x00007f6f4957db08 method=scala.collection.generic.GenericCompanion.apply(Lscala/collection/Seq;)Lscala/collection/GenTraversable; @ 23
Event: 34.791 Thread 0x00007f6ee8005800 Uncommon trap: reason=bimorphic action=maybe_recompile pc=0x00007f6f4957db08 method=scala.collection.generic.GenericCompanion.apply(Lscala/collection/Seq;)Lscala/collection/GenTraversable; @ 23
Event: 35.308 Thread 0x00007f6ee8005800 Uncommon trap: reason=unstable_if action=reinterpret pc=0x00007f6f491dbcb4 method=java.lang.AbstractStringBuilder.append(Ljava/lang/String;)Ljava/lang/AbstractStringBuilder; @ 1
Event: 35.310 Thread 0x00007f6ee8005800 Uncommon trap: reason=bimorphic action=maybe_recompile pc=0x00007f6f4a230b78 method=java.util.Hashtable.get(Ljava/lang/Object;)Ljava/lang/Object; @ 6
Internal exceptions (10 events):
Event: 34.766 Thread 0x00007f6f59181000 Exception <a 'java/security/PrivilegedActionException'> (0x000000008398db78) thrown at [/HUDSON/workspace/8-2-build-linux-amd64/jdk8u131/8869/hotspot/src/share/vm/prims/jvm.cpp, line 1390]
Event: 34.766 Thread 0x00007f6f59181000 Exception <a 'java/security/PrivilegedActionException'> (0x000000008398f050) thrown at [/HUDSON/workspace/8-2-build-linux-amd64/jdk8u131/8869/hotspot/src/share/vm/prims/jvm.cpp, line 1390]
Event: 34.766 Thread 0x00007f6f59181000 Exception <a 'java/security/PrivilegedActionException'> (0x000000008398f448) thrown at [/HUDSON/workspace/8-2-build-linux-amd64/jdk8u131/8869/hotspot/src/share/vm/prims/jvm.cpp, line 1390]
Event: 34.766 Thread 0x00007f6f59181000 Exception <a 'java/security/PrivilegedActionException'> (0x00000000839908d0) thrown at [/HUDSON/workspace/8-2-build-linux-amd64/jdk8u131/8869/hotspot/src/share/vm/prims/jvm.cpp, line 1390]
Event: 34.766 Thread 0x00007f6f59181000 Exception <a 'java/security/PrivilegedActionException'> (0x0000000083990cc8) thrown at [/HUDSON/workspace/8-2-build-linux-amd64/jdk8u131/8869/hotspot/src/share/vm/prims/jvm.cpp, line 1390]
Event: 34.766 Thread 0x00007f6f59181000 Exception <a 'java/security/PrivilegedActionException'> (0x00000000839921a0) thrown at [/HUDSON/workspace/8-2-build-linux-amd64/jdk8u131/8869/hotspot/src/share/vm/prims/jvm.cpp, line 1390]
Event: 34.766 Thread 0x00007f6f59181000 Exception <a 'java/security/PrivilegedActionException'> (0x0000000083992598) thrown at [/HUDSON/workspace/8-2-build-linux-amd64/jdk8u131/8869/hotspot/src/share/vm/prims/jvm.cpp, line 1390]
Event: 34.766 Thread 0x00007f6f59181000 Exception <a 'java/security/PrivilegedActionException'> (0x0000000083993a70) thrown at [/HUDSON/workspace/8-2-build-linux-amd64/jdk8u131/8869/hotspot/src/share/vm/prims/jvm.cpp, line 1390]
Event: 34.767 Thread 0x00007f6f59181000 Exception <a 'java/security/PrivilegedActionException'> (0x0000000083993e68) thrown at [/HUDSON/workspace/8-2-build-linux-amd64/jdk8u131/8869/hotspot/src/share/vm/prims/jvm.cpp, line 1390]
Event: 35.308 Thread 0x00007f6ee8005800 Implicit null exception at 0x00007f6f491db8f6 to 0x00007f6f491dbc9d
Events (10 events):
Event: 35.853 Executing VM operation: RevokeBias done
Event: 35.857 loading class jep/JepConfig
Event: 35.857 loading class jep/JepConfig done
Event: 35.857 loading class jep/Jep$TopInterpreter
Event: 35.857 loading class jep/Jep$TopInterpreter done
Event: 35.858 loading class jep/Jep$TopInterpreter$1
Event: 35.858 loading class jep/Jep$TopInterpreter$1 done
Event: 35.859 Thread 0x00007f6ed80f3800 Thread added: 0x00007f6ed80f3800
Event: 35.859 Executing VM operation: RevokeBias
Event: 35.860 Executing VM operation: RevokeBias done
Dynamic libraries:
00400000-00401000 r-xp 00000000 fc:00 2371929 /usr/lib/jvm/java-8-oracle/bin/java
00600000-00601000 rw-p 00000000 fc:00 2371929 /usr/lib/jvm/java-8-oracle/bin/java
01403000-01424000 rw-p 00000000 00:00 0 [heap]
80000000-a0000000 rw-p 00000000 00:00 0
a0000000-100000000 ---p 00000000 00:00 0
100000000-100e14000 rw-p 00000000 00:00 0
100e14000-140000000 ---p 00000000 00:00 0
7f6ec4000000-7f6ec4051000 rw-p 00000000 00:00 0
7f6ec4051000-7f6ec8000000 ---p 00000000 00:00 0
7f6ec8000000-7f6ec80f5000 rw-p 00000000 00:00 0
7f6ec80f5000-7f6ecc000000 ---p 00000000 00:00 0
7f6ecc000000-7f6ecc05b000 rw-p 00000000 00:00 0
7f6ecc05b000-7f6ed0000000 ---p 00000000 00:00 0
7f6ed0000000-7f6ed0021000 rw-p 00000000 00:00 0
7f6ed0021000-7f6ed4000000 ---p 00000000 00:00 0
7f6ed4000000-7f6ed4026000 rw-p 00000000 00:00 0
7f6ed4026000-7f6ed8000000 ---p 00000000 00:00 0
7f6ed8000000-7f6ed8219000 rw-p 00000000 00:00 0
7f6ed8219000-7f6edc000000 ---p 00000000 00:00 0
7f6edc000000-7f6edc0b3000 rw-p 00000000 00:00 0
7f6edc0b3000-7f6ee0000000 ---p 00000000 00:00 0
7f6ee0000000-7f6ee109b000 rw-p 00000000 00:00 0
7f6ee109b000-7f6ee4000000 ---p 00000000 00:00 0
7f6ee4888000-7f6ee488b000 ---p 00000000 00:00 0
7f6ee488b000-7f6ee4989000 rw-p 00000000 00:00 0
7f6ee4989000-7f6ee49b0000 r-xp 00000000 fc:00 2230697 /tmp/sbt_5651c2ea/sbt_6005b2b/libjep.so
7f6ee49b0000-7f6ee4bb0000 ---p 00027000 fc:00 2230697 /tmp/sbt_5651c2ea/sbt_6005b2b/libjep.so
7f6ee4bb0000-7f6ee4bb1000 r--p 00027000 fc:00 2230697 /tmp/sbt_5651c2ea/sbt_6005b2b/libjep.so
7f6ee4bb1000-7f6ee4bb4000 rw-p 00028000 fc:00 2230697 /tmp/sbt_5651c2ea/sbt_6005b2b/libjep.so
7f6ee4bb4000-7f6ee4bb7000 ---p 00000000 00:00 0
7f6ee4bb7000-7f6ee4d75000 rw-p 00000000 00:00 0
7f6ee4d75000-7f6ee4eb5000 ---p 00000000 00:00 0
7f6ee4eb5000-7f6ee50b5000 rw-p 00000000 00:00 0
7f6ee50b5000-7f6ee52b5000 rw-p 00000000 00:00 0
7f6ee52b5000-7f6ee52d0000 r--s 001d3000 fc:00 2370570 /usr/lib/jvm/java-8-oracle/jre/lib/ext/nashorn.jar
7f6ee52d0000-7f6ee52d5000 r--s 002f9000 fc:00 2370436 /usr/lib/jvm/java-8-oracle/jre/lib/charsets.jar
7f6ee52d5000-7f6ee52df000 r--s 00116000 fc:00 2370566 /usr/lib/jvm/java-8-oracle/jre/lib/ext/localedata.jar
7f6ee52df000-7f6ee52e1000 r--s 00009000 fc:00 2370571 /usr/lib/jvm/java-8-oracle/jre/lib/ext/sunec.jar
7f6ee52e1000-7f6ee52e5000 r--s 0003a000 fc:00 2370567 /usr/lib/jvm/java-8-oracle/jre/lib/ext/sunpkcs11.jar
7f6ee52e5000-7f6ee52e8000 r--s 0001a000 fc:00 2370554 /usr/lib/jvm/java-8-oracle/jre/lib/jce.jar
7f6ee52e8000-7f6ee52ea000 r--s 00001000 fc:00 2370565 /usr/lib/jvm/java-8-oracle/jre/lib/ext/dnsns.jar
7f6ee52ea000-7f6ee54ea000 rw-p 00000000 00:00 0
7f6ee54ea000-7f6ee56aa000 rw-p 00000000 00:00 0
7f6ee56aa000-7f6ee56ac000 r-xp 00000000 08:11 61874041 /development/bin/python/conda3/envs/knowscieng-dev-ds/lib/python3.6/lib-dynload/_heapq.cpython-36m-x86_64-linux-gnu.so
7f6ee56ac000-7f6ee58ac000 ---p 00002000 08:11 61874041 /development/bin/python/conda3/envs/knowscieng-dev-ds/lib/python3.6/lib-dynload/_heapq.cpython-36m-x86_64-linux-gnu.so
7f6ee58ac000-7f6ee58ae000 rw-p 00002000 08:11 61874041 /development/bin/python/conda3/envs/knowscieng-dev-ds/lib/python3.6/lib-dynload/_heapq.cpython-36m-x86_64-linux-gnu.so
7f6ee58ae000-7f6ee5a2e000 rw-p 00000000 00:00 0
7f6ee5a2e000-7f6ee5a35000 r--s 00000000 fc:00 1852235 /usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache
7f6ee5a35000-7f6ee5ab5000 rw-p 00000000 00:00 0
7f6ee5ab5000-7f6ee5ab8000 ---p 00000000 00:00 0
7f6ee5ab8000-7f6ee5bb6000 rw-p 00000000 00:00 0
7f6ee5bb6000-7f6ee5bdd000 r-xp 00000000 fc:00 2230694 /tmp/sbt_5651c2ea/sbt_afa6a246/libjep.so (deleted)
7f6ee5bdd000-7f6ee5ddd000 ---p 00027000 fc:00 2230694 /tmp/sbt_5651c2ea/sbt_afa6a246/libjep.so (deleted)
7f6ee5ddd000-7f6ee5dde000 r--p 00027000 fc:00 2230694 /tmp/sbt_5651c2ea/sbt_afa6a246/libjep.so (deleted)
7f6ee5dde000-7f6ee5de1000 rw-p 00028000 fc:00 2230694 /tmp/sbt_5651c2ea/sbt_afa6a246/libjep.so (deleted)
7f6ee5de1000-7f6ee5de4000 ---p 00000000 00:00 0
7f6ee5de4000-7f6ee5ee2000 rw-p 00000000 00:00 0
7f6ee5ee2000-7f6ee5ee5000 ---p 00000000 00:00 0
Interestingly, all is well when the specs are run from IntelliJ (not sbt).
Now, if I use rpy2 from the Python file, say I add these lines:
from rpy2.robjects import r, pandas2ri
import rpy2.rinterface as ri
pandas2ri.activate()
I get this even in IntelliJ (note type_dealloc):
Testing started at 12:21 AM ...
orchestration spec should
HERE
spec start run jep.Jep@69a0219a on thread specs2-4
1
spec end end jep.Jep@69a0219a on thread specs2-4
extract some feature
HERE
spec start run jep.Jep@20a7a65d on thread specs2-4
spec end end jep.Jep@20a7a65d on thread specs2-4
>>>>>>> Extracted feature 100
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007faad301aecb, pid=18364, tid=0x00007faa6bbfb700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_131-b11) (build 1.8.0_131-b11)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.131-b11 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libpython3.6m.so+0xe1ecb] type_dealloc+0x2b
#
...
Warning: the rpy2 object we are trying to release has no associated capsule.
Warning: the rpy2 object we are trying to release has no associated capsule.
Restoring an earlier exception.
Error:Trying to release object ID 140369600184592 while not preserved
Warning: the rpy2 object we are trying to release has no associated capsule.
Restoring an earlier exception.
Error:Trying to release object ID 140369627043128 while not preserved
Process finished with exit code 134 (interrupted by signal 6: SIGABRT)
Note the "Error:Trying to release object ID" above!
In the PID log I get this:
--------------- T H R E A D ---------------
Current thread (0x00007faa5c002000): JavaThread "specs2-4" [_thread_in_native, id=18392, stack(0x00007faa6bafb000,0x00007faa6bbfc000)]
siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x0000000000000000
Registers:
RAX=0x0000000000000000, RBX=0x00007faa37eb63a8, RCX=0x00000000000000ff, RDX=0x0000000000000000
RSP=0x00007faa6bbf9160, RBP=0x00007faa6a686c80, RSI=0x00007faa6bbf9170, RDI=0x00007faa6a686c80
R8 =0x00007faa6bbf91e0, R9 =0x03774b04cdb1ba22, R10=0x00007faa37eb63a8, R11=0x00007faa59e6e8a8
R12=0x00007faad33fb910, R13=0x03774b04cdb1ba22, R14=0x00007faa6bbf91d0, R15=0x00007faa6bbf91e0
RIP=0x00007faad301aecb, EFLAGS=0x0000000000010286, CSGSFS=0x0000000000000033, ERR=0x0000000000000006
TRAPNO=0x000000000000000e
Top of Stack: (sp=0x00007faa6bbf9160)
0x00007faa6bbf9160: 00007faa6b31eef0 00007faa00000000
0x00007faa6bbf9170: 00007faa6b194d28 00007faa6b194d10
0x00007faa6bbf9180: 00007faa39b60740 00007faa37eb63a8
0x00007faa6bbf9190: 00007faa37eb63a8 00007faa6ad48830
0x00007faa6bbf91a0: 00007faad33fb910 03774b04cdb1ba22
0x00007faa6bbf91b0: 00007faa6bbf91d0 00007faad2ff6c07
0x00007faa6bbf91c0: 0000000000000000 00007faa6bbf91e0
0x00007faa6bbf91d0: 00007faa59e6eff0 0000000000000000
0x00007faa6bbf91e0: 0000000000000022 00007faa6bbf91e0
0x00007faa6bbf91f0: 00007faa59c78370 00007faad33fb910
0x00007faa6bbf9200: 00007faa37eb63a8 00007faa6bbf9230
0x00007faa6bbf9210: 00007faa6bbf9240 00007faa6bbf9250
0x00007faa6bbf9220: 0000000000000055 00007faad300663c
0x00007faa6bbf9230: 000000000000004e 00007faad2ff25f8
0x00007faa6bbf9240: 00007faa6ad48830 00007faa3b609ae8
0x00007faa6bbf9250: 00007faa6a686c80 00007faa3b609ae8
0x00007faa6bbf9260: 00007faa5a5e0ed0 00007faad3475838
0x00007faa6bbf9270: 00007faa3b6a8f98 00007faa3c521888
0x00007faa6bbf9280: 00007faa6b300900 00007faad30c1a52
0x00007faa6bbf9290: 00007faa5c002000 00007faa6b1a9350
0x00007faa6bbf92a0: 000000000000006d 00007faa36fea288
0x00007faa6bbf92b0: 000000000000006f 00007faa5c0021f8
0x00007faa6bbf92c0: 00007faa3bc5c8a0 00007faa37603e70
0x00007faa6bbf92d0: 00007faa39f8d278 00007faad2fab56f
0x00007faa6bbf92e0: 00007faa5c002000 00007faa59c78370
0x00007faa6bbf92f0: 00007faa5a5e0ed0 00007faa5c0021f8
0x00007faa6bbf9300: 00007faaa0474b70 00007faa6bbf93f0
0x00007faa6bbf9310: 00007faa5c002000 00007faad30d5bf7
0x00007faa6bbf9320: 00007faaa0474b70 00007faa59c6d830
0x00007faa6bbf9330: 00007faa3bc4fea0 00007faa6b7de8b8
0x00007faa6bbf9340: 00007faaa0474b70 00007faa6bbf93c0
0x00007faa6bbf9350: 0000000000000000 00007faabcbeaa34
Instructions: (pc=0x00007faad301aecb)
0x00007faad301aeab: 48 83 ec 30 48 8b 47 f8 48 8b 57 e8 48 8d 74 24
0x00007faad301aebb: 10 83 e0 01 48 83 c8 fc 48 89 47 f8 48 8b 47 f0
0x00007faad301aecb: 48 89 10 48 8b 57 e8 48 89 42 08 48 8d 54 24 20
0x00007faad301aedb: 48 c7 47 e8 00 00 00 00 48 89 e7 e8 95 ee f7 ff
Register to memory mapping:
RAX=0x0000000000000000 is an unknown value
RBX=0x00007faa37eb63a8 is an unknown value
RCX=0x00000000000000ff is an unknown value
RDX=0x0000000000000000 is an unknown value
RSP=0x00007faa6bbf9160 is pointing into the stack for thread: 0x00007faa5c002000
RBP=0x00007faa6a686c80: <offset 0x21dc80> in /development/bin/python/conda3/envs/knowscieng-dev-ds/lib/python3.6/site-packages/rpy2/rinterface/_rinterface.cpython-36m-x86_64-linux-gnu.so at 0x00007faa6a469000
RSI=0x00007faa6bbf9170 is pointing into the stack for thread: 0x00007faa5c002000
RDI=0x00007faa6a686c80: <offset 0x21dc80> in /development/bin/python/conda3/envs/knowscieng-dev-ds/lib/python3.6/site-packages/rpy2/rinterface/_rinterface.cpython-36m-x86_64-linux-gnu.so at 0x00007faa6a469000
R8 =0x00007faa6bbf91e0 is pointing into the stack for thread: 0x00007faa5c002000
R9 =0x03774b04cdb1ba22 is an unknown value
R10=0x00007faa37eb63a8 is an unknown value
R11=0x00007faa59e6e8a8 is an unknown value
R12=0x00007faad33fb910: _Py_NoneStruct+0 in /development/bin/python/conda3/envs/knowscieng-dev-ds/lib/libpython3.6m.so at 0x00007faad2f39000
R13=0x03774b04cdb1ba22 is an unknown value
R14=0x00007faa6bbf91d0 is pointing into the stack for thread: 0x00007faa5c002000
R15=0x00007faa6bbf91e0 is pointing into the stack for thread: 0x00007faa5c002000
Stack: [0x00007faa6bafb000,0x00007faa6bbfc000], sp=0x00007faa6bbf9160, free space=1016k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C [libpython3.6m.so+0xe1ecb] type_dealloc+0x2b
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j jep.Jep.close(J)V+0
j jep.Jep.close()V+139
j com.knowscieng.datalayer.callContext$class.foreach(Lcom/knowscieng/datalayer/callContext;Lscala/Function1;Lorg/specs2/execute/AsResult;)Lorg/specs2/execute/Result;+172
j com.knowscieng.datalayer.AnalysisOrchestrationSpec.foreach(Lscala/Function1;Lorg/specs2/execute/AsResult;)Lorg/specs2/execute/Result;+3
j org.specs2.specification.ForEach$$anon$7.asResult(Lscala/Function0;)Lorg/specs2/execute/Result;+17
j org.specs2.execute.AsResult$.apply(Lscala/Function0;Lorg/specs2/execute/AsResult;)Lorg/specs2/execute/Result;+11
j org.specs2.specification.core.AsExecution$$anon$1$$anonfun$execute$1.apply()Lorg/specs2/execute/Result;+14
j org.specs2.specification.core.AsExecution$$anon$1$$anonfun$execute$1.apply()Ljava/lang/Object;+1
j org.specs2.execute.ResultExecution$class.execute(Lorg/specs2/execute/ResultExecution;Lscala/Function0;)Lorg/specs2/execute/Result;+1
j org.specs2.execute.ResultExecution$.execute(Lscala/Function0;)Lorg/specs2/execute/Result;+2
j org.specs2.execute.Result$$anon$11.asResult(Lscala/Function0;)Lorg/specs2/execute/Result;+4
j org.specs2.execute.AsResult$.apply(Lscala/Function0;Lorg/specs2/execute/AsResult;)Lorg/specs2/execute/Result;+11
j org.specs2.execute.AsResult$$anonfun$safely$1.apply()Lorg/specs2/execute/Result;+11
j org.specs2.execute.AsResult$$anonfun$safely$1.apply()Ljava/lang/Object;+1
So I'd say something is wrong with how the close() cleanup is done.
Thoughts, please?
Many thanks in advance for reading this far.
One more thing, to shed light on internal memory management (which seems to be causing this): if I initialize embedded R explicitly in Python like this:
def extract_feature1(param1):
    print(f">>>>>>> Extracted feature {param1}")
    x = Integer(5)  # Integer and AL are defined elsewhere in the original test script
    y = Integer(51)
    a = AL()
    from rpy2.robjects import r, pandas2ri
    import rpy2.rinterface as ri
    pandas2ri.activate()
    ri.initr()   # explicitly initialize embedded R ...
    ri.endr(0)   # ... and shut it down again
    return 1
Then from IntelliJ I get:
/usr/lib/jvm/java-8-oracle/bin/java -Djava.library.path=/development/bin/python/conda3/envs/dev-ds/lib/python3.6/site-packages/jep -javaagent:/opt/Intellij/idea-IU-172.3757.52/lib/idea_rt.jar=41628:/opt/Intellij/idea-IU-172.3757.52/bin -Dfile.encoding=UTF-8 -classpath /home/nickdsc/.IntelliJIdea2017.2/config/plugins/Scala/lib/scala-plugin-runners.jar:/usr/lib/jvm/java-8-oracle/jre/lib/charsets.jar:/usr/lib/jvm/java-8-oracle/jre/lib/deploy.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/cldrdata.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/dnsns.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/jaccess.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/jfxrt.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/localedata.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/nashorn.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/sunec.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/sunjce_provider.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/sunpkcs11.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/zipfs.jar:/usr/lib/jvm/java-8-oracle/jre/lib/javaws.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jce.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jfr.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jfxswt.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jsse.jar:/usr/lib/jvm/java-8-oracle/jre/lib/management-agent.jar:/usr/lib/jvm/java-8-oracle/jre/lib/plugin.jar:/usr/lib/jvm/java-8-oracle/jre/lib/resources.jar:/usr/lib/jvm/java-8-1.1.1.3.jar:/home/nickdsc/.IntelliJIdea2017.2/system/r_skeletons/725233088 org.jetbrains.plugins.scala.testingSupport.specs2.JavaSpecs2Runner -s com.knowscieng.datalayer.AnalysisOrchestrationSpec -C org.jetbrains.plugins.scala.testingSupport.specs2.JavaSpecs2Notifier -showProgressMessages true
Testing started at 12:46 AM ...
orchestration spec should
HERE
spec start run jep.Jep@771c1016 on thread specs2-4
1
spec end end jep.Jep@771c1016 on thread specs2-4
extract some feature
HERE
spec start run jep.Jep@65584224 on thread specs2-2
R is already initialized
Process finished with exit code 1
So the second run does not happen because ... the second jep instance somehow ... sees the state (the embedded R engine) of the first one?
This all seems to point to improper state management and cleanup in jep, or in the JNI code it calls in CPython.
Alternative experiment to troubleshoot, using insights from https://github.com/ninia/jep/issues/28#issuecomment-150028479.
If I use a Python script like this:
import numpy

def init(something):
    numpy.ndarray([1]).any()
    return 1
then the same thing happens in sbt: it crashes on the second run.
# Problematic frame:
# C [libpython3.6m.so+0x19d4e0] threadstate_getframe+0x0
If I use a combination of the above, then I get this from inside IntelliJ ("mpd_setminalloc: ignoring request"):
HERE
spec start run jep.Jep@5f5614da on thread specs2-3
1
spec end end jep.Jep@5f5614da on thread specs2-3
extract some feature
HERE
spec start run jep.Jep@56f892fe on thread specs2-1
/feedstock_root/build_artefacts/python_1522365474040/work/Python-3.6.5/Modules/_decimal/libmpdec/context.c:55: warning: mpd_setminalloc: ignoring request to set MPD_MINALLOC a second time
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007fc53ecb1a38, pid=24023, tid=0x00007fc53fefe700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_131-b11) (build 1.8.0_131-b11)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.131-b11 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [multiarray.cpython-36m-x86_64-linux-gnu.so+0x13ea38] PyArray_Item_INCREF+0xe8
Same JVM and settings were used in all cases.
Did some research into potentially related issues:
https://github.com/ninia/jep/issues/28
"ep3.4.1 on Mac in a Scala environment, with scipy. If I try to initialize multiple jep instances and import scipy, I get an error the second time."
https://github.com/ninia/jep/issues/28#issuecomment-136917996
" This is probably a scipy issue and not a Jep issue. The project I work on uses scipy but not extensively (uses numpy much more),"
"Jep can't really do much about this kind of thing if the CPython code is messing up internal function pointers, either at python sub-interpreter shutdown or the start of another sub-interpreter, that's in the specific extension's code, not Jep"
In my case, likely rpy2
does this sort of thing.
https://github.com/ninia/jep/issues/28#issuecomment-137600583
"was thinking making independent instances of Jep per thread protects the variables and namespaces of a script but obviously this dis not seem to be the case. The way I got around this problem was by making synchronized calls to jep.invoke"
https://github.com/ninia/jep/issues/28#issuecomment-150028479
"The root of the problem is in the numpy c code where it imports the mehods module into c and save methods to a static variables"
https://github.com/ninia/jep/wiki/How-Jep-Works#sandboxed-interpreters
"a change to the imported modules, global variables, etc in one interpreter will not be reflected in other sub-interpreters. However, this rule does not apply to CPython extensions. There is no way to strictly enforce a CPython extension is implemented in a way that supports sandboxing" "Jep 3.6 added the concept of shared modules. Shared modules intentionally step outside of the sandboxed sub-intepreters to share a module across sub-interpreters. This can be used to workaround issues with CPython extensions. It can also potentially be used to share Python modules and their state with other sub-interpreters."
Is this in the right direction?
I introduced the use of shared modules and this has improved stability (see https://groups.google.com/forum/#!topic/jep-project/KkDgwpIUayQ, by Ben):
val config = new JepConfig().addSharedModules("numpy", "rpy2")

def foreach[R: AsResult](f: Jep => R): Result = {
  println("Initializing Jep ...")
  implicit val jep: Jep = new Jep(config)
  try AsResult(f(jep))
  finally jep.close() // closed under finally, per the pattern described earlier
}
Now, a combination of the above involving numpy and rpy2 plus R code execution does not cause a SIGSEGV in IntelliJ.
In an interactive sbt session it still happens on the second run:
# SIGSEGV (0xb) at pc=0x00007fc931804580, pid=22078, tid=0x00007fc869d9b700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_131-b11) (build 1.8.0_131-b11)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.131-b11 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libpython3.6m.so+0x19c580] threadstate_getframe+0x0
#
When you start mixing in numpy and rpy2, it is not surprising that you can create conditions that crash. Jep is built off of Python sub-interpreters, which are a rarely used feature of CPython, so extension writers often write code that is not compatible with sub-interpreters. There is not much we can do about this from the Jep side; we have open issues with numpy, but they are fairly complex to fix and there may be performance side effects, so there is no activity on fixing them. Our current workaround is the introduction of shared modules, and in the next release of Jep (3.8) I have added an option to use Jep without sub-interpreters, which should also help alleviate this problem, but if you get the configuration wrong with native Python extensions it can crash.
What was most interesting in the original post was that you managed to crash without using much Python at all. I don't know how sbt works, but I think you were on the right track when you mentioned sbt keeping the python binaries around. I suspect that sbt is using the same process for both runs, but it attempted to unload the Jep class and reload it, which causes it to reinitialize the Python interpreter. So the first question is whether you can verify this: can you print out the process ID or something to verify that both runs are in the same process?
In theory Jep attempts to support this use case: when the first Jep class is unloaded, the JVM should call JNI_OnUnload, which calls pyembed_shutdown, which should completely reset the Python state. However, I have never managed to get the JVM to actually unload it in my testing. I suspect that the JVM cannot unload the Jep class because the TopInterpreter thread is still running. Looking at your hs_err_pid, I suspect that 0x00007f6edc01c800 JavaThread "Thread-1" daemon is actually the first Jep TopInterpreter still running, and 0x00007f6ed80f3800 JavaThread "Thread-2" daemon is the second TopInterpreter, which crashes when it tries to reinitialize Python. Although looking at the initialization code, this check should actually prevent a crash during initialization, so I'm still not really sure exactly what is going on.
I suspect this problem may be resolved if you use forked JVMs in your sbt configuration; reading some documentation, it looks like the Jep background thread is exactly the type of use case that needs forking. It would still be good if we could fix this within Jep, so please post any other information you find, but it would not be trivial if we really do need to handle Jep class unloading, so forking the JVM may be the most practical option.
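For reference, a minimal sbt sketch of the forking suggestion (sbt 0.13-era syntax; the library path value is only an example):

// build.sbt
fork in Test := true
// pass the native library location to the forked JVMs (path is illustrative)
javaOptions in Test += "-Djava.library.path=/path/to/site-packages/jep"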
" managed to crash without using much python " - I believe it was from import pandas2ri
At some point I managed to avoid crash by simply commenting that out.
Until I started using rpy2
in shared module could not get rid of SIGSEGV.
And after I used them I never got a SIGSEGV in Intellij.
It would be great if @lgautier had a look at this, since he is the master of running R embedded (so he might help with running Python embedded).
Side note: at some point I had a script using rpy2 that failed to convert a pandas frame to R just because pandas was not also loaded as a shared module:
r_dataframe = pandas2ri.py2ri(pdf)
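If that reading is right, the fix would be to add pandas to the shared modules as well, extending the config shown earlier (a sketch, not verified here):

val config = new JepConfig().addSharedModules("numpy", "rpy2", "pandas")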
" current workaround is the introduction of shared modules and in the next release of jep(3.8) I have added an option to use jep without sub-interpreters "
What is the ETA please?
"the right track when you mentioned sbt keeping the python binaries around"
WIP investigating with forking, but no joy yet. It seems there is no way to get the PID from Java before Java 9.
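For what it's worth, a common pre-Java 9 workaround relies on the HotSpot convention that the runtime MXBean name looks like "pid@hostname" (this format is not guaranteed by the spec, so treat it as a best-effort sketch):

import java.lang.management.ManagementFactory

// HotSpot conventionally reports "12345@hostname"; parse defensively.
val pid = ManagementFactory.getRuntimeMXBean.getName.split("@")(0)
println(s"Initializing Jep in PID=$pid ...")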
What is the ETA please?
The code to run without subinterpreters is already in the dev_3.8 branch if you want to try it out; see here. The official release of 3.8 will probably happen sometime around June or July.
Forking of tests in the sbt project works and ~solves~ works around the SIGSEGV on the second run in an interactive sbt session!
Run #1
Initializing Jep in PID=22406 ...
Start running spec in Jep=jep.Jep@1a428090 / thread=specs2-2
...
Finish running spec in Jep=jep.Jep@1a428090 / thread=specs2-2
[info] + initialize module correctly
Initializing Jep in PID=22406 ...
Start running spec in Jep=jep.Jep@499a82cf / thread=specs2-1
...
Finish running spec in Jep=jep.Jep@499a82cf / thread=specs2-1
/development/bin/python/conda3/envs/dev-ds/lib/python3.6/site-packages/rpy2/rinterface/__init__.py:145: RRuntimeWarning:
Attaching package: ‘dplyr’
...
warnings.warn(x, RRuntimeWarning)
/development/bin/python/conda3/envs/dev-ds/lib/python3.6/site-packages/rpy2/rinterface/__init__.py:145: RRuntimeWarning: The following objects are masked from ‘package:base’:
[info] + extract some feature
Initializing Jep in PID=22406 ...
Start running spec in Jep=jep.Jep@21885754 / thread=specs2-2
...
Finish running spec in Jep=jep.Jep@21885754 / thread=specs2-2
...
[info] + run time analysis
[info]
[info]
[info] Total for specification AnalysisOrchestrationSpec
[info] Finished in 6 seconds, 217 ms
[info] 3 examples, 0 failure, 0 error
Run #2
[info] orchestration spec should
Initializing Jep in PID=22606 ...
Start running spec in Jep=jep.Jep@4f5e6090 / thread=specs2-4
...
Finish running spec in Jep=jep.Jep@4f5e6090 / thread=specs2-4
[info] + initialize module correctly
Initializing Jep in PID=22606 ...
Start running spec in Jep=jep.Jep@15c18ffa / thread=specs2-2
...
Finish running spec in Jep=jep.Jep@15c18ffa / thread=specs2-2
/development/bin/python/conda3/envs/dev-ds/lib/python3.6/site-packages/rpy2/rinterface/__init__.py:145: RRuntimeWarning:
Attaching package: ‘dplyr’
..
...
[info] + extract some feature
Initializing Jep in PID=22606 ...
Start running spec in Jep=jep.Jep@4481e8e3 / thread=specs2-2
...
Finish running spec in Jep=jep.Jep@4481e8e3 / thread=specs2-2
...
[info] + run time analysis
[info]
You can see that for each run the PID is different, but within each run it is the same: the specs2 code for all three tests runs in the same process.
Side note: this is important because I need them to share the same jep instance.
My end goal is to use them in a monad, much like is done in HaskellR ("R and Haskell: best of both worlds with HaskellR").
So, again, I do not mind if a JVM is spawned/forked for each spec, but I wanted to make sure all tests in a spec share the same JVM/process (state).
It is interesting how thread specs2-2 is reused for the third test (within the same run), which suggests that Jep handles the thread reuse (detach/attach) correctly, right?
Thoughts?
Found another way to break it:
private def runNumpyTest(implicit jep1: Jep): Unit = {
  jep1.close()
  var jep = new Jep()
  jep.eval("import numpy")
  jep.eval("numpy.ndarray([1]).any()")
  jep.close()
  jep = new Jep()
  jep.eval("import numpy") // <+++++ SIGSEGV here
  jep.eval("numpy.ndarray([1]).any()")
  jep.close()
}
# Problematic frame:
# C [multiarray.cpython-36m-x86_64-linux-gnu.so+0x13ea38] PyArray_Item_INCREF+0xe8
#
Inspired by something Nate did. I do not really need this to work (hope not!) but it might help narrow down the cause.
UPDATE: the SIGSEGV happens even if I comment out the second import; it then crashes at the next line, with the same "problematic frame". This may hint that the import from the previous jep instance is still around.
Is this one of the known numpy-related issues, or a problem with jep?
I have been able to share a jep instance from the same thread between Python scripts calling R through rpy2, and did not get a crash.
18/04/07 04:23:15 INFO orchestration: Accessing jep instance in PID / thread = 30807/specs2-1.
18/04/07 04:23:15 INFO orchestration: Initializing new instance ...
18/04/07 04:23:16 INFO orchestration: Executing in thread / Jep = specs2-1 / jep.Jep@28a3fbfe
18/04/07 04:23:16 INFO orchestration: Executing /python/init_test.py
18/04/07 04:23:16 INFO orchestration: Accessing jep instance in PID / thread = 30807/specs2-1.
18/04/07 04:23:16 INFO orchestration: Executing in thread / Jep = specs2-1 / jep.Jep@28a3fbfe
18/04/07 04:23:16 INFO orchestration: Executing /python/extract_features.py
18/04/07 04:23:17 INFO orchestration: Accessing jep instance in PID / thread = 30807/specs2-1.
18/04/07 04:23:17 INFO orchestration: Executing in thread / Jep = specs2-1 / jep.Jep@28a3fbfe
18/04/07 04:23:17 INFO orchestration: Executing /python/time_analysis.py
18/04/07 04:23:21 INFO orchestration: Accessing jep instance in PID / thread = 30807/specs2-1.
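The sharing pattern behind these logs is roughly the following. This is a sketch only: the ThreadLocal cache and all names are assumptions; only Jep.runScript and JepConfig.addSharedModules are actual Jep API:

import jep.{Jep, JepConfig}

object orchestration {
  private val config = new JepConfig().addSharedModules("numpy", "rpy2")

  // one Jep instance per thread, reused across script executions
  private val cached = new ThreadLocal[Jep] {
    override def initialValue(): Jep = new Jep(config)
  }

  def exec(path: String): Unit = {
    val jep = cached.get() // same instance for every script run on this thread
    println(s"Executing in thread / Jep = ${Thread.currentThread.getName} / $jep")
    jep.runScript(path)
  }
}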
@bsteffensmeier
Jep is built off of python sub-interpreters which are a rarely used feature of cpython so extension writers often write code that is not compatible with sub-interpreters.
My understanding is that C extensions are shared across the sub-interpreters, and because of this the exact way to develop with sub-interpreters in mind is a non-trivial affair (and an improvement is the subject of a PEP under discussion: https://www.python.org/dev/peps/pep-0554/).
@SemanticBeeng import rpy2.rinterface does not initialize the embedded R, while import rpy2.robjects.pandas2ri does, when implicitly importing rpy2.robjects. If you want to decompose steps, consider calling rinterface.initr() before importing pandas2ri (see the doc).
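Applied through Jep, the decomposed order would look something like this (a sketch following the suggestion above; the surrounding jep instance is assumed):

jep.eval("import rpy2.rinterface as ri")
jep.eval("ri.initr()") // initialize embedded R explicitly, once
jep.eval("from rpy2.robjects import pandas2ri")
jep.eval("pandas2ri.activate()")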
If I have multiple Python scripts being executed at different times in the (same) jep instance, then the idea is to call initr() once (in only one of them) but then be able to use pandas2ri in any of the Python scripts, right?
If the C extension is shared as (I understand) the doc says, calling initr() only once should work.
You can check it easily, as rpy2 maintains R's initialization status (since it is unfortunately not part of R's C API): rpy2.rinterface._rinterface.is_initialized().
However, calling initr() multiple times should not matter, unless initialization options are changed between calls and the code relies on the changes: R will only be initialized the first time initr() is called.
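Checked from the Java side, that probe could look like this (a sketch using Jep.eval and Jep.getValue; the jep instance is assumed to be in scope):

jep.eval("import rpy2.rinterface._rinterface as _ri")
val initialized = jep.getValue("_ri.is_initialized()")
println(s"R initialized: $initialized")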
Do not really need this to work - hope not! - but might help narrow down the cause
If you close an interpreter using numpy and you are not using shared modules, then there are many ways to crash or generate errors. This is caused by numpy storing static references to Python objects; there is nothing we can do in Jep, so you don't need to report it here.
My understanding is that C extensions are shared across the sub-interpreters, and because of this the exact way to develop with sub-interpreters in mind is a non-trivial affair (and an improvement is the subject of a PEP under discussion: https://www.python.org/dev/peps/pep-0554/).
The complexity depends on what you are trying to do, but I agree that for many things it may be non-trivial. Most of my experience has been digging in numpy, which has problems because it keeps static references in C to Python objects. When a Python interpreter is closed these objects become invalid and fail if numpy is used again. PEP 3121 was intended to provide extension writers a way around this particular problem, but it isn't widely used. PEP 554 is interesting, so I'll be curious to see how that changes things in the future.
"close an interpreter using numpy and you are not using shared modules"
Understood and am using shared modules.
At this time this code does not cause SIGSEGV.
protected def runNumpyTest(implicit jep: Jep): Int = {
  printContext
  jep.eval("import numpy")
  jep.eval("numpy.ndarray([1]).any()")
  jep.eval("import numpy")
  jep.eval("numpy.ndarray([1]).any()")
  0
}
I will test more around closing jep instances in relation to numpy (with shared modules).
If it is not too much to ask, please post links to the pending numpy issues from the jep perspective.
Just for the record: I experienced the same issue with numpy (https://github.com/numpy/numpy/issues/13051#issuecomment-509713683). I got a response, but it appears to be low priority.
Getting this from application test code, using specs2 to run jep, which calls a simple Python script.
From sbt, the first time it runs OK but the second time it crashes with a segmentation violation. Any thoughts please @ndjensen?
It feels like the sbt JVM holds the spec class loaded and this, in turn, holds the python binaries ... hanging around. I use something like ... and also set LD_LIBRARY_PATH.
Thought it might be related to this: https://github.com/ninia/jep/issues/22
Reused code from https://github.com/A-OK/Snakes-and-Ladders/blob/master/scala_hosting_python/jepp/a_la_spiewak/Scalathon.scala