Open GoogleCodeExporter opened 9 years ago
[deleted comment]
Cannot reproduce on OS X 10.5 with MATLAB R2010b. Will test on other platforms
with other versions of MATLAB.
Tested with the following code:
import matlabcontrol.MatlabProxy;
import matlabcontrol.MatlabProxyFactory;

public class TerminationTest
{
    public static void main(String[] args) throws Throwable
    {
        MatlabProxyFactory factory = new MatlabProxyFactory();
        MatlabProxy proxy = factory.getProxy();
        proxy.exit();
    }
}
Original comment by nonot...@gmail.com
on 31 Jul 2011 at 3:45
Cannot reproduce on Windows Server 2008 R2 with MATLAB R2010a.
Original comment by nonot...@gmail.com
on 31 Jul 2011 at 4:05
Cannot reproduce on Windows Server 2008 R2 with MATLAB R2007b.
Original comment by nonot...@gmail.com
on 31 Jul 2011 at 4:10
Cannot reproduce on Debian 6.0 (kernel 2.6.32) with MATLAB R2010a.
Original comment by nonot...@gmail.com
on 31 Jul 2011 at 4:23
Cannot reproduce on Debian 6.0 (kernel 2.6.32) with MATLAB R2010b.
Original comment by nonot...@gmail.com
on 31 Jul 2011 at 4:26
Cannot reproduce on Debian 6.0 (kernel 2.6.32) with MATLAB R2008b.
Original comment by nonot...@gmail.com
on 31 Jul 2011 at 4:30
I have been unable to reproduce the bug on any configuration I have immediate
access to. Anyone capable of reproducing this bug, please run the following
code and post the resulting printout. This will allow me to determine which
thread(s) are preventing JVM termination.
import matlabcontrol.MatlabProxy;
import matlabcontrol.MatlabProxyFactory;

public class TerminationTest
{
    public static void main(String[] args) throws Throwable
    {
        listThreads("Before");
        MatlabProxyFactory factory = new MatlabProxyFactory();
        MatlabProxy proxy = factory.getProxy();
        listThreads("During");
        proxy.exit();
        listThreads("Immediately After");
        Thread.sleep(10000);
        listThreads("After 10 Seconds");
    }

    private static void listThreads(String title)
    {
        // Walk up to the root ThreadGroup
        ThreadGroup group = Thread.currentThread().getThreadGroup();
        while(true)
        {
            ThreadGroup parent = group.getParent();
            if(parent == null)
            {
                break;
            }
            else
            {
                group = parent;
            }
        }

        // Print all live non-daemon threads
        System.out.println(title);
        Thread[] threads = new Thread[1000];
        int numThreads = group.enumerate(threads, true);
        for(int i = 0; i < numThreads; i++)
        {
            Thread thread = threads[i];
            if(thread.isAlive() && !thread.isDaemon())
            {
                System.out.println("\t" + thread.toString());
            }
        }
    }
}
Original comment by nonot...@gmail.com
on 31 Jul 2011 at 5:04
You need to look into other groups too, and not ignore daemons (timers, RMI
threads).
Another way to check for memory leaks is to use a profiler (I use NetBeans, but
one should exist in other IDEs too).
Concerning daemons, I should explain a little: when you create a timer
(CheckConnectionTask), it creates a daemon thread which survives until all
references to it are released. Here, a thread Timer-0 is created and, if you
call getProxy() twice, you will create another thread (Timer-1, ...). None of
them is ever released, as there is always a reference somewhere (in RMI,
probably) to RemoteMatlabProxy.
But if you remove the reference RemoteMatlabProxy._connectionTimer and just
create the timer task with "new Timer().schedule(new CheckConnectionTask(),
...)", then the daemon thread disappears immediately. This proves that
RemoteMatlabProxy is never freed (RemoteMatlabProxy.finalize() is never
called).
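[Editor's note: a quick way to check whether an object such as RemoteMatlabProxy is actually reclaimable is a WeakReference probe. This is a minimal standalone sketch using plain Objects, not matlabcontrol classes; System.gc() is only a best-effort request, though HotSpot honors it with a full collection by default.]

```java
import java.lang.ref.WeakReference;

public class LeakProbe
{
    public static void main(String[] args)
    {
        Object held = new Object();                                  // strongly referenced throughout
        WeakReference<Object> heldRef = new WeakReference<>(held);
        WeakReference<Object> freeRef = new WeakReference<>(new Object()); // immediately unreachable

        // Best-effort GC requests; a full collection clears weak refs to unreachable objects
        System.gc();
        System.gc();

        System.out.println("held object collected: " + (heldRef.get() == null));
        System.out.println("free object collected: " + (freeRef.get() == null));

        // Use 'held' here so it stays strongly reachable past the probe
        System.out.println("probe done, held != null: " + (held != null));
    }
}
```

If the weak reference to a proxy is never cleared after a forced collection, something (RMI bookkeeping, a timer, ...) is still holding it.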
Here is my test code and associated results:
public static void main(String[] args) throws Throwable
{
    listThreadsPGL("Before");
    MatlabProxyFactory factory = new MatlabProxyFactory();
    MatlabProxy proxy = factory.getProxy();
    listThreadsPGL("During");
    proxy.exit();
    listThreadsPGL("Immediately After");
    Thread.sleep(10000);
    listThreadsPGL("After 10 Seconds");
}

private static void listThreadsPGL(String title)
{
    // Find root ThreadGroup
    ThreadGroup group = Thread.currentThread().getThreadGroup();
    while (group.getParent() != null)
        group = group.getParent();

    // Show all Threads in all groups
    System.out.println(title);
    ThreadGroup[] allGroups = new ThreadGroup[1000];
    int nGroups = group.enumerate(allGroups, true);
    for (int i = 0; i < nGroups; i++) {
        Thread[] threads = new Thread[1000];
        int nThreads = allGroups[i].enumerate(threads, true);
        for (int j = 0; j < nThreads; j++) {
            System.out.println("\t" + threads[j].toString());
        }
    }
}
Results:
Before
Thread[main,5,main]
During
Thread[main,5,main]
Thread[RMI TCP Connection(1)-127.0.0.1,5,RMI Runtime]
Thread[RMI TCP Connection(2)-127.0.0.1,5,RMI Runtime]
Thread[Timer-0,5,RMI Runtime]
Immediately After
Thread[main,5,main]
Thread[RMI TCP Connection(1)-127.0.0.1,5,RMI Runtime]
Thread[RMI TCP Connection(2)-127.0.0.1,5,RMI Runtime]
Thread[Timer-0,5,RMI Runtime]
After 10 Seconds
Thread[main,5,main]
Thread[RMI TCP Connection(idle),5,RMI Runtime]
Thread[RMI TCP Connection(idle),5,RMI Runtime]
Thread[Timer-0,5,RMI Runtime]
Original comment by Philippe...@gmail.com
on 1 Aug 2011 at 8:25
Same demonstration with an even more exhaustive thread list:
private static void listThreadsPGL(String title)
{
    // Show all Threads
    System.out.println(title);
    for (Thread thread : Thread.getAllStackTraces().keySet()) {
        System.out.println("\t" + thread.toString());
    }
}
Result:
Before
Thread[Signal Dispatcher,9,system]
Thread[main,5,main]
Thread[Attach Listener,5,system]
Thread[Reference Handler,10,system]
Thread[Finalizer,8,system]
During
Thread[Timer-0,5,RMI Runtime]
Thread[RMI Reaper,5,system]
Thread[Attach Listener,5,system]
Thread[Signal Dispatcher,9,system]
Thread[main,5,main]
Thread[GC Daemon,2,system]
Thread[Reference Handler,10,system]
Thread[RMI TCP Accept-0,5,system]
Thread[RMI TCP Connection(1)-127.0.0.1,5,RMI Runtime]
Thread[Finalizer,8,system]
Thread[RMI TCP Connection(2)-127.0.0.1,5,RMI Runtime]
Thread[RMI RenewClean-[10.10.144.62:2747,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[RMI TCP Accept-2100,5,system]
Thread[RMI Scheduler(0),5,system]
Immediately After
Thread[Timer-0,5,RMI Runtime]
Thread[RMI Reaper,5,system]
Thread[Attach Listener,5,system]
Thread[Signal Dispatcher,9,system]
Thread[main,5,main]
Thread[GC Daemon,2,system]
Thread[Reference Handler,10,system]
Thread[RMI TCP Accept-0,5,system]
Thread[RMI TCP Connection(1)-127.0.0.1,5,RMI Runtime]
Thread[Finalizer,8,system]
Thread[RMI TCP Connection(2)-127.0.0.1,5,RMI Runtime]
Thread[RMI RenewClean-[10.10.144.62:2747,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[RMI TCP Accept-2100,5,system]
Thread[RMI Scheduler(0),5,system]
After 10 Seconds
Thread[Timer-0,5,RMI Runtime]
Thread[Attach Listener,5,system]
Thread[Signal Dispatcher,9,system]
Thread[main,5,main]
Thread[GC Daemon,2,system]
Thread[Reference Handler,10,system]
Thread[RMI TCP Accept-0,5,system]
Thread[RMI TCP Connection(idle),5,RMI Runtime]
Thread[Finalizer,8,system]
Thread[RMI TCP Connection(idle),5,RMI Runtime]
Thread[RMI RenewClean-[10.10.144.62:2747,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[RMI TCP Accept-2100,5,system]
Thread[RMI Scheduler(0),5,system]
Original comment by Philippe...@gmail.com
on 1 Aug 2011 at 8:39
That's a far more efficient way of printing out all of the threads! (It is OK
to ignore daemon threads in this case; daemon threads do not prevent JVM
termination.)
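[Editor's note: the daemon-thread point can be demonstrated in isolation. This standalone sketch (not matlabcontrol code) starts a long-sleeping daemon thread and exits immediately anyway, because daemon threads do not keep the JVM alive.]

```java
public class DaemonDemo
{
    public static void main(String[] args)
    {
        // A thread that would otherwise run for a minute
        Thread background = new Thread(new Runnable()
        {
            public void run()
            {
                try { Thread.sleep(60000); } catch (InterruptedException e) { }
            }
        }, "background-daemon");
        background.setDaemon(true); // daemon threads do not prevent JVM termination
        background.start();

        // main returns now; the JVM exits despite the live daemon thread
        System.out.println("background is daemon: " + background.isDaemon());
    }
}
```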
A fix has been found and committed as r454. (Coincidentally enough, ~7 minutes
before your post.) You are correct that the issue is in RemoteMatlabProxy's
connection timer. The problem is that the timer was not being cancelled when it
was done; only its task was, which did not terminate the timer's thread. As the
timer's thread is not a daemon thread, it was capable of preventing JVM
termination.
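[Editor's note: the distinction between cancelling a TimerTask and cancelling the Timer itself can be shown in a standalone sketch (not matlabcontrol code). TimerTask.cancel() only unschedules the task; the Timer's background thread lives until Timer.cancel() is called.]

```java
import java.util.Timer;
import java.util.TimerTask;

public class TimerCancelDemo
{
    public static void main(String[] args) throws InterruptedException
    {
        Timer timer = new Timer("Demo-Timer"); // spawns a background thread
        TimerTask task = new TimerTask()
        {
            public void run() { } // stand-in for a periodic connection check
        };
        timer.schedule(task, 0, 100);

        task.cancel(); // cancels only the scheduled task...
        Thread.sleep(200);
        System.out.println("thread alive after task.cancel(): " + timerThreadAlive());

        timer.cancel(); // ...the thread only dies when the Timer itself is cancelled
        Thread.sleep(200);
        System.out.println("thread alive after timer.cancel(): " + timerThreadAlive());
    }

    private static boolean timerThreadAlive()
    {
        for (Thread t : Thread.getAllStackTraces().keySet())
        {
            if (t.getName().equals("Demo-Timer"))
            {
                return true;
            }
        }
        return false;
    }
}
```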
The issue has been resolved in trunk, which will become the v5 release. I will
work on incorporating this fix and a few other small improvements into a v4
release soon.
Original comment by nonot...@gmail.com
on 1 Aug 2011 at 8:47
I need to not ignore daemon threads, as I want to incorporate your library into
a Java application server (J2EE bean) which uses a single JVM for many, many
MATLAB launches...
If daemon threads survive, they will consume a lot of memory over time.
Can you have a look at this issue too, please?
I am trying your r454 version right now to check the current status of this issue.
Thanks!
Original comment by Philippe...@gmail.com
on 1 Aug 2011 at 8:52
[deleted comment]
matlabcontrol does not intentionally create any "orphaned" threads, daemon or
otherwise, inside a JVM running outside of MATLAB. No threads run longer than
they need to. However, once the first proxy has been requested, RMI will
continue to run daemon threads indefinitely; these are reused, so repeated
proxy requests will not lead to an ever-increasing number of threads.
Inside of MATLAB, matlabcontrol does create a number of threads which can only
be terminated by exiting MATLAB. However, as MATLAB is a GUI Java application
that only terminates when exited by a user or programmatically, this should not
be an issue.
Be aware that r454 is part of what will be v5 of matlabcontrol; it is *not*
compatible with the current release of matlabcontrol (which is v4).
Original comment by nonot...@gmail.com
on 1 Aug 2011 at 8:59
Using r454.
When launching the "getProxy(), exit()" sequence 3 times with the same
MatlabProxyFactory, there is one new thread after each sequence (i.e., the
memory leak still exists, though it is much smaller than in v4.0.0).
When the MatlabProxyFactory is finally released, the total number of active
threads falls to a reasonable value.
To sum up:
Start = 5 threads
Create 1 factory, after 3 getProxy/exit, release factory = 16 threads
Create 1 factory, after 3 getProxy/exit, release factory = 21 threads
Create 1 factory, after 3 getProxy/exit, release factory = 24 threads
After 10 secs = 22 threads
The main issue is the "RMI RenewClean" threads that never end. Have a look in
the forums about them (RMI callback issue):
http://community.jboss.org/message/219930
I think that if you fix this RMI issue, you will not have to kill the timer
thread (it will terminate by itself when the remote proxy is finalized; I
successfully tested this in v4.0.0).
Code used:
public static void main(String[] args) throws Throwable
{
    listThreadsPGL("Before");
    for (int i = 0; i < 3; i++) {
        MatlabProxyFactory factory = new MatlabProxyFactory();
        for (int j = 0; j < 3; j++) {
            MatlabProxy proxy = factory.getProxy();
            // listThreadsPGL("During " + i + "-" + j);
            proxy.exit();
            // Allocate garbage to pressure the collector, then wait 1 sec
            for (int k = 0; k < 10; k++) { int[] dummy = new int[5000000]; }
            Thread.sleep(1000);
        }
        listThreadsPGL("Immediately After " + i + "-x");
    }
    // Pressure the garbage collector once more
    for (int k = 0; k < 10; k++) { int[] dummy = new int[5000000]; }
    Thread.sleep(10000);
    listThreadsPGL("After 10 Seconds");
}

private static void listThreadsPGL(String title)
{
    // Show all Threads
    Set<Thread> threads = Thread.getAllStackTraces().keySet();
    System.out.println(title + " (" + threads.size() + " threads)");
    for (Thread thread : threads) {
        System.out.println("\t" + thread.toString());
    }
}
Detailed log:
Before (5 threads)
Thread[Reference Handler,10,system]
Thread[Finalizer,8,system]
Thread[Signal Dispatcher,9,system]
Thread[main,5,main]
Thread[Attach Listener,5,system]
Immediately After 0-x (16 threads)
Thread[RMI TCP Connection(idle),5,RMI Runtime]
Thread[RMI RenewClean-[10.10.144.62:3663,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[RMI Scheduler(0),5,system]
Thread[Attach Listener,5,system]
Thread[MLC Connection Listener PROXY_REMOTE_aa682187-3a08-4546-87bd-eb8adb5317cd,5,RMI Runtime]
Thread[RMI Reaper,5,system]
Thread[RMI TCP Connection(idle),5,RMI Runtime]
Thread[RMI TCP Accept-0,5,system]
Thread[main,5,main]
Thread[RMI RenewClean-[10.10.144.62:3692,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[Finalizer,8,system]
Thread[GC Daemon,2,system]
Thread[Reference Handler,10,system]
Thread[Signal Dispatcher,9,system]
Thread[RMI RenewClean-[10.10.144.62:3679,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[RMI TCP Accept-2100,5,system]
Immediately After 1-x (21 threads)
Thread[RMI TCP Connection(7)-127.0.0.1,5,RMI Runtime]
Thread[RMI RenewClean-[10.10.144.62:3663,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[Attach Listener,5,system]
Thread[RMI Scheduler(0),5,system]
Thread[RMI Reaper,5,system]
Thread[RMI RenewClean-[10.10.144.62:3655,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[RMI RenewClean-[10.10.144.62:3732,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[RMI TCP Connection(idle),5,RMI Runtime]
Thread[RMI TCP Accept-0,5,system]
Thread[main,5,main]
Thread[RMI RenewClean-[10.10.144.62:3692,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[RMI TCP Connection(idle),5,RMI Runtime]
Thread[RMI RenewClean-[10.10.144.62:3784,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[Finalizer,8,system]
Thread[GC Daemon,2,system]
Thread[RMI RenewClean-[10.10.144.62:3707,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[Reference Handler,10,system]
Thread[Signal Dispatcher,9,system]
Thread[RMI TCP Accept-2100,5,system]
Thread[RMI RenewClean-[10.10.144.62:3679,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[MLC Connection Listener PROXY_REMOTE_2f641ab8-da90-492d-ac84-0b6d7d6b99d0,5,RMI Runtime]
Immediately After 2-x (24 threads)
Thread[RMI TCP Connection(7)-127.0.0.1,5,RMI Runtime]
Thread[RMI RenewClean-[10.10.144.62:3663,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[Attach Listener,5,system]
Thread[RMI Scheduler(0),5,system]
Thread[RMI Reaper,5,system]
Thread[RMI RenewClean-[10.10.144.62:3655,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[RMI RenewClean-[10.10.144.62:3732,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[RMI TCP Connection(idle),5,RMI Runtime]
Thread[RMI TCP Accept-0,5,system]
Thread[RMI RenewClean-[10.10.144.62:3861,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[main,5,main]
Thread[RMI RenewClean-[10.10.144.62:3692,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[RMI TCP Connection(idle),5,RMI Runtime]
Thread[RMI RenewClean-[10.10.144.62:3784,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[RMI RenewClean-[10.10.144.62:3824,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[RMI RenewClean-[10.10.144.62:3846,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[Finalizer,8,system]
Thread[GC Daemon,2,system]
Thread[RMI RenewClean-[10.10.144.62:3707,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[MLC Connection Listener PROXY_REMOTE_f54fdb0c-750e-410d-9a0b-c38eea04b0f2,5,RMI Runtime]
Thread[Reference Handler,10,system]
Thread[Signal Dispatcher,9,system]
Thread[RMI TCP Accept-2100,5,system]
Thread[RMI RenewClean-[10.10.144.62:3679,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
After 10 Seconds (22 threads)
Thread[RMI TCP Connection(7)-127.0.0.1,5,RMI Runtime]
Thread[RMI RenewClean-[10.10.144.62:3663,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[Attach Listener,5,system]
Thread[RMI Scheduler(0),5,system]
Thread[RMI RenewClean-[10.10.144.62:3655,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[RMI RenewClean-[10.10.144.62:3732,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[RMI TCP Connection(idle),5,RMI Runtime]
Thread[RMI TCP Accept-0,5,system]
Thread[RMI RenewClean-[10.10.144.62:3861,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[main,5,main]
Thread[RMI RenewClean-[10.10.144.62:3692,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[RMI TCP Connection(idle),5,RMI Runtime]
Thread[RMI RenewClean-[10.10.144.62:3784,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[RMI RenewClean-[10.10.144.62:3824,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[RMI RenewClean-[10.10.144.62:3846,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[Finalizer,8,system]
Thread[GC Daemon,2,system]
Thread[RMI RenewClean-[10.10.144.62:3707,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Thread[Reference Handler,10,system]
Thread[Signal Dispatcher,9,system]
Thread[RMI TCP Accept-2100,5,system]
Thread[RMI RenewClean-[10.10.144.62:3679,matlabcontrol.LocalHostRMIHelper$LocalHostRMISocketFactory@5],5,system]
Original comment by Philippe...@gmail.com
on 1 Aug 2011 at 9:32
I was entirely unaware of this issue with RMI RenewClean thread accumulation.
So my previous statement that there would not be an ever-increasing number of
RMI threads created...was entirely incorrect!
I will be looking into this, thanks for bringing it to my attention.
Original comment by nonot...@gmail.com
on 1 Aug 2011 at 9:45
RMI RenewClean threads are being kept alive for matlabcontrol.MatlabSessionImpl
and matlabcontrol.JMIWrapperRemoteImpl. Both of these classes have instances
exported in MATLAB's JVM. The external Java application *does* unexport its
reference to JMIWrapperRemote and does not hold a reference to
MatlabSessionImpl. It's possible that these threads will go away after the
distributed garbage collector reaps them (its lease defaults to 10 minutes).
More investigation is needed.
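[Editor's note: the 10-minute figure is the default of the standard JDK property java.rmi.dgc.leaseValue (600000 ms). As a diagnostic or workaround, shortening the lease lets the distributed GC reap unused remote references, and their RenewClean bookkeeping, sooner. A sketch; the property must be set before any remote object is exported:]

```java
public class DgcLeaseConfig
{
    public static void main(String[] args)
    {
        // Standard JDK RMI property: DGC lease duration in ms (default 600000 = 10 min).
        // Set it before exporting any remote objects.
        System.setProperty("java.rmi.dgc.leaseValue", "30000");
        System.out.println("lease = " + System.getProperty("java.rmi.dgc.leaseValue") + " ms");
    }
}
```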
Original comment by nonot...@gmail.com
on 1 Aug 2011 at 10:55
I added GC logs (VM option -verbose:gc) and forced calls to finalization and GC.
I confirm that the 9 related threads never die, even in r456 (the logs show the
GC ran in its "Full" version for each call to gc(), and once more
automatically).
Code:
public static void main(String[] args) throws Throwable
{
    listThreadsPGL("Before");
    for (int i = 0; i < 3; i++) {
        MatlabProxyFactory factory = new MatlabProxyFactory();
        for (int j = 0; j < 3; j++) {
            MatlabProxy proxy = factory.getProxy();
            // listThreadsPGL("During " + i + "-" + j);
            proxy.exit();
            // Request finalization and a GC run, then wait 1 sec
            Runtime.getRuntime().runFinalization();
            Runtime.getRuntime().gc();
            Thread.sleep(1000);
        }
        listThreadsPGL("Immediately After " + i + "-x");
    }
    // Request finalization and a GC run once more
    Runtime.getRuntime().runFinalization();
    Runtime.getRuntime().gc();
    Thread.sleep(10000);
    listThreadsPGL("After 10 Seconds");
}

private static void listThreadsPGL(String title)
{
    // Show all Threads
    Set<Thread> threads = Thread.getAllStackTraces().keySet();
    System.out.println(title + " (" + threads.size() + " threads)");
    for (Thread thread : threads) {
        System.out.println("\t" + thread.toString());
    }
}
Results:
Before (5 threads)
Thread[Finalizer,8,system]
Thread[Signal Dispatcher,9,system]
Thread[main,5,main]
Thread[Reference Handler,10,system]
Thread[Attach Listener,5,system]
[Full GC 1247K->205K(15872K), 0.0526757 secs]
[Full GC 1361K->370K(15936K), 0.0335775 secs]
[Full GC 903K->449K(15936K), 0.0303195 secs]
[Full GC 992K->334K(15936K), 0.0365139 secs]
Immediately After 0-x (16 threads)
Thread[RMI RenewClean-[10.10.144.62:2447,MLC localhost Socket Factory],5,system]
Thread[GC Daemon,2,system]
Thread[main,5,main]
Thread[Reference Handler,10,system]
Thread[RMI Scheduler(0),5,system]
Thread[RMI TCP Connection(idle),5,RMI Runtime]
Thread[Signal Dispatcher,9,system]
Thread[RMI RenewClean-[10.10.144.62:2430,MLC localhost Socket Factory],5,system]
Thread[Attach Listener,5,system]
Thread[MLC Connection Listener PROXY_REMOTE_68f3b7bb-4496-44c2-9d6a-20cd0b9adc55,5,RMI Runtime]
Thread[RMI TCP Accept-2100,5,system]
Thread[RMI RenewClean-[10.10.144.62:2465,MLC localhost Socket Factory],5,system]
Thread[RMI Reaper,5,system]
Thread[Finalizer,8,system]
Thread[RMI TCP Connection(idle),5,RMI Runtime]
Thread[RMI TCP Accept-0,5,system]
[Full GC 1354K->458K(15936K), 0.0300792 secs]
[Full GC 1371K->543K(15936K), 0.0307754 secs]
[Full GC 1470K->629K(15936K), 0.0319879 secs]
Immediately After 1-x (21 threads)
Thread[RMI RenewClean-[10.10.144.62:2447,MLC localhost Socket Factory],5,system]
Thread[GC Daemon,2,system]
Thread[RMI RenewClean-[10.10.144.62:2528,MLC localhost Socket Factory],5,system]
Thread[main,5,main]
Thread[Reference Handler,10,system]
Thread[RMI Scheduler(0),5,system]
Thread[RMI RenewClean-[10.10.144.62:2505,MLC localhost Socket Factory],5,system]
Thread[RMI TCP Connection(idle),5,RMI Runtime]
Thread[Signal Dispatcher,9,system]
Thread[RMI TCP Connection(idle),5,RMI Runtime]
Thread[RMI RenewClean-[10.10.144.62:2430,MLC localhost Socket Factory],5,system]
Thread[Attach Listener,5,system]
Thread[RMI RenewClean-[10.10.144.62:2390,MLC localhost Socket Factory],5,system]
Thread[RMI TCP Accept-2100,5,system]
Thread[RMI Reaper,5,system]
Thread[RMI RenewClean-[10.10.144.62:2465,MLC localhost Socket Factory],5,system]
Thread[Finalizer,8,system]
Thread[RMI RenewClean-[10.10.144.62:2482,MLC localhost Socket Factory],5,system]
Thread[RMI TCP Connection(7)-127.0.0.1,5,RMI Runtime]
Thread[MLC Connection Listener PROXY_REMOTE_38cc9f41-85ff-4868-894d-deccbe426d18,5,RMI Runtime]
Thread[RMI TCP Accept-0,5,system]
[Full GC 1626K->404K(15936K), 0.0334400 secs]
[Full GC 1296K->489K(15936K), 0.0345455 secs]
[Full GC 1423K->575K(15936K), 0.0312389 secs]
Immediately After 2-x (24 threads)
Thread[RMI RenewClean-[10.10.144.62:2447,MLC localhost Socket Factory],5,system]
Thread[GC Daemon,2,system]
Thread[RMI RenewClean-[10.10.144.62:2551,MLC localhost Socket Factory],5,system]
Thread[RMI RenewClean-[10.10.144.62:2528,MLC localhost Socket Factory],5,system]
Thread[main,5,main]
Thread[Reference Handler,10,system]
Thread[MLC Connection Listener PROXY_REMOTE_0ad937cf-cf77-4810-b4f9-c27d99658c3d,5,RMI Runtime]
Thread[RMI Scheduler(0),5,system]
Thread[RMI RenewClean-[10.10.144.62:2505,MLC localhost Socket Factory],5,system]
Thread[RMI RenewClean-[10.10.144.62:2567,MLC localhost Socket Factory],5,system]
Thread[RMI TCP Connection(idle),5,RMI Runtime]
Thread[Signal Dispatcher,9,system]
Thread[RMI TCP Connection(idle),5,RMI Runtime]
Thread[RMI RenewClean-[10.10.144.62:2430,MLC localhost Socket Factory],5,system]
Thread[Attach Listener,5,system]
Thread[RMI RenewClean-[10.10.144.62:2390,MLC localhost Socket Factory],5,system]
Thread[RMI TCP Accept-2100,5,system]
Thread[RMI Reaper,5,system]
Thread[RMI RenewClean-[10.10.144.62:2465,MLC localhost Socket Factory],5,system]
Thread[RMI RenewClean-[10.10.144.62:2599,MLC localhost Socket Factory],5,system]
Thread[Finalizer,8,system]
Thread[RMI RenewClean-[10.10.144.62:2482,MLC localhost Socket Factory],5,system]
Thread[RMI TCP Connection(7)-127.0.0.1,5,RMI Runtime]
Thread[RMI TCP Accept-0,5,system]
[Full GC 804K->576K(15936K), 0.0336026 secs]
After 10 Seconds (22 threads)
Thread[RMI RenewClean-[10.10.144.62:2447,MLC localhost Socket Factory],5,system]
Thread[GC Daemon,2,system]
Thread[RMI RenewClean-[10.10.144.62:2551,MLC localhost Socket Factory],5,system]
Thread[RMI RenewClean-[10.10.144.62:2528,MLC localhost Socket Factory],5,system]
Thread[main,5,main]
Thread[Reference Handler,10,system]
Thread[RMI Scheduler(0),5,system]
Thread[RMI RenewClean-[10.10.144.62:2505,MLC localhost Socket Factory],5,system]
Thread[RMI RenewClean-[10.10.144.62:2567,MLC localhost Socket Factory],5,system]
Thread[RMI TCP Connection(idle),5,RMI Runtime]
Thread[Signal Dispatcher,9,system]
Thread[RMI TCP Connection(idle),5,RMI Runtime]
Thread[RMI RenewClean-[10.10.144.62:2430,MLC localhost Socket Factory],5,system]
Thread[Attach Listener,5,system]
Thread[RMI RenewClean-[10.10.144.62:2390,MLC localhost Socket Factory],5,system]
Thread[RMI TCP Accept-2100,5,system]
Thread[RMI RenewClean-[10.10.144.62:2465,MLC localhost Socket Factory],5,system]
Thread[RMI RenewClean-[10.10.144.62:2599,MLC localhost Socket Factory],5,system]
Thread[Finalizer,8,system]
Thread[RMI RenewClean-[10.10.144.62:2482,MLC localhost Socket Factory],5,system]
Thread[RMI TCP Connection(7)-127.0.0.1,5,RMI Runtime]
Thread[RMI TCP Accept-0,5,system]
Original comment by Philippe...@gmail.com
on 1 Aug 2011 at 11:50
[deleted comment]
To summarize what is going on:
matlabcontrol v4.0.0 creates at least one thread each time a proxy is created,
and these threads do not terminate, as they should, when the proxy is no longer
connected to MATLAB. There are two aspects to this.
The first is that the proxy uses a timer to check whether it is still
connected; this timer was not being properly terminated. Because the timer was
created on a daemon thread, the timer's thread was a daemon thread. A fix for
this has been checked in to trunk and can easily be incorporated into a v4
release.
The second aspect is that when connecting to a new session of MATLAB, RMI
creates a RenewClean thread which appears to correspond to the remote objects
exported from the MATLAB JVM. This thread is never terminated, so each
additional session of MATLAB that is launched leaves one additional orphaned
daemon thread. This behavior does not occur when reconnecting to a previously
controlled session of MATLAB. No solution has been found yet, but it is quite
possibly related to properly unexporting these objects. (Attempts to do so have
so far not led to the desired result.)
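[Editor's note: unexporting is done with UnicastRemoteObject.unexportObject. This minimal sketch, using a hypothetical Ping interface rather than matlabcontrol's actual remote interfaces, shows the export/unexport cycle in isolation; with force=true the object is unexported even if calls are in progress.]

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

public class UnexportDemo
{
    // Minimal remote interface for illustration only
    interface Ping extends Remote
    {
    }

    public static void main(String[] args) throws RemoteException
    {
        Ping impl = new Ping() { };
        // Export on an anonymous port; this starts RMI runtime threads
        UnicastRemoteObject.exportObject(impl, 0);
        // Force the unexport; afterwards RMI no longer holds the object
        boolean unexported = UnicastRemoteObject.unexportObject(impl, true);
        System.out.println("unexported: " + unexported);
    }
}
```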
Because these additional threads are daemon threads, they do not prevent JVM
termination; that led to the earlier confusion on my part - I was
misinterpreting what was meant by the threads not dying (as all threads die
when the JVM terminates). These two bugs can cause problems for applications
that launch and exit a large number of MATLAB sessions. Applications launching
and exiting only a few MATLAB sessions will experience a performance hit due to
the additional threads, but this bug should not prevent them from operating as
expected.
Original comment by nonot...@gmail.com
on 1 Aug 2011 at 7:47
Original comment by nonot...@gmail.com
on 26 Feb 2012 at 9:45
Original issue reported on code.google.com by
Philippe...@gmail.com
on 29 Jul 2011 at 1:08