Closed Delfic closed 9 years ago
This does not happen to me; those threads do keep living,
"nioEventLoopGroup-2-1" prio=10 tid=0x0000000015d64800 nid=0x1ec4 runnable [0x0000000016e7e000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.WindowsSelectorImpl$SubSelector.$$YJP$$poll0(Native Method)
at sun.nio.ch.WindowsSelectorImpl$SubSelector.poll0(WindowsSelectorImpl.java)
at sun.nio.ch.WindowsSelectorImpl$SubSelector.poll(WindowsSelectorImpl.java:296)
at sun.nio.ch.WindowsSelectorImpl$SubSelector.access$400(WindowsSelectorImpl.java:278)
at sun.nio.ch.WindowsSelectorImpl.doSelect(WindowsSelectorImpl.java:159)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- locked <0x00000007ac526498> (a io.netty.channel.nio.SelectedSelectionKeySet)
- locked <0x00000007ac514950> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000007ac5147e8> (a sun.nio.ch.WindowsSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
but they are reused when connecting the next time. (Unless you leak NioEventLoopGroup somewhere... are you creating a new DefaultSmppClient for every rebind without destroying the previous one?)
Thank you for your quick reply.
The client is the same throughout the lifecycle of the application. It's only created once, at startup.
unless you leak NioEventLoopGroup somewhere...
How could this be happening?
*Destroy a session by ensuring the socket is closed and all
* resources are cleaned up.
This is part of the javadoc for the destroy()
method. Alas, not all the resources are cleaned up, since the thread used by the session is still active. And the next time my RebindTask runs and calls
Session session = client.bind(sessionConfig,sessionHandler);
a new thread is created and the previous one is not reused. The client
variable is instantiated only once and passed around everywhere; for instance, my sessionHandler has this client
as a private variable so that in the fireUnexpectedlyChannelClose
method it can call client.bind()
. This method will do
if (session != null) {
    session.unbind(5000);
    session.close();
    session.destroy();
}
runRebindTask();
but they are reused when connecting the next time.
How do you connect the next time?
When you reconnect, you could create a new NioEventLoopGroup without destroying the old one; that would result in the leak you observe. Of course, it would be silly to do it intentionally.
You could have many sessions managed by only one worker thread from a NioEventLoopGroup. Doing that would be bad, but it illustrates that the thread is not owned by the session, so destroying a session does not mean that the thread should die.
But if you create only one client which uses, for example, new NioEventLoopGroup(100)
, then every reconnect (destroying the session and creating a new one) creates a new thread instead of reusing the old one, which eventually results in 100 worker threads.
This is probably what you observe. It seems strange, and may indeed be a bug in netty or in cloudhopper.
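The leak pattern can be illustrated with plain JDK executors. This is only a sketch with hypothetical names, not netty code, but the analogy holds: a worker thread belongs to the executor (the group), not to any one task (session), so creating a fresh executor per reconnect leaks a thread each time, while reusing one does not:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadLeakSketch {

    // Leaky pattern: every "reconnect" builds a fresh single-thread executor
    // and never shuts the old one down. Returns how many threads were created.
    static int leakyReconnects(int reconnects) throws Exception {
        AtomicInteger created = new AtomicInteger();
        for (int i = 0; i < reconnects; i++) {
            ExecutorService fresh = Executors.newSingleThreadExecutor(r -> {
                created.incrementAndGet();
                Thread t = new Thread(r, "leaky-worker-" + created.get());
                t.setDaemon(true);
                return t;
            });
            fresh.submit(() -> { }).get(); // force the worker thread to start
            // no fresh.shutdown(): its worker keeps running, like the
            // smppclient-1-1, smppclient-1-2, ... threads in the report
        }
        return created.get();
    }

    // Correct pattern: one executor is reused across every "reconnect".
    static int sharedReconnects(int reconnects) throws Exception {
        AtomicInteger created = new AtomicInteger();
        ExecutorService shared = Executors.newSingleThreadExecutor(r -> {
            created.incrementAndGet();
            Thread t = new Thread(r, "shared-worker");
            t.setDaemon(true);
            return t;
        });
        for (int i = 0; i < reconnects; i++) {
            shared.submit(() -> { }).get();
        }
        shared.shutdown();
        return created.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("leaky:  " + leakyReconnects(3));  // 3 threads created
        System.out.println("shared: " + sharedReconnects(3)); // 1 thread created
    }
}
```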
After fireUnexpectedlyChannelClose
it seems unnecessary to call
session.unbind(5000);
session.close();
session.destroy();
should be enough on its own. If you look into #unbind, it checks that the channel is active, so once it is closed, it cannot send any unbind request anyway. And #close is called by #destroy.
How do you connect the next time?
Look at com.cloudhopper.smpp.demo.persist.OutboundClient#reconnect
I don't even know what a NioEventLoopGroup
is.
One thing I forgot to mention explicitly is that I isolated the smpp usage into a simple standalone project, in which I connect a single client to the simulator to check whether it receives messages from my web application. In this standalone project the number of threads does not increase on each reconnect.
I'm having these problems under WebSphere 7, which uses its own version of Java 6.
session.destroy(); should be enough
I thought so too
Look at com.cloudhopper.smpp.demo.persist.OutboundClient#reconnect
I shall have a look at this tomorrow morning, but for now, from what I could understand, the differences in my approach are as follows:
- I set session = null
after calling session.destroy()
- I don't pass a ThreadFactory
to the executor; I call scheduleAtFixedRate
and pass it a new instance of my RebindTask
, which implements Runnable
. When it does not throw any exception, it calls a method that cancels this task and "turns on" my ElinkTask
in the same fashion. Can any of these influence my behaviour?
Is it a problem of WebSphere itself? I will try running the standalone client with the JVM from WebSphere tomorrow.
Thanks for the help
Could you post how you initialize the ExecutorService or NioEventLoopGroup you pass into DefaultSmppClient?
I would guess that the JDK might have a bug; your approach seems correct.
I initialize it like this
this.timer = Executors.newSingleThreadScheduledExecutor();
Then I have this method
private void runRebindTask() {
    this.rebindTask = this.timer.scheduleAtFixedRate(new RebindTask(this), 10, getRebindPeriod(),
            TimeUnit.MILLISECONDS);
}
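Since the extra threads were spotted with jconsole, it can help to give this timer thread a recognizable name via a ThreadFactory. This is a hypothetical variant of the initialization above, not the actual code from the thread:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class NamedTimerSketch {

    // Runs one task on the timer and reports which thread executed it.
    static String timerThreadName() throws Exception {
        // Same as Executors.newSingleThreadScheduledExecutor(), but with a
        // ThreadFactory that names the thread so it stands out in jconsole
        // next to netty's nioEventLoopGroup-* / smppclient-* workers.
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "rebind-timer");
            t.setDaemon(true);
            return t;
        });
        try {
            return timer.schedule(() -> Thread.currentThread().getName(),
                    0, TimeUnit.MILLISECONDS).get();
        } finally {
            timer.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(timerThreadName()); // prints "rebind-timer"
    }
}
```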
Just noticed something really important. In my application I'm using the latest netty 4 beta. In my standalone project (where there are no extra threads) I'm using 3.9.0.Final.
Going to try and change the netty version in the application to see if there's any changes.
That did it!
The problem seems to be related to the version
6.0.0-netty4-beta-2
I changed the dependency to 5.0.8 and the thread count stayed the same during successive bind calls.
Thank you for your help.
Should I edit this issue for it to be an issue regarding netty4?
It is a feature of netty: https://github.com/netty/netty/issues/2727#issuecomment-51024739 Your thread count would stay the same with netty 4 as well, if you created the NioEventLoopGroup with the max number of sessions you want to have.
But how do I create NioEventLoopGroup? Where do I control this?
In your code, if you know you want only one session per DefaultSmppClient, then like this:
NioEventLoopGroup group = new NioEventLoopGroup(1);
DefaultSmppClient clientBootstrap = new DefaultSmppClient(group, monitorExecutor);
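A side note on cleanup, not stated in the thread but implied by it: destroying the client's sessions never stops the group's worker threads; in netty 4 that is what shutdownGracefully() is for. A sketch, assuming the group and clientBootstrap variables above:

```java
// Hypothetical application-shutdown sequence for the client created above.
// clientBootstrap.destroy() releases cloudhopper's resources;
// group.shutdownGracefully() is what actually terminates the
// NioEventLoopGroup worker threads.
clientBootstrap.destroy();
group.shutdownGracefully().syncUninterruptibly();
```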
Got it.
Thanks again, for your help.
For my project I'm using something like the persistent-client demo.
I'm using it in an application server environment. There was a spike in the thread count, which I noticed while examining the application with jconsole. After checking and rechecking my implementation, the problem comes from cloudhopper.
I have a client and call Session session = client.bind(sessionConfig,sessionHandler);
And then I use the session for my work. This happens with a real provider and also with an smpp simulator. When I restart the simulator, a ChannelUnexpectedlyClosed is fired, and in the handler for that I call my rebind logic.
In my rebind logic, before calling the RebindTask I call
if (session != null) {
    session.unbind(5000);
    session.close();
    session.destroy();
}
On every rebind attempt, a new thread emerges with the name smppclient-1-1, then smppclient-1-2, etc.
How can I kill the threads that are left active by netty?
How can I interrupt this thread so it is not active after the session is closed?