Open: yujie123jw opened this issue 7 years ago
Hi yujie123jw,
What is the version of the IoT server you are running? If an OOM has been encountered, WSO2 servers automatically write a heap dump to CARBON_HOME/repository/logs/heap-dump.hprof. Can you please provide us with the heap dump for further analysis?
If you are using IoT v3.0.0, the broker's heap dump will be at IOT_HOME/broker/repository/logs/heap-dump.hprof.
If you are using IoT v3.1.0, the broker's heap dump will be at IOT_HOME/wso2/broker/repository/logs/heap-dump.hprof.
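For example, a quick check for whether a dump was written (adjust IOT_HOME to your installation; the exact path depends on the version as noted above):

# list whichever heap dump exists for the two possible layouts
ls -lh IOT_HOME/broker/repository/logs/heap-dump.hprof \
       IOT_HOME/wso2/broker/repository/logs/heap-dump.hprof 2>/dev/null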
I'm sorry, heap-dump.hprof is too big, and it looks garbled when opened (it is a binary heap dump).

[wso2@webapp1 logs]$ du -sh heap-dump.hprof
4.8G    heap-dump.hprof
[wso2@webapp1 logs]$
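In case a smaller artifact is acceptable, something like this could work instead (only a sketch; gzip and the JDK's jmap are assumed to be available, and <broker-pid> is a placeholder for the broker process id):

# compress the dump without touching the original (hprof files usually compress well)
gzip -c heap-dump.hprof > heap-dump.hprof.gz

# or capture just a class histogram of the running broker, which is small enough to paste here
jmap -histo:live <broker-pid> | head -n 40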
But I switched to the broker 3.2.0 release; it looks stable and has been running for one day.
Oh, OK. I think the problem here could be related to the 150 concurrent client connections, which we need to simulate, test and observe. Anyhow, did you increase the memory allocated to the broker JVM? How much memory have you allocated now? This can be found in the startup script, in the form of "-Xms256m -Xmx1024m -XX:MaxPermSize=512m \". Please share that detail with us too.
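For example, something along these lines should show the current values (the script name bin/wso2server.sh and the path below are assumptions; adjust them to your installation):

# print the JVM heap settings from the broker's startup script
grep -nE 'Xms|Xmx|MaxPermSize' IOT_HOME/wso2/broker/bin/wso2server.sh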
I tried to adjust these parameters, but it did not help:

do
    $JAVACMD \
    -Xbootclasspath/a:"$CARBON_XBOOTCLASSPATH" \
    -Xms2048m -Xmx4096m -XX:MaxPermSize=256m \
    -XX:-UseGCOverheadLimit \
    -XX:+HeapDumpOnOutOfMemoryError \
    -XX:HeapDumpPath="$CARBON_HOME/repository/logs/heap-dump.hprof" \
    $JAVA_OPTS \
    -DandesConfig=broker.xml \
    -Dcom.sun.management.jmxremote \
    -classpath "$CARBON_CLASSPATH" \
    -Djava.endorsed.dirs="$JAVA_ENDORSED_DIRS" \
    -Djava.io.tmpdir="$CARBON_HOME/tmp" \
    -Dcatalina.base="$CARBON_HOME/lib/tomcat" \
    -Dwso2.server.standalone=true \
    -Dcarbon.registry.root=/ \
    -Djava.command="$JAVACMD" \
    -Dcarbon.home="$CARBON_HOME" \
    -Dlogger.server.name="IoT-Broker" \
    -Djava.util.logging.config.file="$CARBON_HOME/repository/conf/log4j.properties" \
    -Dcarbon.config.dir.path="$CARBON_HOME/repository/conf" \
    -Dcomponents.repo="$CARBON_HOME/repository/components/plugins" \
    -Dconf.location="$CARBON_HOME/repository/conf" \
    -Dcom.atomikos.icatch.file="$CARBON_HOME/lib/transactions.properties" \
    -Dcom.atomikos.icatch.hide_init_file_path=true \
    -Dorg.apache.jasper.compiler.Parser.STRICT_QUOTE_ESCAPING=false \
    -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true \
    -Dcom.sun.jndi.ldap.connect.pool.authentication=simple \
    -Dcom.sun.jndi.ldap.connect.pool.timeout=3000 \
    -Dorg.terracotta.quartz.skipUpdateCheck=true \
    -Djava.security.egd=file:/dev/./urandom \
    -Dfile.encoding=UTF8 \
    -Djava.net.preferIPv4Stack=true \
    -Dcom.ibm.cacheLocalHost=true \
    org.wso2.carbon.bootstrap.Bootstrap $*
    status=$?
done
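For what it is worth, this is roughly how I check that the new settings are actually picked up and watch the heap under load (jps and jstat ship with the JDK; <broker-pid> is a placeholder for the broker process id):

# confirm the broker JVM was started with -Xmx4096m
jps -v | grep Bootstrap

# sample heap occupancy and GC activity every 5 seconds
jstat -gcutil <broker-pid> 5000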
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
@yujie123jw What did you mean by the term 'invalid'? Any observations?
I added the JVM parameter "-XX:-UseGCOverheadLimit" to address the broker's "Java heap space" error, but it had no effect!
I think there is a small problem in the code; it shows up under high concurrency!
The machine has 16 GB of memory, runs CentOS 6.5 on a 3-core CPU, and 150 clients connect concurrently from another machine. After two hours the message broker runs out of Java heap space!
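For reference, the load can be approximated from a second machine with a loop like the one below; mosquitto_sub is only an illustration of 150 held-open MQTT connections (my real clients are different), and the broker host/port are taken from the log that follows:

#!/bin/bash
# open 150 concurrent MQTT subscriptions against the broker and keep them alive
for i in $(seq 1 150); do
    mosquitto_sub -h 192.168.5.242 -p 1886 -t "loadtest/topic" -i "load-client-$i" &
done
wait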
TID: [] [] [2017-05-11 02:10:52,685] ERROR {org.wso2.andes.kernel.slot.SlotDeliveryWorker} - Error while running Slot Delivery Worker. {org.wso2.andes.kernel.slot.SlotDeliveryWorker} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 02:04:31,632] ERROR {org.dna.mqtt.moquette.server.netty.metrics.MessageMetricsHandler} - Java heap space {org.dna.mqtt.moquette.server.netty.metrics.MessageMetricsHandler} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 02:32:02,262] ERROR {org.wso2.andes.kernel.AndesRecoveryTask} - Error in running andes recovery task {org.wso2.andes.kernel.AndesRecoveryTask} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 02:32:37,697] INFO {org.wso2.andes.kernel.AndesRecoveryTask} - Running DB sync task. {org.wso2.andes.kernel.AndesRecoveryTask}
TID: [-1234] [] [2017-05-11 02:29:19,913] WARN {org.wso2.carbon.registry.indexing.ResourceSubmitter} - An error occurred while submitting resources for indexing {org.wso2.carbon.registry.indexing.ResourceSubmitter} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 02:26:10,875] WARN {io.netty.channel.socket.nio.NioServerSocketChannel} - Failed to create a new channel from an accepted socket. {io.netty.channel.socket.nio.NioServerSocketChannel} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 02:24:57,636] WARN {io.netty.channel.DefaultChannelPipeline} - An exception was thrown by a user handler's exceptionCaught() method while handling the following exception: {io.netty.channel.DefaultChannelPipeline} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 02:21:20,721] WARN {io.netty.channel.nio.NioEventLoop} - Unexpected exception in the selector loop. {io.netty.channel.nio.NioEventLoop} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 02:17:57,726] ERROR {org.wso2.carbon.core.util.HouseKeepingTask} - Could not run HousekeepingTask {org.wso2.carbon.core.util.HouseKeepingTask} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 02:16:43,421] ERROR {org.wso2.carbon.core.multitenancy.MultitenantServerManager} - Error occurred while executing tenant cleanup {org.wso2.carbon.core.multitenancy.MultitenantServerManager} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 03:21:08,053] ERROR {org.wso2.carbon.core.multitenancy.MultitenantServerManager} - Error occurred while executing tenant cleanup {org.wso2.carbon.core.multitenancy.MultitenantServerManager} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 03:21:08,053] WARN {io.netty.channel.DefaultChannelPipeline} - An exception was thrown by a user handler's exceptionCaught() method while handling the following exception: {io.netty.channel.DefaultChannelPipeline} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 03:20:52,231] ERROR {org.wso2.carbon.core.util.HouseKeepingTask} - Could not run HousekeepingTask {org.wso2.carbon.core.util.HouseKeepingTask} java.lang.OutOfMemoryError: Java heap space
TID: [-1234] [] [2017-05-11 03:20:31,638] ERROR {org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask} - Error while running deployment scheduler.. {org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 03:20:31,638] WARN {io.netty.channel.DefaultChannelPipeline} - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception. {io.netty.channel.DefaultChannelPipeline} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 03:20:31,638] ERROR {org.dna.mqtt.moquette.server.netty.NettyAcceptor} - Severe error during pipeline creation {org.dna.mqtt.moquette.server.netty.NettyAcceptor} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 03:23:04,650] WARN {io.netty.channel.ChannelInitializer} - Failed to initialize a channel. Closing: [id: 0x5b61a65c, /192.168.5.220:46802 => /192.168.5.242:1886] {io.netty.channel.ChannelInitializer} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 03:17:48,015] ERROR {org.wso2.andes.kernel.slot.SlotDeliveryWorker} - Error while running Slot Delivery Worker. {org.wso2.andes.kernel.slot.SlotDeliveryWorker} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 03:13:58,616] ERROR {org.dna.mqtt.moquette.server.netty.metrics.MessageMetricsHandler} - Java heap space {org.dna.mqtt.moquette.server.netty.metrics.MessageMetricsHandler} java.lang.OutOfMemoryError: Java heap space
TID: [-1234] [] [2017-05-11 03:12:50,312] ERROR {org.wso2.carbon.caching.impl.CacheCleanupTask} - Error occurred while running CacheCleanupTask {org.wso2.carbon.caching.impl.CacheCleanupTask} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 02:33:34,531] ERROR {org.dna.mqtt.moquette.server.netty.metrics.MessageMetricsHandler} - Received a message with fixed header flags (b) != expected (0) {org.dna.mqtt.moquette.server.netty.metrics.MessageMetricsHandler} io.netty.handler.codec.CorruptedFrameException: Received a message with fixed header flags (b) != expected (0)
    at org.dna.mqtt.moquette.parser.netty.DemuxDecoder.genericDecodeCommonHeader(DemuxDecoder.java:62)
    at org.dna.mqtt.moquette.parser.netty.DemuxDecoder.decodeCommonHeader(DemuxDecoder.java:44)
    at org.dna.mqtt.moquette.parser.netty.MessageIDDecoder.decode(MessageIDDecoder.java:36)
    at org.dna.mqtt.moquette.parser.netty.MQTTDecoder.decode(MQTTDecoder.java:71)
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:249)
    at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:205)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:233)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:219)
    at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:233)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:219)
    at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:233)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:219)
    at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
    at io.netty.handler.timeout.IdleStateHandler.channelInactive(IdleStateHandler.java:247)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:233)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:219)
    at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:769)
    at io.netty.channel.AbstractChannel$AbstractUnsafe$5.run(AbstractChannel.java:567)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
    at java.lang.Thread.run(Thread.java:745)
TID: [] [] [2017-05-11 02:32:53,449] WARN {io.netty.channel.ChannelInitializer} - Failed to initialize a channel. Closing: [id: 0x4b946d20, /192.168.5.220:35524 => /192.168.5.242:1886] {io.netty.channel.ChannelInitializer} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 03:38:28,414] ERROR {org.dna.mqtt.moquette.server.netty.metrics.MessageMetricsHandler} - java.lang.OutOfMemoryError: Java heap space {org.dna.mqtt.moquette.server.netty.metrics.MessageMetricsHandler} io.netty.handler.codec.DecoderException: java.lang.OutOfMemoryError: Java heap space
Caused by: java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 03:33:16,221] ERROR {org.dna.mqtt.moquette.server.netty.NettyAcceptor} - Severe error during pipeline creation {org.dna.mqtt.moquette.server.netty.NettyAcceptor} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 04:30:09,863] WARN {io.netty.channel.ChannelInitializer} - Failed to initialize a channel. Closing: [id: 0xa6fd0aa1, /192.168.5.220:52088 => /192.168.5.242:1886] {io.netty.channel.ChannelInitializer} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 03:24:27,499] INFO {org.wso2.andes.kernel.AndesRecoveryTask} - Running DB sync task. {org.wso2.andes.kernel.AndesRecoveryTask}
TID: [] [] [2017-05-11 04:29:55,034] WARN {io.netty.channel.DefaultChannelPipeline} - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception. {io.netty.channel.DefaultChannelPipeline} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 04:29:55,033] ERROR {org.wso2.andes.kernel.disruptor.LogExceptionHandler} - [ Sequence: 6579707 ] Exception occurred while processing inbound events. Event type: SAFE_ZONE_DECLARE_EVENT {org.wso2.andes.kernel.disruptor.LogExceptionHandler} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 04:29:46,371] ERROR {org.wso2.andes.kernel.disruptor.LogExceptionHandler} - [ Sequence: 6579707 ] Exception occurred while processing inbound events. Event type: SAFE_ZONE_DECLARE_EVENT {org.wso2.andes.kernel.disruptor.LogExceptionHandler} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 04:27:04,975] WARN {io.netty.channel.nio.NioEventLoop} - Unexpected exception in the selector loop. {io.netty.channel.nio.NioEventLoop} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 04:12:14,222] ERROR {org.dna.mqtt.moquette.server.netty.NettyAcceptor} - Severe error during pipeline creation {org.dna.mqtt.moquette.server.netty.NettyAcceptor} java.lang.OutOfMemoryError: Java heap space
TID: [] [] [2017-05-11 04:11:56,952] WARN {io.netty.channel.DefaultChannelPipeline} - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception. {io.netty.channel.DefaultChannelPipeline} java.lang.OutOfMemoryError: Java heap space