pinpoint-apm / pinpoint

APM (Application Performance Management) tool for large-scale distributed systems.
https://pinpoint-apm.gitbook.io/
Apache License 2.0

pinpoint-collector-2.0.1 not running. This is very likely to create a memory leak #6649

Closed: nightguide closed this issue 4 years ago

nightguide commented 4 years ago

Versions:

pinpoint-collector-2.0.1
Apache Tomcat/9.0.27
HBase 1.2.9
openjdk version "1.8.0_242"

I'm having trouble running pinpoint-collector 2.0.1. Please see the logs below.

27-Mar-2020 16:33:10.050 INFO [main] org.apache.catalina.startup.HostConfig.deployWAR Deploying web application archive [/var/opt/tomcat/webapps/pinpoint-collector-2.0.1.war]
2020-03-27T16:33:15.028 PinpointCollectorApplicationInitializer::ActiveProfile:release
27-Mar-2020 16:33:15.103 INFO [main] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
log4j:WARN No appenders could be found for logger (org.springframework.web.context.ContextLoader).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
2020-03-27 16:33:21 [WARN ](tory$ClusterAcceptorMessageListener) do handShake {socketId=1, cluster={id=14322@atlas-nsk-pinpoint-01.node.atlas-nsk.cdek.tech, roles=[ROUTER]}, id=14322@atlas-nsk-pinpoint-01.node.atlas-nsk.cdek.tech}
27-Mar-2020 16:33:24.439 SEVERE [main] org.apache.catalina.core.StandardContext.startInternal One or more listeners failed to start. Full details will be found in the appropriate container log file
27-Mar-2020 16:33:24.440 SEVERE [main] org.apache.catalina.core.StandardContext.startInternal Context [/pinpoint-collector-2.0.1] startup failed due to previous errors
27-Mar-2020 16:33:32.293 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [pinpoint-collector-2.0.1] appears to have started a thread named [Pinpoint-SocketFactory-Timer(14-0)] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.lang.Thread.sleep(Native Method)
 org.jboss.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:445)
 org.jboss.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:364)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 java.lang.Thread.run(Thread.java:748)
27-Mar-2020 16:33:32.297 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [pinpoint-collector-2.0.1] appears to have started a thread named [Pinpoint-Client-Boss(15-0)] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 java.lang.Thread.run(Thread.java:748)
27-Mar-2020 16:33:32.300 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [pinpoint-collector-2.0.1] appears to have started a thread named [Pinpoint-Client-Worker(16-0)] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 java.lang.Thread.run(Thread.java:748)
27-Mar-2020 16:33:32.303 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [pinpoint-collector-2.0.1] appears to have started a thread named [HTableFlushWorker-0] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 java.lang.Thread.run(Thread.java:748)
27-Mar-2020 16:33:32.305 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [pinpoint-collector-2.0.1] appears to have started a thread named [htable-pool16-t1] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
 java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 java.lang.Thread.run(Thread.java:748)
27-Mar-2020 16:33:32.307 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [pinpoint-collector-2.0.1] appears to have started a thread named [HTableFlushWorker-1] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 java.lang.Thread.run(Thread.java:748)
27-Mar-2020 16:33:32.309 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [pinpoint-collector-2.0.1] appears to have started a thread named [HTableFlushWorker-2] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 java.lang.Thread.run(Thread.java:748)
27-Mar-2020 16:33:32.312 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [pinpoint-collector-2.0.1] appears to have started a thread named [HTableFlushWorker-3] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 java.lang.Thread.run(Thread.java:748)
27-Mar-2020 16:33:32.314 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [pinpoint-collector-2.0.1] appears to have started a thread named [HTableFlushWorker-4] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 java.lang.Thread.run(Thread.java:748)
27-Mar-2020 16:33:32.316 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [pinpoint-collector-2.0.1] appears to have started a thread named [HTableFlushWorker-7] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 java.lang.Thread.run(Thread.java:748)
27-Mar-2020 16:33:32.318 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [pinpoint-collector-2.0.1] appears to have started a thread named [HTableFlushWorker-5] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 java.lang.Thread.run(Thread.java:748)
27-Mar-2020 16:33:32.320 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [pinpoint-collector-2.0.1] appears to have started a thread named [HTableFlushWorker-6] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 java.lang.Thread.run(Thread.java:748)
27-Mar-2020 16:33:32.322 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [pinpoint-collector-2.0.1] appears to have started a thread named [HTableFlushWorker-8] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 java.lang.Thread.run(Thread.java:748)
27-Mar-2020 16:33:32.325 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [pinpoint-collector-2.0.1] appears to have started a thread named [HTableFlushWorker-9] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 java.lang.Thread.run(Thread.java:748)
27-Mar-2020 16:33:32.327 SEVERE [main] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [pinpoint-collector-2.0.1] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@6160f7e1]) and a value of type [io.netty.util.internal.InternalThreadLocalMap] (value [io.netty.util.internal.InternalThreadLocalMap@3bd5a6ab]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
27-Mar-2020 16:33:32.348 INFO [main] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive [/var/opt/tomcat/webapps/pinpoint-collector-2.0.1.war] has finished in [22,297] ms
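
A note on reading this log: the memory-leak warnings are a symptom, not the cause. The two SEVERE lines show that a context listener (the Spring context bootstrapped via org.springframework.web.context.ContextLoader) failed to start, so Tomcat tears down the half-started webapp and flags every thread the collector had already spawned (Netty timers, client boss/worker threads, HBase flush and table pools) as potentially leaked. The underlying exception never appears because log4j has no appender configured (the log4j:WARN lines), so configuring log4j for the collector should surface the real error. For illustration only, a minimal sketch of the kind of detection Tomcat performs in WebappClassLoaderBase.clearReferencesThreads; the class name and the stand-in class loader are hypothetical:

```java
// Simplified model of Tomcat's leaked-thread check: walk all live
// threads and flag those whose context class loader belongs to the
// stopped web application. Not Tomcat's actual code.
public class LeakedThreadScan {
    public static void main(String[] args) {
        // Stand-in for the webapp's class loader; in Tomcat this would be
        // the WebappClassLoader of the stopped context.
        ClassLoader webappLoader = LeakedThreadScan.class.getClassLoader();
        Thread[] threads = new Thread[Thread.activeCount() * 2];
        int count = Thread.enumerate(threads);
        for (int i = 0; i < count; i++) {
            Thread t = threads[i];
            if (t != null && t.isAlive() && t.getContextClassLoader() == webappLoader) {
                System.out.println("Potentially leaked thread: " + t.getName());
            }
        }
    }
}
```

Every thread named in the warnings above (Pinpoint-SocketFactory-Timer, Pinpoint-Client-Boss, HTableFlushWorker-*) would be caught by exactly this kind of check.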

My pinpoint-collector.properties:

pinpoint.zookeeper.address=localhost

# base data receiver config  ---------------------------------------------------------------------
collector.receiver.base.ip=0.0.0.0
collector.receiver.base.port=9994

# number of tcp worker threads
collector.receiver.base.worker.threadSize=8
# capacity of tcp worker queue
collector.receiver.base.worker.queueSize=1024
# monitoring for tcp worker
collector.receiver.base.worker.monitor=true

collector.receiver.base.request.timeout=3000
collector.receiver.base.closewait.timeout=3000
# 5 min
collector.receiver.base.ping.interval=300000
# 30 min
collector.receiver.base.pingwait.timeout=1800000

# stat receiver config  ---------------------------------------------------------------------
collector.receiver.stat.udp=true
collector.receiver.stat.udp.ip=0.0.0.0
collector.receiver.stat.udp.port=9995
collector.receiver.stat.udp.receiveBufferSize=4194304
## requires Linux kernel 3.9+ and Java 9+
collector.receiver.stat.udp.reuseport=false
## If not set, defaults to the CPU count.
#collector.receiver.stat.udp.socket.count=1

# Keep in mind that TCP transport load balancing is per connection (UDP transport load balancing is per packet).
collector.receiver.stat.tcp=false
collector.receiver.stat.tcp.ip=0.0.0.0
collector.receiver.stat.tcp.port=9995

collector.receiver.stat.tcp.request.timeout=3000
collector.receiver.stat.tcp.closewait.timeout=3000
# 5 min
collector.receiver.stat.tcp.ping.interval=300000
# 30 min
collector.receiver.stat.tcp.pingwait.timeout=1800000

# number of udp stat worker threads
collector.receiver.stat.worker.threadSize=8
# capacity of udp stat worker queue
collector.receiver.stat.worker.queueSize=64
# monitoring for udp stat worker
collector.receiver.stat.worker.monitor=true

# span receiver config  ---------------------------------------------------------------------
collector.receiver.span.udp=true
collector.receiver.span.udp.ip=0.0.0.0
collector.receiver.span.udp.port=9996
collector.receiver.span.udp.receiveBufferSize=4194304
## requires Linux kernel 3.9+ and Java 9+
collector.receiver.span.udp.reuseport=false
## If not set, defaults to the CPU count.
#collector.receiver.span.udp.socket.count=1

# Keep in mind that TCP transport load balancing is per connection (UDP transport load balancing is per packet).
collector.receiver.span.tcp=false
collector.receiver.span.tcp.ip=0.0.0.0
collector.receiver.span.tcp.port=9996

collector.receiver.span.tcp.request.timeout=3000
collector.receiver.span.tcp.closewait.timeout=3000
# 5 min
collector.receiver.span.tcp.ping.interval=300000
# 30 min
collector.receiver.span.tcp.pingwait.timeout=1800000

# number of udp span worker threads
collector.receiver.span.worker.threadSize=32
# capacity of udp span worker queue
collector.receiver.span.worker.queueSize=256
# monitoring for udp span worker
collector.receiver.span.worker.monitor=true

# configure l4 ip address to ignore health check logs
collector.l4.ip=

# change OS level read/write socket buffer size (for linux)
#sudo sysctl -w net.core.rmem_max=
#sudo sysctl -w net.core.wmem_max=
# check current values using:
#$ /sbin/sysctl -a | grep -e rmem -e wmem

# number of agent event worker threads
collector.agentEventWorker.threadSize=4
# capacity of agent event worker queue
collector.agentEventWorker.queueSize=1024

statistics.flushPeriod=1000

# -------------------------------------------------------------------------------------------------
# The cluster related options are used to establish connections between the agent, collector, and web in order to send/receive data between them in real time.
# You may enable additional features using this option (Ex : RealTime Active Thread Chart).
# -------------------------------------------------------------------------------------------------
# Usage : Set the following options for collector/web components that reside in the same cluster in order to enable this feature.
# 1. cluster.enable (pinpoint-web.properties, pinpoint-collector.properties) - "true" to enable
# 2. cluster.zookeeper.address (pinpoint-web.properties, pinpoint-collector.properties) - address of the ZooKeeper instance that will be used to manage the cluster
# 3. cluster.web.tcp.port (pinpoint-web.properties) - any available port number (used to establish connection between web and collector)
# -------------------------------------------------------------------------------------------------
# Please be aware of the following:
#1. If the network between the web, collector, and agents is not stable, it is advisable not to use this feature.
#2. We recommend using the cluster.web.tcp.port option. However, in cases where the collector is unable to establish a connection to the web, you may reverse this and make the web establish a connection to the collector.
#   In this case, you must set cluster.connect.address (pinpoint-web.properties); and cluster.listen.ip, cluster.listen.port (pinpoint-collector.properties) accordingly.
cluster.enable=true
cluster.zookeeper.address=${pinpoint.zookeeper.address}
cluster.zookeeper.sessiontimeout=30000
cluster.listen.ip=
cluster.listen.port=-1

#collector.admin.password=
#collector.admin.api.rest.active=
#collector.admin.api.jmx.active=

collector.spanEvent.sequence.limit=10000

# Flink configuration
flink.cluster.enable=false
flink.cluster.zookeeper.address=${pinpoint.zookeeper.address}
flink.cluster.zookeeper.sessiontimeout=3000
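
Since cluster.enable=true and both ZooKeeper addresses resolve to ${pinpoint.zookeeper.address} (localhost here), it may be worth confirming that a ZooKeeper instance is actually reachable there before tuning anything else. A minimal probe, assuming the default client port 2181 (the port is not part of the config above) and the ZooKeeper client jar on the classpath; the class name ZkProbe is hypothetical:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.apache.zookeeper.Watcher.Event.KeeperState;
import org.apache.zookeeper.ZooKeeper;

// Standalone probe: wait up to 10 seconds for a SyncConnected event
// from the ZooKeeper ensemble the collector is configured against.
public class ZkProbe {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // "localhost:2181" mirrors pinpoint.zookeeper.address plus the
        // assumed default ZooKeeper client port.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> {
            if (event.getState() == KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        try {
            boolean ok = connected.await(10, TimeUnit.SECONDS);
            System.out.println(ok ? "ZooKeeper is reachable" : "no connection within 10s");
        } finally {
            zk.close();
        }
    }
}
```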

I tried increasing the worker thread counts, but it doesn't help.
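
The HTableFlushWorker and htable-pool threads in the log show the collector got far enough to create its HBase client, but a missing or half-initialized HBase schema is still a common reason for "One or more listeners failed to start". A minimal connectivity check against HBase 1.2.x, assuming the HBase 1.x client jars on the classpath; the class name HBaseProbe is hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Standalone check that the HBase cluster the collector writes to is
// reachable and has tables created.
public class HBaseProbe {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Mirrors pinpoint.zookeeper.address from the collector config.
        conf.set("hbase.zookeeper.quorum", "localhost");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            // An empty list usually means the Pinpoint schema script
            // (hbase-create.hbase) was never run against this cluster.
            System.out.println("HBase reachable; table count: "
                    + admin.listTableNames().length);
        }
    }
}
```

If the probe hangs or the table count is zero, the collector's Spring context would be expected to fail at startup with exactly the SEVERE lines shown above.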

stale[bot] commented 4 years ago

This issue/proposal has been automatically marked as stale because it hasn't had any recent activity. It will automatically be closed if no further activity occurs for 20 days. If you think this should still be open, or the problem still persists, just pop a reply in the comments and one of the maintainers will (try!) to follow up. Thank you for your interest and contribution to the Pinpoint Community.