SonarSource / docker-sonarqube

:whale: SonarQube in Docker
https://hub.docker.com/_/sonarqube/
GNU Lesser General Public License v3.0

Can not work on k8s #58

Closed BruceZu closed 7 years ago

BruceZu commented 7 years ago

[root@k8s-09 ~]# kubectl logs sonarqube-1-2134697319-d4sha

2016.12.05 22:02:21 INFO  app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2016.12.05 22:02:21 INFO  app[][o.s.p.m.JavaProcessLauncher] Launch process[es]: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Djava.awt.headless=true -Xmx1G -Xms256m -Xss256k -Djna.nosys=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/opt/sonarqube/temp -javaagent:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management-agent.jar -cp ./lib/common/*:./lib/search/* org.sonar.search.SearchServer /opt/sonarqube/temp/sq-process3775526920578968941properties
2016.12.05 22:02:22 INFO   es[][o.s.p.ProcessEntryPoint]  Starting es
2016.12.05 22:02:22 INFO   es[][o.s.s.EsSettings]  Elasticsearch listening on /127.0.0.1:9001
2016.12.05 22:02:22 INFO   es[][o.elasticsearch.node]  [sonarqube] version[2.3.3], pid[66], build[218bdf1/2016-05-17T15:40:04Z]
2016.12.05 22:02:22 INFO   es[][o.elasticsearch.node]  [sonarqube] initializing ...
2016.12.05 22:02:22 INFO   es[][o.e.plugins]  [sonarqube] modules [], plugins [], sites []
2016.12.05 22:02:22 INFO   es[][o.elasticsearch.env]  [sonarqube] using [1] data paths, mounts [[/opt/sonarqube/data (/dev/mapper/centos-root)]], net usable_space [40.5gb], net total_space [49.9gb], spins? [possibly], types [xfs]
2016.12.05 22:02:22 INFO   es[][o.elasticsearch.env]  [sonarqube] heap size [989.8mb], compressed ordinary object pointers [true]
2016.12.05 22:02:23 INFO   es[][o.elasticsearch.node]  [sonarqube] initialized
2016.12.05 22:02:23 INFO   es[][o.elasticsearch.node]  [sonarqube] starting ...
2016.12.05 22:02:24 INFO   es[][o.e.transport]  [sonarqube] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2016.12.05 22:02:24 INFO   es[][o.e.discovery]  [sonarqube] sonarqube/Sy25XaejRS6er-jz8RhESw
2016.12.05 22:02:27 INFO   es[][o.e.cluster.service]  [sonarqube] new_master {sonarqube}{Sy25XaejRS6er-jz8RhESw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube, master=true}, reason: zen-disco-join(elected_as_master, [0] joins received)
2016.12.05 22:02:27 INFO   es[][o.elasticsearch.node]  [sonarqube] started
2016.12.05 22:02:27 INFO   es[][o.e.gateway]  [sonarqube] recovered [0] indices into cluster_state
2016.12.05 22:02:27 INFO  app[][o.s.p.m.Monitor] Process[es] is up
2016.12.05 22:02:27 INFO  app[][o.s.p.m.JavaProcessLauncher] Launch process[web]: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.management.enabled=false -Djruby.compile.invokedynamic=false -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/./urandom -Djava.io.tmpdir=/opt/sonarqube/temp -javaagent:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management-agent.jar -cp ./lib/common/*:./lib/server/*:/opt/sonarqube/lib/jdbc/h2/h2-1.3.176.jar org.sonar.server.app.WebServer /opt/sonarqube/temp/sq-process348451910190347240properties
2016.12.05 22:02:28 INFO  web[][o.s.p.ProcessEntryPoint] Starting web
2016.12.05 22:02:28 INFO  web[][o.s.s.a.TomcatContexts] Webapp directory: /opt/sonarqube/web
2016.12.05 22:02:28 INFO  web[][o.a.c.h.Http11NioProtocol] Initializing ProtocolHandler ["http-nio-0.0.0.0-9000"]
2016.12.05 22:02:28 INFO  web[][o.a.t.u.n.NioSelectorPool] Using a shared selector for servlet write/read
2016.12.05 22:02:29 INFO  web[][o.e.plugins] [Masked Rose] modules [], plugins [], sites []
2016.12.05 22:02:29 INFO  web[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2016.12.05 22:02:29 INFO  web[][o.s.s.p.LogServerVersion] SonarQube Server / 6.1 / dc148a71a1c184ccad588b66251980c994879dff
2016.12.05 22:02:29 INFO  web[][o.s.s.p.d.EmbeddedDatabase] Starting embedded database on port 9092 with url jdbc:h2:tcp://localhost:9092/sonar
2016.12.05 22:02:30 ERROR web[][o.a.c.c.C.[.[.[/]] Exception sending context initialized event to listener instance of class org.sonar.server.platform.web.PlatformServletContextListener
org.sonar.api.utils.SonarException: Unable to start database
        at org.sonar.server.platform.db.EmbeddedDatabase.startServer(EmbeddedDatabase.java:82) ~[sonar-server-6.1.jar:na]
        at org.sonar.server.platform.db.EmbeddedDatabase.start(EmbeddedDatabase.java:61) ~[sonar-server-6.1.jar:na]
        at org.sonar.server.platform.db.EmbeddedDatabaseFactory.start(EmbeddedDatabaseFactory.java:44) ~[sonar-server-6.1.jar:na]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_111]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_111]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_111]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_111]
        at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.invokeMethod(ReflectionLifecycleStrategy.java:110) ~[picocontainer-2.15.jar:na]
        at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.start(ReflectionLifecycleStrategy.java:89) ~[picocontainer-2.15.jar:na]
        at org.sonar.core.platform.ComponentContainer$1.start(ComponentContainer.java:320) ~[sonar-core-6.1.jar:na]
        at org.picocontainer.injectors.AbstractInjectionFactory$LifecycleAdapter.start(AbstractInjectionFactory.java:84) ~[picocontainer-2.15.jar:na]
        at org.picocontainer.behaviors.AbstractBehavior.start(AbstractBehavior.java:169) ~[picocontainer-2.15.jar:na]
        at org.picocontainer.behaviors.Stored$RealComponentLifecycle.start(Stored.java:132) ~[picocontainer-2.15.jar:na]
        at org.picocontainer.behaviors.Stored.start(Stored.java:110) ~[picocontainer-2.15.jar:na]
        at org.picocontainer.DefaultPicoContainer.potentiallyStartAdapter(DefaultPicoContainer.java:1016) ~[picocontainer-2.15.jar:na]
        at org.picocontainer.DefaultPicoContainer.startAdapters(DefaultPicoContainer.java:1009) ~[picocontainer-2.15.jar:na]
        at org.picocontainer.DefaultPicoContainer.start(DefaultPicoContainer.java:767) ~[picocontainer-2.15.jar:na]
        at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:141) ~[sonar-core-6.1.jar:na]
        at org.sonar.server.platform.platformlevel.PlatformLevel.start(PlatformLevel.java:88) ~[sonar-server-6.1.jar:na]
        at org.sonar.server.platform.Platform.start(Platform.java:216) ~[sonar-server-6.1.jar:na]
        at org.sonar.server.platform.Platform.startLevel1Container(Platform.java:175) ~[sonar-server-6.1.jar:na]
        at org.sonar.server.platform.Platform.init(Platform.java:90) ~[sonar-server-6.1.jar:na]
        at org.sonar.server.platform.web.PlatformServletContextListener.contextInitialized(PlatformServletContextListener.java:44) ~[sonar-server-6.1.jar:na]
        at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4812) [tomcat-embed-core-8.0.32.jar:8.0.32]
        at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5255) [tomcat-embed-core-8.0.32.jar:8.0.32]
        at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147) [tomcat-embed-core-8.0.32.jar:8.0.32]
        at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1408) [tomcat-embed-core-8.0.32.jar:8.0.32]
        at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1398) [tomcat-embed-core-8.0.32.jar:8.0.32]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_111]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_111]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_111]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: org.h2.jdbc.JdbcSQLException: IO Exception: "java.net.UnknownHostException: sonarqube-1-2134697319-d4sha: sonarqube-1-2134697319-d4sha: Name or service not known" [90028-176]
        at org.h2.message.DbException.getJdbcSQLException(DbException.java:344) ~[h2-1.3.176.jar:1.3.176]
        at org.h2.message.DbException.get(DbException.java:167) ~[h2-1.3.176.jar:1.3.176]
        at org.h2.message.DbException.convert(DbException.java:286) ~[h2-1.3.176.jar:1.3.176]
        at org.h2.util.NetUtils.getLocalAddress(NetUtils.java:269) ~[h2-1.3.176.jar:1.3.176]
        at org.h2.server.TcpServer.getURL(TcpServer.java:203) ~[h2-1.3.176.jar:1.3.176]
        at org.h2.tools.Server.start(Server.java:477) ~[h2-1.3.176.jar:1.3.176]
        at org.sonar.server.platform.db.EmbeddedDatabase.startServer(EmbeddedDatabase.java:78) ~[sonar-server-6.1.jar:na]
        ... 31 common frames omitted
Caused by: java.net.UnknownHostException: sonarqube-1-2134697319-d4sha: sonarqube-1-2134697319-d4sha: Name or service not known
        at java.net.InetAddress.getLocalHost(InetAddress.java:1505) ~[na:1.8.0_111]
        at org.h2.util.NetUtils.getLocalAddress(NetUtils.java:267) ~[h2-1.3.176.jar:1.3.176]
        ... 34 common frames omitted
Caused by: java.net.UnknownHostException: sonarqube-1-2134697319-d4sha: Name or service not known
        at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) ~[na:1.8.0_111]
        at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928) ~[na:1.8.0_111]
        at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323) ~[na:1.8.0_111]
        at java.net.InetAddress.getLocalHost(InetAddress.java:1500) ~[na:1.8.0_111]
        ... 35 common frames omitted
        .....

I tested all available SonarQube Docker image versions with H2. All of them run into this situation and the pod ends up in CrashLoopBackOff.

[root@k8s-09 ~]# kubectl get pods

NAME                           READY     STATUS             RESTARTS   AGE
busybox-3604520811-5ma4w       0/1       CrashLoopBackOff   1154       4d
curl-2421989462-9xhbw          1/1       Running            0          4d
gitlab-1983252133-tlaum        1/1       Running            1          14d
jenkins-2377375696-8qevu       1/1       Running            0          7d
jenkins-two-2811552478-2n462   1/1       Running            1          7d
nexus-1437449777-a11o6         1/1       Running            0          11d
nexus-two-3664299243-fe5cu     1/1       Running            0          6d
sonarqube-1-2134697319-d4sha   0/1       CrashLoopBackOff   1192       4d

[root@k8s-09 ~]#

I found that kube-dns is unstable in my k8s cluster; the root cause is still being investigated, see https://github.com/kubernetes/kubernetes/issues/37833. On minikube the same image runs without any problem, and I have listed the container environment there: https://gist.github.com/BruceZu/fbf4b60a2abd615e051f2730118097ef
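For reference, a throwaway pod along the following lines can confirm whether in-cluster DNS resolves service names and whether the pod can resolve its own hostname the way SonarQube's web process does. This is only a sketch; the pod name, image and command are illustrative and not part of my deployment:

# Sketch of a one-shot debug pod: check service-name resolution via
# kube-dns, then resolve the pod's own hostname through the normal
# resolver (roughly what InetAddress.getLocalHost() does).
apiVersion: v1
kind: Pod
metadata:
  name: dns-check
spec:
  restartPolicy: Never
  containers:
  - name: dns-check
    image: busybox
    command:
    - sh
    - -c
    - nslookup kubernetes.default; ping -c 1 $(hostname)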

[root@k8s-09 ~]# kubectl get pod

NAME                           READY     STATUS             RESTARTS   AGE
busybox-3604520811-5ma4w       0/1       CrashLoopBackOff   1417       5d
curl-2421989462-9xhbw          1/1       Running            0          5d
gitlab-1983252133-tlaum        1/1       Running            1          15d
jenkins-2377375696-8qevu       1/1       Running            0          8d
jenkins-two-2811552478-2n462   1/1       Running            1          8d
nexus-1437449777-a11o6         1/1       Running            0          12d
nexus-two-3664299243-fe5cu     1/1       Running            0          7d
sonarqube-1-3688817312-73wy3   0/1       CrashLoopBackOff   151        19h

[root@k8s-09 ~]#

[root@k8s-09 ~]# kubectl get deployment sonarqube-1 -o yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "3"
  creationTimestamp: 2016-11-30T23:42:07Z
  generation: 6
  labels:
    run: sonarqube-1
  name: sonarqube-1
  namespace: default
  resourceVersion: "2948488"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/sonarqube-1
  uid: 9ae0a63a-b756-11e6-9595-ecf4bbc78ce4
spec:
  replicas: 1
  selector:
    matchLabels:
      run: sonarqube-1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: sonarqube-1
    spec:
      containers:
      - image: sonarqube:latest
        imagePullPolicy: Always
        name: sonarqube
        ports:
        - containerPort: 9000
          protocol: TCP
        - containerPort: 9092
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  observedGeneration: 6
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1

[root@k8s-09 ~]# kubectl describe pod sonarqube-1-2134697319-d4sha

Name:           sonarqube-1-2134697319-d4sha
Namespace:      default
Node:           k8s-04.huaweilab.com/10.145.101.41
Start Time:     Wed, 30 Nov 2016 18:28:33 -0800
Labels:         pod-template-hash=2134697319
                run=sonarqube-1
Status:         Running
IP:             10.32.0.4
Controllers:    ReplicaSet/sonarqube-1-2134697319
Containers:
  sonarqube:
    Container ID:       docker://1beba77ea09f146d0bca557d092f1e4f9dbb114311795d2b95d77a9d55937b5c
    Image:              sonarqube:latest
    Image ID:           docker://sha256:7333743a8ff3a257d351307a74123f4e7c46933dae4af4aacc5020fc1a6e328f
    Ports:              9000/TCP, 9092/TCP
    State:              Waiting
      Reason:           CrashLoopBackOff
    Last State:         Terminated
      Reason:           Completed
      Exit Code:        0
      Started:          Thu, 01 Dec 2016 10:57:30 -0800
      Finished:         Thu, 01 Dec 2016 10:57:40 -0800
    Ready:              False
    Restart Count:      180
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ti679 (ro)
    Environment Variables:      <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
Volumes:
  default-token-ti679:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-ti679
QoS Class:      BestEffort
Tolerations:    <none>
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath                   Type            Reason          Message
  ---------     --------        -----   ----                            -------------                   --------        ------          -------
  15h           <invalid>       181     {kubelet k8s-04.huaweilab.com}  spec.containers{sonarqube}      Normal          Pulling         pulling image "sonarqube:latest"
  15h           <invalid>       181     {kubelet k8s-04.huaweilab.com}  spec.containers{sonarqube}      Normal          Pulled          Successfully pulled image "sonarqube:latest"
  15h           <invalid>       172     {kubelet k8s-04.huaweilab.com}  spec.containers{sonarqube}      Normal          Created         (events with common reason combined)
  15h           <invalid>       172     {kubelet k8s-04.huaweilab.com}  spec.containers{sonarqube}      Normal          Started         (events with common reason combined)
  15h           <invalid>       4055    {kubelet k8s-04.huaweilab.com}  spec.containers{sonarqube}      Warning         BackOff         Back-off restarting failed docker container
  15h           <invalid>       4029    {kubelet k8s-04.huaweilab.com}                                  Warning         FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "sonarqube" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=sonarqube pod=sonarqube-1-2134697319-d4sha_default(47fcae7f-b765-11e6-9595-ecf4bbc78ce4)"

But I can deploy GitLab, Jenkins and Nexus successfully. So, is there a workaround that avoids hitting the kube-dns issue, the way those other images do? Thank you!
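For what it is worth, the embedded H2 database is only meant for evaluation, so one way to avoid its hostname lookup entirely is to point the image at an external database through the SONARQUBE_JDBC_* environment variables it supports. A minimal sketch, assuming a PostgreSQL service named postgres with a sonar database and matching credentials (all of these names are placeholders):

# Sketch only: container env pointing SonarQube at an external
# PostgreSQL instead of the embedded H2 database, so the H2 TCP
# server (and its local hostname lookup) is never started.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sonarqube-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: sonarqube-1
    spec:
      containers:
      - name: sonarqube
        image: sonarqube:latest
        ports:
        - containerPort: 9000
        env:
        - name: SONARQUBE_JDBC_URL
          value: jdbc:postgresql://postgres:5432/sonar
        - name: SONARQUBE_JDBC_USERNAME
          value: sonar
        - name: SONARQUBE_JDBC_PASSWORD
          value: sonar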

BruceZu commented 7 years ago

This is the point where the crash happens:

2016.12.05 22:38:15 INFO web[][o.s.s.p.d.EmbeddedDatabase] Starting embedded database on port 9092 with url jdbc:h2:tcp://localhost:9092/sonar
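At that point H2's TCP server asks for the local host address via InetAddress.getLocalHost() (see the NetUtils.getLocalAddress frames in the stack trace above), which resolves the pod's own hostname through the container's resolver; when that lookup fails, the database cannot start. A crude mitigation that is sometimes used, purely as a sketch and not verified against this image, is to map the pod's hostname to 127.0.0.1 in /etc/hosts from a postStart hook so the lookup never has to reach kube-dns; whether the appended entry survives depends on how kubelet manages /etc/hosts in your Kubernetes version:

# Sketch only: append the pod's hostname to /etc/hosts right after
# the container starts, so local hostname resolution works even when
# kube-dns is flaky.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sonarqube-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: sonarqube-1
    spec:
      containers:
      - name: sonarqube
        image: sonarqube:latest
        ports:
        - containerPort: 9000
        - containerPort: 9092
        lifecycle:
          postStart:
            exec:
              command:
              - sh
              - -c
              - echo "127.0.0.1 $(hostname)" >> /etc/hosts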

Godin commented 7 years ago

@BruceZu could you please provide the exact steps to reproduce your issue?

BruceZu commented 7 years ago

Hi Godin, thank you so much for shedding some light here :) I use a YAML file to create the deployment. Now I redo it: delete the old deployment (which also kills the pod), then start again with:

[root@k8s-09 ~]# kubectl create -f sonaqube.yaml
deployment "sonarqube-1" created
[root@k8s-09 ~]# ls sonaqube.yaml
sonaqube.yaml
[root@k8s-09 ~]# kubectl get pod

NAME                           READY     STATUS             RESTARTS   AGE
busybox-3604520811-5ma4w       0/1       CrashLoopBackOff   1430       5d
curl-2421989462-9xhbw          1/1       Running            0          5d
gitlab-1983252133-tlaum        1/1       Running            1          15d
jenkins-2377375696-8qevu       1/1       Running            0          8d
jenkins-two-2811552478-2n462   1/1       Running            1          8d
nexus-1437449777-a11o6         1/1       Running            0          12d
nexus-two-3664299243-fe5cu     1/1       Running            0          7d
sonarqube-1-3688817312-lc59x   1/1       Running            0          11s

[root@k8s-09 ~]# kubectl attach sonarqube-1-3688817312-lc59x

If you don't see a command prompt, try pressing enter.

2016.12.06 20:45:35 INFO   es[][o.elasticsearch.node]  [sonarqube] version[2.3.3], pid[64], build[218bdf1/2016-05-17T15:40:04Z]
2016.12.06 20:45:35 INFO   es[][o.elasticsearch.node]  [sonarqube] initializing ...
2016.12.06 20:45:35 INFO   es[][o.e.plugins]  [sonarqube] modules [], plugins [], sites []
2016.12.06 20:45:35 INFO   es[][o.elasticsearch.env]  [sonarqube] using [1] data paths, mounts [[/opt/sonarqube/data (/dev/mapper/centos-root)]], net usable_space [29.8gb], net total_space [49.9gb], spins? [possibly], types [xfs]
2016.12.06 20:45:35 INFO   es[][o.elasticsearch.env]  [sonarqube] heap size [989.8mb], compressed ordinary object pointers [true]
2016.12.06 20:45:36 INFO   es[][o.elasticsearch.node]  [sonarqube] initialized
2016.12.06 20:45:36 INFO   es[][o.elasticsearch.node]  [sonarqube] starting ...
2016.12.06 20:45:36 INFO   es[][o.e.transport]  [sonarqube] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2016.12.06 20:45:36 INFO   es[][o.e.discovery]  [sonarqube] sonarqube/uZgYcTq0SGq6RaMb9f62gw
2016.12.06 20:45:39 INFO   es[][o.e.cluster.service]  [sonarqube] new_master {sonarqube}{uZgYcTq0SGq6RaMb9f62gw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube, master=true}, reason: zen-disco-join(elected_as_master, [0] joins received)
2016.12.06 20:45:39 INFO   es[][o.elasticsearch.node]  [sonarqube] started
2016.12.06 20:45:39 INFO   es[][o.e.gateway]  [sonarqube] recovered [0] indices into cluster_state
2016.12.06 20:45:40 INFO  app[][o.s.p.m.Monitor] Process[es] is up
2016.12.06 20:45:40 INFO  app[][o.s.p.m.JavaProcessLauncher] Launch process[web]: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.management.enabled=false -Djruby.compile.invokedynamic=false -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/./urandom -Djava.io.tmpdir=/opt/sonarqube/temp -javaagent:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management-agent.jar -cp ./lib/common/*:./lib/server/*:/opt/sonarqube/lib/jdbc/h2/h2-1.3.176.jar org.sonar.server.app.WebServer /opt/sonarqube/temp/sq-process488989491811009689properties
2016.12.06 20:46:21 INFO  web[][o.s.p.ProcessEntryPoint] Starting web
2016.12.06 20:46:21 INFO  web[][o.s.s.a.TomcatContexts] Webapp directory: /opt/sonarqube/web
2016.12.06 20:46:21 INFO  web[][o.a.c.h.Http11NioProtocol] Initializing ProtocolHandler ["http-nio-0.0.0.0-9000"]
2016.12.06 20:46:21 INFO  web[][o.a.t.u.n.NioSelectorPool] Using a shared selector for servlet write/read
2016.12.06 20:46:22 INFO  web[][o.e.plugins] [Locust] modules [], plugins [], sites []
2016.12.06 20:46:42 INFO  web[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2016.12.06 20:46:42 INFO  web[][o.s.s.p.LogServerVersion] SonarQube Server / 6.1 / dc148a71a1c184ccad588b66251980c994879dff
2016.12.06 20:46:43 INFO  web[][o.s.s.p.d.EmbeddedDatabase] Starting embedded database on port 9092 with url jdbc:h2:tcp://localhost:9092/sonar
2016.12.06 20:47:03 ERROR web[][o.a.c.c.C.[.[.[/]] Exception sending context initialized event to listener instance of class org.sonar.server.platform.web.PlatformServletContextListener
org.sonar.api.utils.SonarException: Unable to start database
        at org.sonar.server.platform.db.EmbeddedDatabase.startServer(EmbeddedDatabase.java:82) ~[sonar-server-6.1.jar:na]
        at org.sonar.server.platform.db.EmbeddedDatabase.start(EmbeddedDatabase.java:61) ~[sonar-server-6.1.jar:na]
        at org.sonar.server.platform.db.EmbeddedDatabaseFactory.start(EmbeddedDatabaseFactory.java:44) ~[sonar-server-6.1.jar:na]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_111]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_111]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_111]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_111]
        at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.invokeMethod(ReflectionLifecycleStrategy.java:110) ~[picocontainer-2.15.jar:na]
        at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.start(ReflectionLifecycleStrategy.java:89) ~[picocontainer-2.15.jar:na]
        at org.sonar.core.platform.ComponentContainer$1.start(ComponentContainer.java:320) ~[sonar-core-6.1.jar:na]
        at org.picocontainer.injectors.AbstractInjectionFactory$LifecycleAdapter.start(AbstractInjectionFactory.java:84) ~[picocontainer-2.15.jar:na]
        at org.picocontainer.behaviors.AbstractBehavior.start(AbstractBehavior.java:169) ~[picocontainer-2.15.jar:na]
        at org.picocontainer.behaviors.Stored$RealComponentLifecycle.start(Stored.java:132) ~[picocontainer-2.15.jar:na]
        at org.picocontainer.behaviors.Stored.start(Stored.java:110) ~[picocontainer-2.15.jar:na]
        at org.picocontainer.DefaultPicoContainer.potentiallyStartAdapter(DefaultPicoContainer.java:1016) ~[picocontainer-2.15.jar:na]
        at org.picocontainer.DefaultPicoContainer.startAdapters(DefaultPicoContainer.java:1009) ~[picocontainer-2.15.jar:na]
        at org.picocontainer.DefaultPicoContainer.start(DefaultPicoContainer.java:767) ~[picocontainer-2.15.jar:na]
        at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:141) ~[sonar-core-6.1.jar:na]
        at org.sonar.server.platform.platformlevel.PlatformLevel.start(PlatformLevel.java:88) ~[sonar-server-6.1.jar:na]
        at org.sonar.server.platform.Platform.start(Platform.java:216) ~[sonar-server-6.1.jar:na]
        at org.sonar.server.platform.Platform.startLevel1Container(Platform.java:175) ~[sonar-server-6.1.jar:na]
        at org.sonar.server.platform.Platform.init(Platform.java:90) ~[sonar-server-6.1.jar:na]
        at org.sonar.server.platform.web.PlatformServletContextListener.contextInitialized(PlatformServletContextListener.java:44) ~[sonar-server-6.1.jar:na]
        at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4812) [tomcat-embed-core-8.0.32.jar:8.0.32]
        at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5255) [tomcat-embed-core-8.0.32.jar:8.0.32]
        at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147) [tomcat-embed-core-8.0.32.jar:8.0.32]
        at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1408) [tomcat-embed-core-8.0.32.jar:8.0.32]
        at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1398) [tomcat-embed-core-8.0.32.jar:8.0.32]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_111]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_111]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_111]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: org.h2.jdbc.JdbcSQLException: IO Exception: "java.net.UnknownHostException: sonarqube-1-3688817312-lc59x: sonarqube-1-3688817312-lc59x: Name or service not known" [90028-176]
        at org.h2.message.DbException.getJdbcSQLException(DbException.java:344) ~[h2-1.3.176.jar:1.3.176]
        at org.h2.message.DbException.get(DbException.java:167) ~[h2-1.3.176.jar:1.3.176]
        at org.h2.message.DbException.convert(DbException.java:286) ~[h2-1.3.176.jar:1.3.176]
        at org.h2.util.NetUtils.getLocalAddress(NetUtils.java:269) ~[h2-1.3.176.jar:1.3.176]
        at org.h2.server.TcpServer.getURL(TcpServer.java:203) ~[h2-1.3.176.jar:1.3.176]
        at org.h2.tools.Server.start(Server.java:477) ~[h2-1.3.176.jar:1.3.176]
        at org.sonar.server.platform.db.EmbeddedDatabase.startServer(EmbeddedDatabase.java:78) ~[sonar-server-6.1.jar:na]
        ... 31 common frames omitted
Caused by: java.net.UnknownHostException: sonarqube-1-3688817312-lc59x: sonarqube-1-3688817312-lc59x: Name or service not known
        at java.net.InetAddress.getLocalHost(InetAddress.java:1505) ~[na:1.8.0_111]
        at org.h2.util.NetUtils.getLocalAddress(NetUtils.java:267) ~[h2-1.3.176.jar:1.3.176]
        ... 34 common frames omitted
Caused by: java.net.UnknownHostException: sonarqube-1-3688817312-lc59x: Name or service not known
        at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) ~[na:1.8.0_111]
        at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928) ~[na:1.8.0_111]
        at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323) ~[na:1.8.0_111]
        at java.net.InetAddress.getLocalHost(InetAddress.java:1500) ~[na:1.8.0_111]
        ... 35 common frames omitted
2016.12.06 20:47:03 ERROR web[][o.a.c.c.StandardContext] One or more listeners failed to start. Full details will be found in the appropriate container log file
2016.12.06 20:47:03 ERROR web[][o.a.c.c.StandardContext] Context [] startup failed due to previous errors
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][[timer]]] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.lang.Thread.sleep(Native Method)
 org.elasticsearch.threadpool.ThreadPool$EstimatedTimeThread.run(ThreadPool.java:719)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][scheduler][T#1]] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_worker][T#1]{New I/O worker #1}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_worker][T#2]{New I/O worker #2}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_worker][T#3]{New I/O worker #3}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_worker][T#4]{New I/O worker #4}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_worker][T#5]{New I/O worker #5}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_worker][T#6]{New I/O worker #6}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_worker][T#7]{New I/O worker #7}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_worker][T#8]{New I/O worker #8}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_worker][T#9]{New I/O worker #9}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_worker][T#10]{New I/O worker #10}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_worker][T#11]{New I/O worker #11}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_worker][T#12]{New I/O worker #12}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_worker][T#13]{New I/O worker #13}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_worker][T#14]{New I/O worker #14}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_worker][T#15]{New I/O worker #15}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_worker][T#16]{New I/O worker #16}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_worker][T#17]{New I/O worker #17}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_worker][T#18]{New I/O worker #18}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_worker][T#19]{New I/O worker #19}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_worker][T#20]{New I/O worker #20}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
[... the same WARN and identical stack trace repeated for elasticsearch[Locust][transport_client_worker] threads #21 through #48 ...]
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_boss][T#1]{New I/O boss #49}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
 org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
 org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][transport_client_timer][T#1]{Hashed wheel timer #1}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.lang.Thread.sleep(Native Method)
 org.jboss.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:445)
 org.jboss.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:364)
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 WARN  web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Locust][generic][T#1]] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
 java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
2016.12.06 20:47:03 INFO  web[][o.a.c.h.Http11NioProtocol] Starting ProtocolHandler ["http-nio-0.0.0.0-9000"]
2016.12.06 20:47:03 INFO  web[][o.s.s.a.TomcatAccessLog] Web server is started
2016.12.06 20:47:03 INFO  web[][o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2016.12.06 20:47:03 WARN  web[][o.s.p.ProcessEntryPoint] Fail to start web
java.lang.IllegalStateException: Webapp did not start
        at org.sonar.server.app.EmbeddedTomcat.isUp(EmbeddedTomcat.java:84) ~[sonar-server-6.1.jar:na]
        at org.sonar.server.app.WebServer.isUp(WebServer.java:46) [sonar-server-6.1.jar:na]
        at org.sonar.process.ProcessEntryPoint.launch(ProcessEntryPoint.java:105) ~[sonar-process-6.1.jar:na]
        at org.sonar.server.app.WebServer.main(WebServer.java:67) [sonar-server-6.1.jar:na]
2016.12.06 20:47:03 INFO  web[][o.a.c.h.Http11NioProtocol] Pausing ProtocolHandler ["http-nio-0.0.0.0-9000"]
2016.12.06 20:47:23 INFO  web[][o.a.c.h.Http11NioProtocol] Stopping ProtocolHandler ["http-nio-0.0.0.0-9000"]
2016.12.06 20:50:56 INFO  web[][o.a.c.h.Http11NioProtocol] Destroying ProtocolHandler ["http-nio-0.0.0.0-9000"]
2016.12.06 20:50:56 INFO  web[][o.s.s.a.TomcatAccessLog] Web server is stopped
2016.12.06 20:50:56 INFO  app[][o.s.p.m.Monitor] Process[es] is stopping
2016.12.06 20:50:56 INFO   es[][o.s.p.StopWatcher]  Stopping process
2016.12.06 20:50:56 INFO   es[][o.elasticsearch.node]  [sonarqube] stopping ...
2016.12.06 20:50:56 INFO   es[][o.elasticsearch.node]  [sonarqube] stopped
2016.12.06 20:50:56 INFO   es[][o.elasticsearch.node]  [sonarqube] closing ...
2016.12.06 20:50:56 INFO   es[][o.elasticsearch.node]  [sonarqube] closed
2016.12.06 20:50:57 INFO  app[][o.s.p.m.Monitor] Process[es] is stopped

[root@k8s-09 ~]# cat sonaqube.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sonarqube-1
spec:
  replicas: 1
  selector:
    matchLabels:
      run: sonarqube-1
  template:
    metadata:
      labels:
        run: sonarqube-1
    spec:
      containers:
      - image: sonarqube
        imagePullPolicy: Always
        name: sonarqube
        ports:
        - containerPort: 9000
          protocol: TCP
        - containerPort: 9092
          protocol: TCP

[root@k8s-09 ~]#

The YAML file works on Minikube on my Mac laptop. From the log we can see:

Caused by: org.h2.jdbc.JdbcSQLException: IO Exception: "java.net.UnknownHostException: sonarqube-1-3688817312-lc59x: sonarqube-1-3688817312-lc59x: Name or service not known" [90028-176]

'sonarqube-1-3688817312-lc59x' is the pod name. From the environment below we can see HOSTNAME=sonarqube-1-3688817312-lc59x.

[root@k8s-09 ~]# kubectl exec sonarqube-1-3688817312-lc59x -it -- env | sort

CA_CERTIFICATES_JAVA_VERSION=20140324
GITLAB_PORT_22_TCP_ADDR=10.108.63.168
GITLAB_PORT_22_TCP_PORT=22
GITLAB_PORT_22_TCP_PROTO=tcp
GITLAB_PORT_22_TCP=tcp://10.108.63.168:22
GITLAB_PORT_443_TCP_ADDR=10.108.63.168
GITLAB_PORT_443_TCP_PORT=443
GITLAB_PORT_443_TCP_PROTO=tcp
GITLAB_PORT_443_TCP=tcp://10.108.63.168:443
GITLAB_PORT_80_TCP_ADDR=10.108.63.168
GITLAB_PORT_80_TCP_PORT=80
GITLAB_PORT_80_TCP_PROTO=tcp
GITLAB_PORT_80_TCP=tcp://10.108.63.168:80
GITLAB_PORT=tcp://10.108.63.168:443
GITLAB_SERVICE_HOST=10.108.63.168
GITLAB_SERVICE_PORT=443
GITLAB_SERVICE_PORT_HTTP=80
GITLAB_SERVICE_PORT_HTTPS=443
GITLAB_SERVICE_PORT_SSH=22
HOME=/root
HOSTNAME=sonarqube-1-3688817312-lc59x
JAVA_DEBIAN_VERSION=8u111-b14-2~bpo8+1
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
JAVA_VERSION=8u111
JENKINS_PORT_50000_TCP_ADDR=10.106.147.38
JENKINS_PORT_50000_TCP_PORT=50000
JENKINS_PORT_50000_TCP_PROTO=tcp
JENKINS_PORT_50000_TCP=tcp://10.106.147.38:50000
JENKINS_PORT_8080_TCP_ADDR=10.106.147.38
JENKINS_PORT_8080_TCP_PORT=8080
JENKINS_PORT_8080_TCP_PROTO=tcp
JENKINS_PORT_8080_TCP=tcp://10.106.147.38:8080
JENKINS_PORT=tcp://10.106.147.38:8080
JENKINS_SERVICE_HOST=10.106.147.38
JENKINS_SERVICE_PORT=8080
JENKINS_SERVICE_PORT_FOR_SLAVE=50000
JENKINS_SERVICE_PORT_HTTP=8080
JENKINS_TWO_PORT_50000_TCP_ADDR=10.96.162.178
JENKINS_TWO_PORT_50000_TCP_PORT=50000
JENKINS_TWO_PORT_50000_TCP_PROTO=tcp
JENKINS_TWO_PORT_50000_TCP=tcp://10.96.162.178:50000
JENKINS_TWO_PORT_8080_TCP_ADDR=10.96.162.178
JENKINS_TWO_PORT_8080_TCP_PORT=8080
JENKINS_TWO_PORT_8080_TCP_PROTO=tcp
JENKINS_TWO_PORT_8080_TCP=tcp://10.96.162.178:8080
JENKINS_TWO_PORT=tcp://10.96.162.178:8080
JENKINS_TWO_SERVICE_HOST=10.96.162.178
JENKINS_TWO_SERVICE_PORT=8080
JENKINS_TWO_SERVICE_PORT_PORT_1=8080
JENKINS_TWO_SERVICE_PORT_PORT_2=50000
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
LANG=C.UTF-8
NEXUS_PORT_8081_TCP_ADDR=10.110.143.13
NEXUS_PORT_8081_TCP_PORT=8081
NEXUS_PORT_8081_TCP_PROTO=tcp
NEXUS_PORT_8081_TCP=tcp://10.110.143.13:8081
NEXUS_PORT=tcp://10.110.143.13:8081
NEXUS_SERVICE_HOST=10.110.143.13
NEXUS_SERVICE_PORT=8081
NEXUS_TWO_PORT_8081_TCP_ADDR=10.103.102.74
NEXUS_TWO_PORT_8081_TCP_PORT=8081
NEXUS_TWO_PORT_8081_TCP_PROTO=tcp
NEXUS_TWO_PORT_8081_TCP=tcp://10.103.102.74:8081
NEXUS_TWO_PORT=tcp://10.103.102.74:8081
NEXUS_TWO_SERVICE_HOST=10.103.102.74
NEXUS_TWO_SERVICE_PORT=8081
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
SONARQUBE_HOME=/opt/sonarqube
SONARQUBE_JDBC_PASSWORD=sonar
SONARQUBE_JDBC_URL=
SONARQUBE_JDBC_USERNAME=sonar
SONAR_VERSION=6.1

Now it has restarted 3 times in 12 minutes:

[root@k8s-09 ~]# kubectl get pod

NAME                           READY     STATUS             RESTARTS   AGE
busybox-3604520811-5ma4w       0/1       CrashLoopBackOff   1433       5d
curl-2421989462-9xhbw          1/1       Running            0          5d
gitlab-1983252133-tlaum        1/1       Running            1          15d
jenkins-2377375696-8qevu       1/1       Running            0          8d
jenkins-two-2811552478-2n462   1/1       Running            1          8d
nexus-1437449777-a11o6         1/1       Running            0          12d
nexus-two-3664299243-fe5cu     1/1       Running            0          7d
sonarqube-1-3688817312-lc59x   1/1       Running            3          12m

I want to check the content of /etc/hosts but cannot access it:

root@sonarqube-1-3688817312-lc59x:~# cat /etc/hosts
cat: /etc/hosts: Permission denied

When the pod is in CrashLoopBackOff I cannot access it at all; I have to wait a while until it is running again, and then my attempt to ping the pod name fails too:

[root@k8s-09 ~]# kubectl get pod

NAME                           READY     STATUS             RESTARTS   AGE
busybox-3604520811-5ma4w       0/1       CrashLoopBackOff   1435       5d
curl-2421989462-9xhbw          1/1       Running            0          5d
gitlab-1983252133-tlaum        1/1       Running            1          15d
jenkins-2377375696-8qevu       1/1       Running            0          8d
jenkins-two-2811552478-2n462   1/1       Running            1          8d
nexus-1437449777-a11o6         1/1       Running            0          13d
nexus-two-3664299243-fe5cu     1/1       Running            0          7d
sonarqube-1-3688817312-lc59x   0/1       CrashLoopBackOff   5          24m

I try to exec into the pod and ping the pod name sonarqube-1-3688817312-lc59x, but the container is not found:

[root@k8s-09 ~]# kubectl exec sonarqube-1-3688817312-lc59x -it ping sonarqube-1-3688817312-lc59x
error: Internal error occurred: error executing command in container: container not found ("sonarqube")
[root@k8s-09 ~]# kubectl get pod

NAME                           READY     STATUS             RESTARTS   AGE
busybox-3604520811-5ma4w       0/1       CrashLoopBackOff   1435       5d
curl-2421989462-9xhbw          1/1       Running            0          5d
gitlab-1983252133-tlaum        1/1       Running            1          15d
jenkins-2377375696-8qevu       1/1       Running            0          8d
jenkins-two-2811552478-2n462   1/1       Running            1          8d
nexus-1437449777-a11o6         1/1       Running            0          13d
nexus-two-3664299243-fe5cu     1/1       Running            0          7d
sonarqube-1-3688817312-lc59x   1/1       Running            6          25m

[root@k8s-09 ~]# kubectl exec sonarqube-1-3688817312-lc59x -it ping sonarqube-1-3688817312-lc59x
ping: unknown host
[root@k8s-09 ~]# kubectl describe pod sonarqube-1-3688817312-lc59x

Name:           sonarqube-1-3688817312-lc59x
Namespace:      default
Node:           k8s-08.huaweilab.com/10.145.101.81
Start Time:     Tue, 06 Dec 2016 12:44:30 -0800
Labels:         pod-template-hash=3688817312
                run=sonarqube-1
Status:         Running
IP:             10.40.0.4
Controllers:    ReplicaSet/sonarqube-1-3688817312
Containers:
  sonarqube:
    Container ID:       docker://b813d2320a0c706c3421a462253936539098ea5791e8955258699afc25a4032d
    Image:              sonarqube
    Image ID:           docker://sha256:7333743a8ff3a257d351307a74123f4e7c46933dae4af4aacc5020fc1a6e328f
    Ports:              9000/TCP, 9092/TCP
    State:              Waiting
      Reason:           CrashLoopBackOff
    Last State:         Terminated
      Reason:           Completed
      Exit Code:        0
      Started:          Tue, 06 Dec 2016 13:34:25 -0800
      Finished:         Tue, 06 Dec 2016 13:37:36 -0800
    Ready:              False
    Restart Count:      9
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ti679 (ro)
    Environment Variables:      <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
Volumes:
  default-token-ti679:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-ti679
QoS Class:      BestEffort
Tolerations:    <none>
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath                   Type            Reason          Message
  ---------     --------        -----   ----                            -------------                   --------        ------          -------
  53m           53m             1       {default-scheduler }                                            Normal          Scheduled       Successfully assigned sonarqube-1-3688817312-lc59x to k8s-08.huaweilab.com
  52m           52m             1       {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal          Created         Created container with docker id 48a2c33aa8ed; Security:[seccomp=unconfined]
  52m           52m             1       {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal          Started         Started container with docker id 48a2c33aa8ed
  49m           49m             1       {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal          Started         Started container with docker id 57e7c812bc21
  49m           49m             1       {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal          Created         Created container with docker id 57e7c812bc21; Security:[seccomp=unconfined]
  46m           46m             1       {kubelet k8s-08.huaweilab.com}                                  Warning         FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "sonarqube" with CrashLoopBackOff: "Back-off 10s restarting failed container=sonarqube pod=sonarqube-1-3688817312-lc59x_default(a7015932-bbf4-11e6-9595-ecf4bbc78ce4)"

  45m   45m     1       {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal  Started         Started container with docker id 80e308eb018a
  45m   45m     1       {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal  Created         Created container with docker id 80e308eb018a; Security:[seccomp=unconfined]
  42m   42m     2       {kubelet k8s-08.huaweilab.com}                                  Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "sonarqube" with CrashLoopBackOff: "Back-off 20s restarting failed container=sonarqube pod=sonarqube-1-3688817312-lc59x_default(a7015932-bbf4-11e6-9595-ecf4bbc78ce4)"

  42m   42m     1       {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal  Started         Started container with docker id f8815ebb7c47
  42m   42m     1       {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal  Created         Created container with docker id f8815ebb7c47; Security:[seccomp=unconfined]
  38m   38m     3       {kubelet k8s-08.huaweilab.com}                                  Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "sonarqube" with CrashLoopBackOff: "Back-off 40s restarting failed container=sonarqube pod=sonarqube-1-3688817312-lc59x_default(a7015932-bbf4-11e6-9595-ecf4bbc78ce4)"

  38m   38m     1       {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal  Started         Started container with docker id 919361b401ae
  38m   38m     1       {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal  Created         Created container with docker id 919361b401ae; Security:[seccomp=unconfined]
  34m   33m     7       {kubelet k8s-08.huaweilab.com}                                  Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "sonarqube" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=sonarqube pod=sonarqube-1-3688817312-lc59x_default(a7015932-bbf4-11e6-9595-ecf4bbc78ce4)"

  33m   33m     1       {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal  Created         Created container with docker id d80d20c62918; Security:[seccomp=unconfined]
  33m   33m     1       {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal  Started         Started container with docker id d80d20c62918
  30m   27m     12      {kubelet k8s-08.huaweilab.com}                                  Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "sonarqube" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=sonarqube pod=sonarqube-1-3688817312-lc59x_default(a7015932-bbf4-11e6-9595-ecf4bbc78ce4)"

  27m   27m             1       {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal  Created         Created container with docker id d45a0ac0c31a; Security:[seccomp=unconfined]
  27m   27m             1       {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal  Started         Started container with docker id d45a0ac0c31a
  19m   19m             1       {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal  Created         Created container with docker id 496a17175a74; Security:[seccomp=unconfined]
  19m   19m             1       {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal  Started         Started container with docker id 496a17175a74
  10m   10m             1       {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal  Created         Created container with docker id 50c33f2d75e0; Security:[seccomp=unconfined]
  10m   10m             1       {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal  Started         Started container with docker id 50c33f2d75e0
  52m   2m              10      {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal  Pulling         pulling image "sonarqube"
  2m    2m              1       {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal  Started         (events with common reason combined)
  2m    2m              1       {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal  Created         (events with common reason combined)
  52m   2m              10      {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Normal  Pulled          Successfully pulled image "sonarqube"
  24m   <invalid>       70      {kubelet k8s-08.huaweilab.com}                                  Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "sonarqube" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=sonarqube pod=sonarqube-1-3688817312-lc59x_default(a7015932-bbf4-11e6-9595-ecf4bbc78ce4)"

  46m   <invalid>       95      {kubelet k8s-08.huaweilab.com}  spec.containers{sonarqube}      Warning BackOff Back-off restarting failed docker container

[root@k8s-09 ~]#

I do not know why SonarQube tries to resolve sonarqube-1-3688817312-lc59x, and I wonder if there is a way to work around it. For example, is it possible to provide some environment via the kubectl run command or via the YAML file so that k8s-dns is not needed? If so, how, and which environment variable should I use? I do not know why Jenkins, GitLab and Nexus work in my k8s cluster; I guess they can run without using k8s-dns. I updated https://github.com/kubernetes/kubernetes/issues/37833 to give more information there, and I would like to provide more information here too. Thank you again! Bruce

Godin commented 7 years ago

@BruceZu wild guess - maybe you hit https://jira.sonarsource.com/browse/SONAR-8285, so please try to set -Dh2.bindAddress=127.0.0.1, either via the environment variable SONARQUBE_WEB_JVM_OPTS or via sonar.web.javaOpts in conf/sonar.properties. Also, since this seems to be related to the embedded H2 database, which in any case should be used only for evaluation purposes and not for production, consider switching to a production-grade database.

BruceZu commented 7 years ago

@Godin Thank you so much! It works. :+1: The detailed steps are as follows; I hope they are helpful for others with the same issue.

1> Update the configuration file directly in the container using kubectl exec <pod name> -it -- bash. After logging into the container I found there is no vi, vim or nano, and no sudo, yum or apt-get to install one (it would be nice if the image provided some kind of text editing tool :)). The sonar.web.javaOpts entry in the conf/sonar.properties file is commented out, so I ran:

# append a blank separator line, then the property
echo '' >> /opt/sonarqube/conf/sonar.properties
echo 'sonar.web.javaOpts=-Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Dh2.bindAddress=127.0.0.1' >> /opt/sonarqube/conf/sonar.properties

Then verify with cat /opt/sonarqube/conf/sonar.properties | grep web.javaOpts.

However, this approach could not work in my case. My pod runs without state because I did not provide a volume when I created the deployment (and thus the pod), whether by kubectl run or by a YAML file. So once the pod goes into 'CrashLoopBackOff' (after the container is terminated), k8s recreates the container and any update made inside the container is lost. This approach would work if the container kept its state through a volume; see the sketch right after this paragraph.
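
One possible (untested here) way to keep a customized sonar.properties across container restarts is to store the complete file in a ConfigMap and mount just that file over the copy shipped in the image. The ConfigMap name sonar-conf below is illustrative and the rest of the pod spec is omitted:

spec:
  containers:
  - name: sonarqube
    image: sonarqube
    volumeMounts:
    - name: sonar-conf
      # subPath mounts only sonar.properties, leaving the rest of conf/ from the image intact
      mountPath: /opt/sonarqube/conf/sonar.properties
      subPath: sonar.properties
  volumes:
  - name: sonar-conf
    configMap:
      name: sonar-conf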

2> Deploy and specify the environment variable

I tried this: kubectl run sonarqube --image=sonarqube --port=9092 --env="SONARQUBE_WEB_JVM_OPTS=-Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Dh2.bindAddress=127.0.0.1"

Then I updated the deployment to open two ports:

        - containerPort: 9000
          protocol: TCP
        - containerPort: 9092
          protocol: TCP

Then I watched the pod with kubectl attach <pod name>.

It works for me. At last the startup gets past the issue and the web server is up now:

2016.12.07 18:34:19 INFO  web[][o.s.s.a.TomcatAccessLog] Web server is started
2016.12.07 18:34:19 INFO  web[][o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2016.12.07 18:34:19 INFO  app[][o.s.p.m.Monitor] Process[web] is up
2016.12.07 18:34:19 INFO  app[][o.s.p.m.JavaProcessLauncher] Launch process[ce]: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/opt/sonarqube/temp -javaagent:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management-agent.jar -cp ./lib/common/*:./lib/server/*:./lib/ce/*:/opt/sonarqube/lib/jdbc/h2/h2-1.3.176.jar org.sonar.ce.app.CeServer /opt/sonarqube/temp/sq-process8411639701109711000properties
2016.12.07 18:35:00 INFO  ce[][o.s.p.ProcessEntryPoint] Starting ce
2016.12.07 18:35:00 INFO  ce[][o.s.ce.app.CeServer] Compute Engine starting up...
2016.12.07 18:35:00 INFO  ce[][o.e.plugins] [Reaper] modules [], plugins [], sites []
2016.12.07 18:35:21 INFO  ce[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2016.12.07 18:35:21 INFO  ce[][o.sonar.db.Database] Create JDBC data source for jdbc:h2:tcp://localhost:9092/sonar
2016.12.07 18:36:02 WARN  ce[][o.s.d.DatabaseChecker] H2 database should be used for evaluation purpose only
2016.12.07 18:36:03 INFO  ce[][o.s.s.p.ServerFileSystemImpl] SonarQube home: /opt/sonarqube
2016.12.07 18:36:03 INFO  ce[][o.s.c.c.CePluginRepository] Load plugins
2016.12.07 18:36:04 INFO  ce[][o.s.s.c.q.PurgeCeActivities] Delete the Compute Engine tasks created before Fri Jun 10 18:36:04 UTC 2016
2016.12.07 18:36:04 INFO  ce[][o.s.ce.app.CeServer] Compute Engine is up
2016.12.07 18:36:04 INFO  app[][o.s.p.m.Monitor] Process[ce] is up

Checking the environment variables inside the container:

[root@k8s-09 ~]# kubectl exec sonarqube-2950109087-q3nir -it -- bash
root@sonarqube-2950109087-q3nir:/opt/sonarqube# env | sort

CA_CERTIFICATES_JAVA_VERSION=20140324
GITLAB_PORT=tcp://10.108.63.168:443
GITLAB_PORT_22_TCP=tcp://10.108.63.168:22
GITLAB_PORT_22_TCP_ADDR=10.108.63.168
GITLAB_PORT_22_TCP_PORT=22
GITLAB_PORT_22_TCP_PROTO=tcp
GITLAB_PORT_443_TCP=tcp://10.108.63.168:443
GITLAB_PORT_443_TCP_ADDR=10.108.63.168
GITLAB_PORT_443_TCP_PORT=443
GITLAB_PORT_443_TCP_PROTO=tcp
GITLAB_PORT_80_TCP=tcp://10.108.63.168:80
GITLAB_PORT_80_TCP_ADDR=10.108.63.168
GITLAB_PORT_80_TCP_PORT=80
GITLAB_PORT_80_TCP_PROTO=tcp
GITLAB_SERVICE_HOST=10.108.63.168
GITLAB_SERVICE_PORT=443
GITLAB_SERVICE_PORT_HTTP=80
GITLAB_SERVICE_PORT_HTTPS=443
GITLAB_SERVICE_PORT_SSH=22
HOME=/root
HOSTNAME=sonarqube-2950109087-q3nir
JAVA_DEBIAN_VERSION=8u111-b14-2~bpo8+1
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
JAVA_VERSION=8u111
JENKINS_PORT=tcp://10.106.147.38:8080
JENKINS_PORT_50000_TCP=tcp://10.106.147.38:50000
JENKINS_PORT_50000_TCP_ADDR=10.106.147.38
JENKINS_PORT_50000_TCP_PORT=50000
JENKINS_PORT_50000_TCP_PROTO=tcp
JENKINS_PORT_8080_TCP=tcp://10.106.147.38:8080
JENKINS_PORT_8080_TCP_ADDR=10.106.147.38
JENKINS_PORT_8080_TCP_PORT=8080
JENKINS_PORT_8080_TCP_PROTO=tcp
JENKINS_SERVICE_HOST=10.106.147.38
JENKINS_SERVICE_PORT=8080
JENKINS_SERVICE_PORT_FOR_SLAVE=50000
JENKINS_SERVICE_PORT_HTTP=8080
JENKINS_TWO_PORT=tcp://10.96.162.178:8080
JENKINS_TWO_PORT_50000_TCP=tcp://10.96.162.178:50000
JENKINS_TWO_PORT_50000_TCP_ADDR=10.96.162.178
JENKINS_TWO_PORT_50000_TCP_PORT=50000
JENKINS_TWO_PORT_50000_TCP_PROTO=tcp
JENKINS_TWO_PORT_8080_TCP=tcp://10.96.162.178:8080
JENKINS_TWO_PORT_8080_TCP_ADDR=10.96.162.178
JENKINS_TWO_PORT_8080_TCP_PORT=8080
JENKINS_TWO_PORT_8080_TCP_PROTO=tcp
JENKINS_TWO_SERVICE_HOST=10.96.162.178
JENKINS_TWO_SERVICE_PORT=8080
JENKINS_TWO_SERVICE_PORT_PORT_1=8080
JENKINS_TWO_SERVICE_PORT_PORT_2=50000
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
LANG=C.UTF-8
NEXUS_PORT=tcp://10.110.143.13:8081
NEXUS_PORT_8081_TCP=tcp://10.110.143.13:8081
NEXUS_PORT_8081_TCP_ADDR=10.110.143.13
NEXUS_PORT_8081_TCP_PORT=8081
NEXUS_PORT_8081_TCP_PROTO=tcp
NEXUS_SERVICE_HOST=10.110.143.13
NEXUS_SERVICE_PORT=8081
NEXUS_TWO_PORT=tcp://10.103.102.74:8081
NEXUS_TWO_PORT_8081_TCP=tcp://10.103.102.74:8081
NEXUS_TWO_PORT_8081_TCP_ADDR=10.103.102.74
NEXUS_TWO_PORT_8081_TCP_PORT=8081
NEXUS_TWO_PORT_8081_TCP_PROTO=tcp
NEXUS_TWO_SERVICE_HOST=10.103.102.74
NEXUS_TWO_SERVICE_PORT=8081
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/opt/sonarqube
SHLVL=1
SONARQUBE_HOME=/opt/sonarqube
SONARQUBE_JDBC_PASSWORD=sonar
SONARQUBE_JDBC_URL=
SONARQUBE_JDBC_USERNAME=sonar
SONARQUBE_WEB_JVM_OPTS=-Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Dh2.bindAddress=127.0.0.1
SONAR_VERSION=6.1
_=/usr/bin/env

The YAML file sonarqube.yaml should be as follows (not tested yet); cat sonarqube.yaml gives:


apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sonarqube-1
spec:
  replicas: 1
  selector:
    matchLabels:
      run: sonarqube-1
  template:
    metadata:
      labels:
        run: sonarqube-1
    spec:
      containers:
      - env:
        - name: SONARQUBE_WEB_JVM_OPTS
          value: -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Dh2.bindAddress=127.0.0.1
        image: sonarqube
        imagePullPolicy: Always
        name: sonarqube
        ports:
        - containerPort: 9092
          protocol: TCP
        - containerPort: 9000
          protocol: TCP
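
To actually run this, one would apply the manifest and then expose port 9000 through a Service; the snippet below is a minimal, untested sketch (the Service name sonarqube and the NodePort type are illustrative choices, not something verified in this thread):

apiVersion: v1
kind: Service
metadata:
  name: sonarqube
spec:
  type: NodePort
  selector:
    run: sonarqube-1   # matches the pod labels used by the Deployment above
  ports:
  - port: 9000
    targetPort: 9000

With both files applied via kubectl create -f (or kubectl apply -f), the web UI should be reachable on the node port that Kubernetes assigns for port 9000.
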
BruceZu commented 7 years ago

@Godin By the way, if I use PostgreSQL, should the value -Dh2.bindAddress=127.0.0.1 be changed to -Dpostgresql.bindAddress=127.0.0.1? Before using the default H2 I had tried PostgreSQL and got the same error.

Godin commented 7 years ago

@BruceZu no - if you use PostgreSQL, the h2.bindAddress value is not needed at all; that was my initial point.
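
For reference, switching to an external PostgreSQL database is done through the JDBC environment variables already visible in the env output above. A minimal sketch for the container's env section (the host postgres, database sonar and credentials are illustrative assumptions, not values from this thread) would be:

        env:
        - name: SONARQUBE_JDBC_URL
          # point the web and ce processes at an external PostgreSQL instead of embedded H2
          value: jdbc:postgresql://postgres:5432/sonar
        - name: SONARQUBE_JDBC_USERNAME
          value: sonar
        - name: SONARQUBE_JDBC_PASSWORD
          value: sonar

With these set, SonarQube should no longer rely on the embedded H2 on port 9092, so the -Dh2.bindAddress option becomes irrelevant, as noted above.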