
API Gateway puts nothing to Elasticsearch #8288

Closed sbashtyrev closed 2 years ago

sbashtyrev commented 2 years ago

:collision: Describe the bug

The gateway starts well as a Swarm service and the published APIs respond correctly according to their assigned plans. But no logs appear in the API Management Console or in Elasticsearch.

In the container log I see this message repeating in a loop:

07:47:18.022 [vertx-blocked-thread-checker] [] WARN  i.v.core.impl.BlockedThreadChecker - Thread Thread[vert.x-worker-thread-0,5,main] has been blocked for 135707 ms, time limit is 60000 ms
io.vertx.core.VertxException: Thread blocked
    at java.base@17.0.4/jdk.internal.misc.Unsafe.park(Native Method)
    at java.base@17.0.4/java.util.concurrent.locks.LockSupport.park(Unknown Source)
    at java.base@17.0.4/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Unknown Source)
    at java.base@17.0.4/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(Unknown Source)
    at java.base@17.0.4/java.util.concurrent.CountDownLatch.await(Unknown Source)
    at io.reactivex.internal.observers.BlockingMultiObserver.blockingGet(BlockingMultiObserver.java:85)
    at io.reactivex.Single.blockingGet(Single.java:2870)
    at io.gravitee.reporter.elasticsearch.ElasticsearchReporter.retrieveElasticSearchInfo(ElasticsearchReporter.java:147)
    at io.gravitee.reporter.elasticsearch.ElasticsearchReporter.doStart(ElasticsearchReporter.java:67)
    at io.gravitee.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:32)
    at io.gravitee.node.reporter.vertx.eventbus.EventBusReporterWrapper$1.handle(EventBusReporterWrapper.java:67)
    at io.gravitee.node.reporter.vertx.eventbus.EventBusReporterWrapper$1.handle(EventBusReporterWrapper.java:63)
    at io.vertx.core.impl.ContextImpl.lambda$null$0(ContextImpl.java:159)
    at io.vertx.core.impl.ContextImpl$$Lambda$1191/0x0000000801474000.handle(Unknown Source)
    at io.vertx.core.impl.AbstractContext.dispatch(AbstractContext.java:100)
    at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$1(ContextImpl.java:157)
    at io.vertx.core.impl.ContextImpl$$Lambda$1190/0x000000080146ba10.run(Unknown Source)
    at io.vertx.core.impl.TaskQueue.run(TaskQueue.java:76)
    at io.vertx.core.impl.TaskQueue$$Lambda$290/0x0000000800f05420.run(Unknown Source)
    at java.base@17.0.4/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.base@17.0.4/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base@17.0.4/java.lang.Thread.run(Unknown Source)

The Elasticsearch host is reachable from inside the container with curl https://login:password@nginx-proxy-to-elastic:443, and the certificate is verified correctly.
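
For reference, this is roughly how I check it from inside the gateway container (same placeholder credentials and hostname as in the configs below):

# run from inside the gateway container; credentials and hostname are placeholders
curl -v https://login:password@nginx-proxy-to-elastic:443
# the TLS handshake succeeds against the mounted cacerts and the proxy returns a response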

Here is my Swarm stack config:

version: '3.3'
services:
  gateway:
    image: graviteeio/apim-gateway:3.17.6-ee
    environment:
      gravitee_management_mongodb_uri: mongodb://gravitee:changeme@mongo1,mongo2,mongo3:27017/gravitee?serverSelectionTimeoutMS=5000&connectTimeoutMS=5000&socketTimeoutMS=5000
      gravitee_ratelimit_mongodb_uri: mongodb://gravitee:changeme@mongo1,mongo2,mongo3:27017/gravitee?serverSelectionTimeoutMS=5000&connectTimeoutMS=5000&socketTimeoutMS=5000
      gravitee_reporters_elasticsearch_endpoints_0: https://login:password@nginx-proxy-to-elastic:443
      gravitee_tags: swarm
      gravitee_tenant: private
    ports:
     - published: 10002
       target: 8082
       protocol: tcp
       mode: ingress
    volumes:
     - /opt/applogs/graviteeio/docker-gw:/opt/graviteeio-gateway/logs
     - /opt/graviteeio/apim/gateway/license/license.key:/opt/graviteeio-gateway/license/license.key
     - /opt/jdk/jre/lib/security/cacerts:/opt/java/openjdk/lib/security/cacerts
    networks:
     - default
    logging:
      driver: json-file
    deploy:
      replicas: 2
      placement:
        constraints:
         - node.labels.yoda-role == igw
      resources:
        reservations:
          cpus: '0.1'
        limits:
          cpus: '0.5'
networks:
  default:
    driver: overlay

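For reference, if I understand the container's environment-variable mapping correctly (underscores become nested keys, a trailing _0 an array index), the reporter variable above should correspond to this gravitee.yml fragment (same placeholder credentials and hostname):

reporters:
  elasticsearch:
    endpoints:
      - https://login:password@nginx-proxy-to-elastic:443
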
Here is the relevant part of nginx.conf:

server {
    listen          443 ssl;
    server_name     nginx-proxy-to-elastic;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout  10m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header X-NginX-Proxy true;
            client_max_body_size 100m;
            proxy_pass https://elasticsearch-host:9092;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_cache_bypass $http_upgrade;
    }
}

P.S. I also tried without nginx in the middle, but the result was the same.

:sunrise_over_mountains: To Reproduce

Steps to reproduce the behaviour:

  1. Deploy the gateway as a Docker Swarm service with the stack config above.
  2. Point the Elasticsearch reporter at the cluster, either directly or through the nginx proxy.
  3. Call any published API through the gateway.
  4. See the "Thread blocked" warning in the container log; no logs or analytics appear in the Console or in Elasticsearch.

:rainbow: Expected behaviour

Request logs and analytics generated by the gateway are sent to Elasticsearch and are visible in the API Management Console.

Current behaviour

The gateway serves API traffic normally, but the Elasticsearch reporter blocks on startup (see the stack trace above), nothing is written to Elasticsearch, and the Console shows no logs.

:movie_camera: Useful information

See the container log and configuration excerpts above.

:computer: Desktop:

OS: RHEL 7.0
Nginx: 1.20.1
Elasticsearch: 7.17.1 (or 6.8.1)
apim-gateway: 3.17.6-ee (3.14.0, 3.18.0)
Docker: 20.10.17, build 100c701

:warning: Potential impacts

Analytics and request logging are unavailable while the Elasticsearch reporter is blocked at startup.

What are the impacted versions?

Observed with the apim-gateway versions listed above.


sbashtyrev commented 2 years ago

Deploying v3.18.0 everywhere has solved the problem.