lmenezes / cerebro


Cerebro Error Cannot start com.google.inject.CreationException: Unable to create injector (Slick Plugin issue or config issue?) #592

Open danielkurniadi opened 5 months ago

danielkurniadi commented 5 months ago

Description

We observed that the cerebro pod keeps crashing on startup in the tm-gcp-emea-build-infra cluster (namespace: logging), and the error points to one of the Slick Java plugins. Checking in whether this is a known issue you have seen or are familiar with, and whether you could point out a solution.

Oops, cannot start the server.
com.google.inject.CreationException: Unable to create injector, see the following errors:

1) Error injecting method, java.lang.ExceptionInInitializerError
  at com.google.inject.util.Providers$GuicifiedProviderWithDependencies.initialize(Providers.java:154)
  at play.api.db.slick.SlickModule.$anonfun$namedDatabaseConfigBindings$1(SlickModule.scala:46):
Binding(interface play.api.db.slick.DatabaseConfigProvider qualified with QualifierInstance(@play.db.NamedDatabase(value=default)) to ProviderTarget(play.api.db.slick.NamedDatabaseConfigProvider@45d64d27)) (via modules: com.google.inject.util.Modules$OverrideModule -> play.api.inject.guice.GuiceableModuleConversions$$anon$4)

1 error
        at com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:543)
        at com.google.inject.internal.InternalInjectorCreator.injectDynamically(InternalInjectorCreator.java:178)
        at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:109)
        at com.google.inject.Guice.createInjector(Guice.java:87)
        at com.google.inject.Guice.createInjector(Guice.java:78)
        at play.api.inject.guice.GuiceBuilder.injector(GuiceInjectorBuilder.scala:200)
        at ...
        at com.google.inject.internal.Initializer.injectAll(Initializer.java:140)
        at com.google.inject.internal.InternalInjectorCreator.injectDynamically(InternalInjectorCreator.java:176)
        ... 9 more
Caused by: java.lang.ExceptionInInitializerError
        at akka.util.Unsafe.<clinit>(Unsafe.java:52)
        at akka.dispatch.AbstractNodeQueue.<clinit>(AbstractNodeQueue.java:179)
        ... 65 more
Caused by: java.lang.NoClassDefFoundError: sun/misc/Unsafe
        at akka.util.Unsafe.<clinit>(Unsafe.java:27)
        ... 66 more
Caused by: java.lang.ClassNotFoundException: sun.misc.Unsafe
        at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(Unknown Source)
        at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(Unknown Source)
        at java.base/java.lang.ClassLoader.loadClass(Unknown Source)
        ... 67 more

More logs in the gist here: https://gist.github.com/danielkurniadi/a3eb6cebdf64363ffbb9980b17714143
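
For reference, the Java runtime bundled in the cerebro image can be checked roughly like this (a sketch: the deployment name and namespace match our setup below, and the image reference is a placeholder, not the actual value):

❯ kubectl -n logging get deploy cerebro -o jsonpath='{.spec.template.spec.containers[0].image}'
❯ docker run --rm --entrypoint java <cerebro-image> -version   # run locally, since the pod keeps crash-looping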

Configurations

The following is the configuration we added to cerebro.

apiVersion: v1
data:
  application.conf: |
    # Source https://github.com/lmenezes/cerebro/blob/4f61ddea6578c7ee1e56cea30da8558b1390d846/conf/application.conf
    secret = "ki:s:52>SOME_GIBBERISH"

    # Application base path
    basePath = "/"

    # Defaults to RUNNING_PID at the root directory of the app.
    # To avoid creating a PID file set this value to /dev/null
    pidfile.path=/dev/null

    # Rest request history max size per user
    rest.history.size = 50 // defaults to 50 if not specified

    play.ws.ssl.loose.acceptAnyCertificate = true

    # Path of local database file
    #data.path: "/var/lib/cerebro/cerebro.db"
    data.path = "./cerebro.db"
    hosts = [
      {
        host = "https://elasticsearch-logging-es-http:9200"
        name = "elasticsearch-logging"
      },
      {
        host = "https://elasticsearch-jaeger-es-http.jaeger.svc:9200"
        name = "elasticsearch-jaeger"
      },
      {
        host = "https://elasticsearch-datastore-es-http:9200"
        name = "elasticsearch-payments"
      }
    ]
kind: ConfigMap
metadata:
  labels:
    app: cerebro
    app.kubernetes.io/part-of: cerebro
    project: logging
  name: cerebro-config-d5hbb98g4d-cerebro-b757fdf3a0
  namespace: logging
  resourceVersion: "660400214"
  uid: 5a61f312-4554-4f87-8f88-c353f302736a
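
The ConfigMap above is what gets mounted into the cerebro Deployment as application.conf. A quick way to double-check that wiring (the exact volume names and mount path depend on the Deployment manifest, which isn't shown here):

❯ kubectl -n logging get deploy cerebro -o jsonpath='{.spec.template.spec.volumes}'
❯ kubectl -n logging get deploy cerebro -o jsonpath='{.spec.template.spec.containers[0].volumeMounts}'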

Can you help look into this and provide some guidance on where to start? Any additional information about what the issue actually is would be really helpful, in case you have observed or know of this issue before.

Thanks 🙏

danielkurniadi commented 5 months ago

cc @lmenezes in case you can quickly point us to anything. We use this in production and it is now blocking our rollout.

Here is what our workload looks like in Kubernetes. We run cerebro alongside Elasticsearch, which we deployed for high-volume data ingestion, more or less following this approach:

[image]

❯ kgd
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
cerebro               0/2     2            0           12d
es-logging-exporter   1/1     1            1           427d
kibana-logging-kb     1/1     1            1           427d
logging-curator       1/1     1            1           427d

❯ k get sts
NAME                                         READY   AGE
elasticsearch-logging-es-data-optimised      2/2     249d
elasticsearch-logging-es-masters-optimised   3/3     249d