TheHive-Project / TheHive

TheHive: a Scalable, Open Source and Free Security Incident Response Platform
https://thehive-project.org
GNU Affero General Public License v3.0
3.28k stars · 606 forks

[Question] GKE deployment of strangebee/thehive:latest is failing "CrashLoopBackOff" #2382

Closed · fp-dshim closed this issue 2 years ago

fp-dshim commented 2 years ago

Request Type

Question

Work Environment

Question Answer
OS version (server) GKE 1.20.15
OS version (client) Linux
Virtualized Env. True
Dedicated RAM 60 GB
vCPU 4
TheHive version / git hash strangebee/thehive:latest
Package Type Docker
Database Cassandra
Index type Elasticsearch
Attachments storage GKE pod storage
Browser type & version N/A

Question

I've deployed TheHive using https://docs.strangebee.com/thehive/setup/installation/kubernetes.yml on a GKE 1.20.15 cluster. The TheHive pod is failing with status CrashLoopBackOff.

I'm not sure how to resolve this issue. Thanks in advance for any help.
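For reference, here's roughly how I'm collecting the diagnostics below (the pod name is from my cluster, and I'm assuming the `thehive` namespace shown in the logs; adjust for your setup):

```shell
# Show restart events, last container state and exit code for the crashing pod
kubectl -n thehive describe pod thehive-666975667f-jg69p

# Recent events in the namespace, newest last
kubectl -n thehive get events --sort-by=.metadata.creationTimestamp
```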

[info] o.j.d.c.ExecutorServiceBuilder [|] Initiated fixed thread pool of size 2
[info] o.j.d.Backend [|] Configuring total store cache size: 123039304
[info] o.j.d.l.k.KCVSLog [|] Loaded unidentified ReadMarker start time 2022-05-03T22:20:16.507207Z into org.janusgraph.diskstorage.log.kcvs.KCVSLog$MessagePuller@7d5a4841
[info] o.t.s.j.JanusDatabase [|] Full-text index is available (elasticsearch:[elasticsearch]) cluster
[info] a.c.s.ClusterSingletonManager [1daa451e28218b57|2d8c5ff460a2e718] ClusterSingletonManager state change [Start -> Younger]
[error] o.t.t.TheHiveStarter [|] TheHive startup failure
java.util.concurrent.TimeoutException: Future timed out after [10 seconds]
.
.
[info] a.a.CoordinatedShutdown [|] Running CoordinatedShutdown with reason [ApplicationShutdownReason]
Exception in thread "main" java.util.concurrent.TimeoutException: Future timed out after [10 seconds]
.
.
    at org.thp.thehive.TheHiveStarter.main(TheHiveStarter.scala)
[info] o.t.s.m.Database [|] Closing database
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.139.4:42271] - Marked address [akka://application@10.36.139.4:42271] as [Leaving]
[info] a.c.s.ClusterSingletonManager [|] Exited [akka://application@10.36.139.4:42271].
[info] a.c.s.ClusterSingletonManager [|] Exited [akka://application@10.36.139.4:42271].
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.139.4:42271] - Leader is moving node [akka://application@10.36.139.4:42271] to [Exiting]
[info] a.c.s.SplitBrainResolver [|] This node is not the leader any more and not responsible for taking SBR decisions.
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.139.4:42271] - Exiting completed
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.139.4:42271] - Shutting down...
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.139.4:42271] - Successfully shut down
[info] a.c.s.ClusterSingletonManager [|] Self removed, stopping ClusterSingletonManager
[info] a.c.s.ClusterSingletonManager [|] Self removed, stopping ClusterSingletonManager
[info] a.r.RemoteActorRefProvider$RemotingTerminator [|] Shutting down remote daemon.
[info] a.r.RemoteActorRefProvider$RemotingTerminator [|] Remote daemon shut down; proceeding with flushing remote transports.
[warn] a.s.Materializer [|] [outbound connection to [akka://application@10.36.140.5:39983], control stream] Upstream failed, cause: StreamTcpException: The connection has been aborted
[warn] a.s.Materializer [|] [outbound connection to [akka://application@10.36.140.5:39983], message stream] Upstream failed, cause: StreamTcpException: The connection has been aborted
[info] a.r.RemoteActorRefProvider$RemotingTerminator [|] Remoting shut down.
$ kubectl get deployments
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
thehive   1/2     2            1           4h14m
$ k get pods
NAME                       READY   STATUS             RESTARTS   AGE
cassandra-0                1/1     Running            0          4h15m
elasticsearch-0            1/1     Running            0          4h15m
minio-0                    1/1     Running            0          4h15m
thehive-666975667f-jg69p   0/1     CrashLoopBackOff   30         143m
thehive-666975667f-sdw99   1/1     Running            0          4h14m
vdebergue commented 2 years ago

Hello, thanks for using TheHive. Do you have a fuller stack trace / more logs that would show which Future is timing out?
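If the pod has already restarted, something like this should recover the full output of the crashed attempt (pod name and namespace taken from the `kubectl get pods` output above; adjust as needed):

```shell
# Logs of the previous (crashed) container instance
kubectl -n thehive logs thehive-666975667f-jg69p --previous

# Save them to a file for attaching to the issue
kubectl -n thehive logs thehive-666975667f-jg69p --previous > thehive-crash.log
```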

fp-dshim commented 2 years ago

I am using https://docs.strangebee.com/thehive/setup/installation/kubernetes.yml. I've deleted the deployment and recreated it. Now the error is:

Using cassandra address = 10.36.168.112
Using elasticsearch address = elasticsearch with index thehive
Using S3 http://minio:9000/ bucket=thehive
Add Cortex cortex0: http://cortex:9001/
Using Kubernetes with pod label selector 'app=thehive'
OpenJDK 64-Bit Server VM warning: Sharing is only supported for boot loader classes because bootstrap classpath has been appended
13:54:50.206 [main] DEBUG oshi.util.FileUtil - Reading file /proc/stat
13:54:50.381 [main] DEBUG oshi.util.FileUtil - Reading file /etc/os-release
13:54:50.381 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: NAME="Debian GNU/Linux"
13:54:50.382 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: VERSION_ID="11"
13:54:50.382 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: VERSION="11 (bullseye)"
13:54:50.383 [main] DEBUG oshi.util.FileUtil - Reading file /proc/version
13:54:50.463 [main] DEBUG oshi.util.FileUtil - Reading file /proc/cpuinfo
13:54:50.487 [main] DEBUG oshi.util.FileUtil - Reading file /etc/os-release
13:54:50.487 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: NAME="Debian GNU/Linux"
13:54:50.488 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: VERSION_ID="11"
13:54:50.488 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: VERSION="11 (bullseye)"
13:54:50.488 [main] DEBUG oshi.util.FileUtil - Reading file /proc/version
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.enterprise.FrontendModule
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.enterprise.EnterpriseModule
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.TheHiveModule
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.controllers.v0.TheHiveModuleV0
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.controllers.v1.TheHiveModuleV1
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.controllers.dav.TheHiveFSModule
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.connector.cortex.CortexModule
[info] a.e.s.Slf4jLogger [|] Slf4jLogger started
[info] a.r.a.ArteryTransport [|] Remoting started with transport [Artery tcp]; listening on address [akka://application@10.36.140.9:37477] with UID [6508679973264181526]
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.9:37477] - Starting up, Akka version [2.6.18] ...
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.9:37477] - Registered cluster JMX MBean [akka:type=Cluster]
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.9:37477] - Started up successfully
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.9:37477] - No seed nodes found in configuration, relying on Cluster Bootstrap for joining
[info] a.c.s.SplitBrainResolver [|] SBR started. Config: strategy [KeepMajority], stable-after [20 seconds], down-all-when-unstable [15 seconds], selfUniqueAddress [akka://application@10.36.140.9:37477#6508679973264181526], selfDc [default].
[info] a.m.c.b.ClusterBootstrap [|] ClusterBootstrap loaded through 'akka.extensions' auto starting management and bootstrap.
[info] a.m.i.HealthChecksImpl [|] Loading readiness checks [(cluster-membership,akka.management.cluster.scaladsl.ClusterMembershipCheck), (sharding,akka.cluster.sharding.ClusterShardingHealthCheck)]
[info] a.m.i.HealthChecksImpl [|] Loading liveness checks []
[info] a.m.s.AkkaManagement [|] Binding Akka Management (HTTP) endpoint to: 10.36.140.9:8558
[info] a.m.s.AkkaManagement [|] Including HTTP management routes for ClusterHttpManagementRouteProvider
[info] a.m.s.AkkaManagement [|] Including HTTP management routes for ClusterBootstrap
[info] a.m.c.b.ClusterBootstrap [|] Using self contact point address: http://10.36.140.9:8558/
[info] a.m.s.AkkaManagement [|] Including HTTP management routes for HealthCheckRoutes
[info] a.m.c.b.ClusterBootstrap [|] Initiating bootstrap procedure using kubernetes-api method...
[info] a.m.c.b.ClusterBootstrap [|] Bootstrap using `akka.discovery` method: kubernetes-api
[info] a.m.s.AkkaManagement [|] Bound Akka Management (HTTP) endpoint to: 10.36.140.9:8558
[info] a.m.c.b.i.BootstrapCoordinator [|] Locating service members. Using discovery [akka.discovery.kubernetes.KubernetesApiServiceDiscovery], join decider [akka.management.cluster.bootstrap.LowestAddressJoinDecider], scheme [http]
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-137-9.thehive.pod.cluster.local,None,Some(/10.36.137.9)), ResolvedTarget(10-36-140-9.thehive.pod.cluster.local,None,Some(/10.36.140.9))], filtered to [10-36-140-9.thehive.pod.cluster.local:0, 10-36-137-9.thehive.pod.cluster.local:0]
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.137.9:34351] returned [0] seed-nodes []
[info] a.m.c.b.c.HttpClusterBootstrapRoutes [3ca620ea1f7c5992|416f39279c40fd85] Bootstrap request from 10.36.140.9:35370: Contact Point returning 0 seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.140.9:37477] returned [0] seed-nodes []
[info] a.m.c.b.c.HttpClusterBootstrapRoutes [3d55af720c469eb7|b2f4df3c63c3f463] Bootstrap request from 10.36.137.9:47774: Contact Point returning 0 seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-137-9.thehive.pod.cluster.local,None,Some(/10.36.137.9)), ResolvedTarget(10-36-140-9.thehive.pod.cluster.local,None,Some(/10.36.140.9))], filtered to [10-36-140-9.thehive.pod.cluster.local:0, 10-36-137-9.thehive.pod.cluster.local:0]
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.137.9:34351] returned [0] seed-nodes []
[info] a.m.c.b.c.HttpClusterBootstrapRoutes [c17699da2f847eb2|9aa92381ecc086d2] Bootstrap request from 10.36.140.9:35370: Contact Point returning 0 seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.140.9:37477] returned [0] seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-137-9.thehive.pod.cluster.local,None,Some(/10.36.137.9)), ResolvedTarget(10-36-140-9.thehive.pod.cluster.local,None,Some(/10.36.140.9))], filtered to [10-36-140-9.thehive.pod.cluster.local:0, 10-36-137-9.thehive.pod.cluster.local:0]
[info] a.m.c.b.c.HttpClusterBootstrapRoutes [a0998285bae12ec7|05d8c4597c9a1906] Bootstrap request from 10.36.140.9:35370: Contact Point returning 0 seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.140.9:37477] returned [0] seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.137.9:34351] returned [1] seed-nodes [akka://application@10.36.137.9:34351]
[info] a.m.c.b.i.BootstrapCoordinator [|] Joining [akka://application@10.36.140.9:37477] to existing cluster [akka://application@10.36.137.9:34351]
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.9:37477] - Received InitJoinAck message from [Actor[akka://application@10.36.137.9:34351/system/cluster/core/daemon#2068852840]] to [akka://application@10.36.140.9:37477]
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.9:37477] - Welcome from [akka://application@10.36.137.9:34351]
[info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application@10.36.137.9:34351/system/singletonManagerJanusGraphClusterLeader/JanusGraphClusterLeader]
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Younger]

[error] o.t.s.m.Database [|] ***********************************************************************
[error] o.t.s.m.Database [|] * Database initialisation has failed. Restart application to retry it *
[error] o.t.s.m.Database [|] ***********************************************************************
[error] o.t.t.TheHiveStarter [|] TheHive startup failure
org.thp.scalligraph.ScalligraphApplicationImpl$InitialisationFailure: Database initialisation failure
at org.thp.scalligraph.ScalligraphApplicationImpl.initCheck(ScalligraphApplication.scala:144)
at org.thp.scalligraph.ScalligraphApplicationImpl.database$lzycompute(ScalligraphApplication.scala:217)
at org.thp.scalligraph.ScalligraphApplicationImpl.database(ScalligraphApplication.scala:217)
at org.thp.thehive.enterprise.EnterpriseModule.init(EnterpriseModule.scala:71)
at org.thp.scalligraph.ScalligraphApplicationImpl.$anonfun$initModules$1(ScalligraphApplication.scala:269)
at org.thp.scalligraph.ScalligraphApplicationImpl.$anonfun$initModules$1$adapted(ScalligraphApplication.scala:269)
at scala.collection.immutable.List.foreach(List.scala:333)
at org.thp.scalligraph.ScalligraphApplicationImpl.initModules(ScalligraphApplication.scala:269)
at org.thp.thehive.TheHiveStarter$.startService(TheHiveStarter.scala:40)
at org.thp.thehive.TheHiveStarter$.delayedEndpoint$org$thp$thehive$TheHiveStarter$1(TheHiveStarter.scala:19)
Caused by: org.thp.scalligraph.InternalError: Database initialisation failure
at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$3(JanusDatabaseProvider.scala:194)
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:467)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:63)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:100)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:100)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49)
at kamon.instrumentation.executor.ExecutorInstrumentation$InstrumentedForkJoinPool$TimingRunnable.run(ExecutorInstrumentation.scala:698)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
[info] a.a.CoordinatedShutdown [|] Running CoordinatedShutdown with reason [ApplicationShutdownReason]
Exception in thread "main" org.thp.scalligraph.ScalligraphApplicationImpl$InitialisationFailure: Database initialisation failure
at org.thp.scalligraph.ScalligraphApplicationImpl.initCheck(ScalligraphApplication.scala:144)
at org.thp.scalligraph.ScalligraphApplicationImpl.database$lzycompute(ScalligraphApplication.scala:217)
at org.thp.scalligraph.ScalligraphApplicationImpl.database(ScalligraphApplication.scala:217)
at org.thp.thehive.enterprise.EnterpriseModule.init(EnterpriseModule.scala:71)
at org.thp.scalligraph.ScalligraphApplicationImpl.$anonfun$initModules$1(ScalligraphApplication.scala:269)
at org.thp.scalligraph.ScalligraphApplicationImpl.$anonfun$initModules$1$adapted(ScalligraphApplication.scala:269)
at scala.collection.immutable.List.foreach(List.scala:333)
at org.thp.scalligraph.ScalligraphApplicationImpl.initModules(ScalligraphApplication.scala:269)
at org.thp.thehive.TheHiveStarter$.startService(TheHiveStarter.scala:40)
at org.thp.thehive.TheHiveStarter$.delayedEndpoint$org$thp$thehive$TheHiveStarter$1(TheHiveStarter.scala:19)
at org.thp.thehive.TheHiveStarter$delayedInit$body.apply(TheHiveStarter.scala:15)
at scala.Function0.apply$mcV$sp(Function0.scala:39)
at scala.Function0.apply$mcV$sp$(Function0.scala:39)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:17)
at scala.App.$anonfun$main$1(App.scala:76)
at scala.App.$anonfun$main$1$adapted(App.scala:76)
at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:563)
at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:561)
at scala.collection.AbstractIterable.foreach(Iterable.scala:926)
at scala.App.main(App.scala:76)
at scala.App.main$(App.scala:74)
at org.thp.thehive.TheHiveStarter$.main(TheHiveStarter.scala:15)
at org.thp.thehive.TheHiveStarter.main(TheHiveStarter.scala)
Caused by: org.thp.scalligraph.InternalError: Database initialisation failure
at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$3(JanusDatabaseProvider.scala:194)
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:467)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:63)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:100)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:100)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49)
at kamon.instrumentation.executor.ExecutorInstrumentation$InstrumentedForkJoinPool$TimingRunnable.run(ExecutorInstrumentation.scala:698)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.9:37477] - Marked address [akka://application@10.36.140.9:37477] as [Leaving]
[info] a.c.s.ClusterSingletonManager [|] Exited [akka://application@10.36.140.9:37477].
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.9:37477] - Exiting completed
[info] a.c.s.ClusterSingletonManager [|] Younger observed OldestChanged: [Some(akka://application@10.36.137.9:34351) -> None]
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.9:37477] - Shutting down...
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.9:37477] - Successfully shut down
[info] a.c.s.ClusterSingletonManager [|] Self removed, stopping ClusterSingletonManager
[info] a.r.RemoteActorRefProvider$RemotingTerminator [|] Shutting down remote daemon.
[info] a.r.RemoteActorRefProvider$RemotingTerminator [|] Remote daemon shut down; proceeding with flushing remote transports.
[warn] a.s.Materializer [|] [outbound connection to [akka://application@10.36.137.9:34351], control stream] Upstream failed, cause: StreamTcpException: The connection has been aborted
[warn] a.s.Materializer [|] [outbound connection to [akka://application@10.36.137.9:34351], message stream] Upstream failed, cause: StreamTcpException: The connection has been aborted
[info] a.r.RemoteActorRefProvider$RemotingTerminator [|] Remoting shut down.
vdebergue commented 2 years ago

It seems like the application cannot create its database on Cassandra.

Could you start the deployment with replicas = 0 to make sure that Cassandra has time to initialize correctly?

Then use replicas = 1: this way, if the database schema creation fails, all the error logs will be on a single node.

If TheHive starts correctly, then increase the replicas to 2.
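Concretely, assuming the deployment is named `thehive` in the `thehive` namespace (as the outputs above suggest), that sequence would be roughly:

```shell
# Stop TheHive so Cassandra can finish initialising undisturbed
kubectl -n thehive scale deployment thehive --replicas=0

# Start a single node; if schema creation fails, all error logs land here
kubectl -n thehive scale deployment thehive --replicas=1
kubectl -n thehive logs -f deployment/thehive

# Once TheHive starts cleanly, scale back out
kubectl -n thehive scale deployment thehive --replicas=2
```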

fp-dshim commented 2 years ago

Okay, I've set replicas = 1 and deleted the existing TheHive pod that was failing. A new pod was created for the deployment. Here's the log:

Using cassandra address = 10.36.168.112
Using elasticsearch address = elasticsearch with index thehive
Using S3 http://minio:9000 bucket=thehive
Add Cortex cortex0: http://cortex:9001
Using Kubernetes with pod label selector 'app=thehive'
OpenJDK 64-Bit Server VM warning: Sharing is only supported for boot loader classes because bootstrap classpath has been appended
13:00:11.397 [main] DEBUG oshi.util.FileUtil - Reading file /proc/stat
13:00:11.536 [main] DEBUG oshi.util.FileUtil - Reading file /etc/os-release
13:00:11.537 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: NAME="Debian GNU/Linux"
13:00:11.537 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: VERSION_ID="11"
13:00:11.537 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: VERSION="11 (bullseye)"
13:00:11.538 [main] DEBUG oshi.util.FileUtil - Reading file /proc/version
13:00:11.605 [main] DEBUG oshi.util.FileUtil - Reading file /proc/cpuinfo
13:00:11.628 [main] DEBUG oshi.util.FileUtil - Reading file /etc/os-release
13:00:11.629 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: NAME="Debian GNU/Linux"
13:00:11.629 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: VERSION_ID="11"
13:00:11.630 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: VERSION="11 (bullseye)"
13:00:11.630 [main] DEBUG oshi.util.FileUtil - Reading file /proc/version
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.enterprise.FrontendModule
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.enterprise.EnterpriseModule
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.TheHiveModule
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.controllers.v0.TheHiveModuleV0
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.controllers.v1.TheHiveModuleV1
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.controllers.dav.TheHiveFSModule
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.connector.cortex.CortexModule
[info] a.e.s.Slf4jLogger [|] Slf4jLogger started
[info] a.r.a.ArteryTransport [|] Remoting started with transport [Artery tcp]; listening on address [akka://application@10.36.130.18:40473] with UID [-4145777115322323565]
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.130.18:40473] - Starting up, Akka version [2.6.18] ...
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.130.18:40473] - Registered cluster JMX MBean [akka:type=Cluster]
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.130.18:40473] - Started up successfully
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.130.18:40473] - No seed nodes found in configuration, relying on Cluster Bootstrap for joining
[info] a.c.s.SplitBrainResolver [|] SBR started. Config: strategy [KeepMajority], stable-after [20 seconds], down-all-when-unstable [15 seconds], selfUniqueAddress [akka://application@10.36.130.18:40473#-4145777115322323565], selfDc [default].
[info] a.m.c.b.ClusterBootstrap [|] ClusterBootstrap loaded through 'akka.extensions' auto starting management and bootstrap.
[info] a.m.i.HealthChecksImpl [|] Loading readiness checks [(cluster-membership,akka.management.cluster.scaladsl.ClusterMembershipCheck), (sharding,akka.cluster.sharding.ClusterShardingHealthCheck)]
[info] a.m.i.HealthChecksImpl [|] Loading liveness checks []
[info] a.m.s.AkkaManagement [|] Binding Akka Management (HTTP) endpoint to: 10.36.130.18:8558
[info] a.m.s.AkkaManagement [|] Including HTTP management routes for ClusterHttpManagementRouteProvider
[info] a.m.s.AkkaManagement [|] Including HTTP management routes for ClusterBootstrap
[info] a.m.c.b.ClusterBootstrap [|] Using self contact point address: http://10.36.130.18:8558
[info] a.m.s.AkkaManagement [|] Including HTTP management routes for HealthCheckRoutes
[info] a.m.c.b.ClusterBootstrap [|] Initiating bootstrap procedure using kubernetes-api method...
[info] a.m.c.b.ClusterBootstrap [|] Bootstrap using `akka.discovery` method: kubernetes-api
[info] a.m.s.AkkaManagement [|] Bound Akka Management (HTTP) endpoint to: 10.36.130.18:8558
[info] a.m.c.b.i.BootstrapCoordinator [|] Locating service members. Using discovery [akka.discovery.kubernetes.KubernetesApiServiceDiscovery], join decider [akka.management.cluster.bootstrap.LowestAddressJoinDecider], scheme [http]
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-130-18.thehive.pod.cluster.local,None,Some(/10.36.130.18))], filtered to [10-36-130-18.thehive.pod.cluster.local:0]
[info] a.m.c.b.c.HttpClusterBootstrapRoutes [8fbf500a0e2dc310|e6f0f0e73b0514c6] Bootstrap request from 10.36.130.18:42980: Contact Point returning 0 seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.130.18:40473] returned [0] seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-130-18.thehive.pod.cluster.local,None,Some(/10.36.130.18))], filtered to [10-36-130-18.thehive.pod.cluster.local:0]
[info] a.m.c.b.c.HttpClusterBootstrapRoutes [d0be8f2b021a1710|8bcb027c1a842841] Bootstrap request from 10.36.130.18:42980: Contact Point returning 0 seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.130.18:40473] returned [0] seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-130-18.thehive.pod.cluster.local,None,Some(/10.36.130.18))], filtered to [10-36-130-18.thehive.pod.cluster.local:0]
[info] a.m.c.b.c.HttpClusterBootstrapRoutes [75e928189a21afc5|e32fa55cbdae5c7a] Bootstrap request from 10.36.130.18:42980: Contact Point returning 0 seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.130.18:40473] returned [0] seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-130-18.thehive.pod.cluster.local,None,Some(/10.36.130.18))], filtered to [10-36-130-18.thehive.pod.cluster.local:0]
[info] a.m.c.b.c.HttpClusterBootstrapRoutes [0174a29e3f63a601|efd61657e07c9078] Bootstrap request from 10.36.130.18:42980: Contact Point returning 0 seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.130.18:40473] returned [0] seed-nodes []
[info] a.m.c.b.c.HttpClusterBootstrapRoutes [3364dd16e6a0fbbf|f07efce594b5be1d] Bootstrap request from 10.36.130.18:42980: Contact Point returning 0 seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.130.18:40473] returned [0] seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-130-18.thehive.pod.cluster.local,None,Some(/10.36.130.18))], filtered to [10-36-130-18.thehive.pod.cluster.local:0]
[info] a.m.c.b.i.BootstrapCoordinator [|] Initiating new cluster, self-joining [akka://application@10.36.130.18:40473]. Other nodes are expected to locate this cluster via continued contact-point probing.
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.130.18:40473] - Node [akka://application@10.36.130.18:40473] is JOINING itself (with roles [dc-default], version [0.0.0]) and forming new cluster
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.130.18:40473] - is the new leader among reachable nodes (more leaders may exist)
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.130.18:40473] - Leader is moving node [akka://application@10.36.130.18:40473] to [Up]
[info] a.c.s.SplitBrainResolver [|] This node is now the leader responsible for taking SBR decisions among the reachable nodes (more leaders may exist).
[info] a.c.s.ClusterSingletonManager [|] Singleton manager starting singleton actor [akka://application/system/singletonManagerJanusGraphClusterLeader/JanusGraphClusterLeader]
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Oldest]
[info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application/system/singletonManagerJanusGraphClusterLeader/JanusGraphClusterLeader]
[info] o.t.s.m.Database [|] Initialising database ...
[info] o.t.s.j.JanusDatabase [|] Loading database cassandra in cassandra
[info] c.d.o.d.i.c.DefaultMavenCoordinates [|] DataStax Java driver for Apache Cassandra(R) (com.datastax.oss:java-driver-core) version 4.13.0
[info] c.d.o.d.i.c.t.Clock [|] Using native clock for microsecond precision
[info] o.j.d.c.ExecutorServiceBuilder [|] Initiated fixed thread pool of size 10
[info] o.j.c.u.ReflectiveConfigOptionLoader [|] Loaded and initialized config classes: 9 OK out of 11 attempts in PT0.031S
[info] o.j.g.i.UniqueInstanceIdRetriever [|] Generated unique-instance-id=0a2482121-thehive-666975667f-xrwzj1
[info] c.d.o.d.i.c.t.Clock [|] Using native clock for microsecond precision
[info] o.j.d.c.ExecutorServiceBuilder [|] Initiated fixed thread pool of size 10
[info] o.j.d.Backend [|] Configuring index [search]
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.codehaus.groovy.reflection.CachedClass (file:/opt/thehive/lib/org.codehaus.groovy.groovy-2.5.14-indy.jar) to method java.lang.Object.finalize()
WARNING: Please consider reporting this to the maintainers of org.codehaus.groovy.reflection.CachedClass
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
[info] o.j.d.c.ExecutorServiceBuilder [|] Initiated fixed thread pool of size 2
[info] o.j.d.Backend [|] Configuring total store cache size: 118235836
[info] o.j.d.l.k.KCVSLog [|] Loaded unidentified ReadMarker start time 2022-05-06T13:00:27.753456Z into org.janusgraph.diskstorage.log.kcvs.KCVSLog$MessagePuller@3910e8ee
[info] o.t.s.j.JanusDatabase [|] Full-text index is available (elasticsearch:[elasticsearch]) cluster
[info] o.r.Reflections [|] Reflections took 105 ms to scan 1 urls, producing 57 keys and 231 values
[info] o.r.Reflections [|] Reflections took 395 ms to scan 1 urls, producing 282 keys and 3027 values
[info] o.r.Reflections [|] Reflections took 50 ms to scan 1 urls, producing 57 keys and 298 values
[info] o.t.s.m.Database [|] Creating database schema
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (35): Update graph: Add manageComment permission to org-admin and analyst profiles
[error] o.t.s.m.Database [|] ***********************************************************************
[error] o.t.s.m.Database [|] * Database initialisation has failed. Restart application to retry it *
[error] o.t.s.m.Database [|] ***********************************************************************
[error] o.t.t.TheHiveStarter [|] TheHive startup failure
org.thp.scalligraph.ScalligraphApplicationImpl$InitialisationFailure: Database initialisation failure
    at org.thp.scalligraph.ScalligraphApplicationImpl.initCheck(ScalligraphApplication.scala:144)
    at org.thp.scalligraph.ScalligraphApplicationImpl.database$lzycompute(ScalligraphApplication.scala:217)
    at org.thp.scalligraph.ScalligraphApplicationImpl.database(ScalligraphApplication.scala:217)
    at org.thp.thehive.enterprise.EnterpriseModule.init(EnterpriseModule.scala:71)
    at org.thp.scalligraph.ScalligraphApplicationImpl.$anonfun$initModules$1(ScalligraphApplication.scala:269)
    at org.thp.scalligraph.ScalligraphApplicationImpl.$anonfun$initModules$1$adapted(ScalligraphApplication.scala:269)
    at scala.collection.immutable.List.foreach(List.scala:333)
    at org.thp.scalligraph.ScalligraphApplicationImpl.initModules(ScalligraphApplication.scala:269)
    at org.thp.thehive.TheHiveStarter$.startService(TheHiveStarter.scala:40)
    at org.thp.thehive.TheHiveStarter$.delayedEndpoint$org$thp$thehive$TheHiveStarter$1(TheHiveStarter.scala:19)
Caused by: org.thp.scalligraph.InternalError: Database initialisation failure
    at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$16(JanusDatabaseProvider.scala:176)
    at scala.util.Failure.fold(Try.scala:247)
    at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$3(JanusDatabaseProvider.scala:178)
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:467)
    at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:63)
    at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:100)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
    at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:100)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49)
Caused by: java.util.NoSuchElementException: null
    at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs$$anon$1.$anonfun$next$1(TraversalOps.scala:78)
    at scala.Option.getOrElse(Option.scala:201)
    at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs$$anon$1.next(TraversalOps.scala:78)
    at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs.$anonfun$head$1(TraversalOps.scala:129)
    at kamon.ContextStorage.runWithContext(ContextStorage.scala:67)
    at kamon.ContextStorage.runWithContext$(ContextStorage.scala:64)
    at kamon.Kamon$.runWithContext(Kamon.scala:19)
    at kamon.ContextStorage.runWithContextEntry(ContextStorage.scala:79)
    at kamon.ContextStorage.runWithContextEntry$(ContextStorage.scala:78)
    at kamon.Kamon$.runWithContextEntry(Kamon.scala:19)
[info] a.a.CoordinatedShutdown [|] Running CoordinatedShutdown with reason [ApplicationShutdownReason]
Exception in thread "main" org.thp.scalligraph.ScalligraphApplicationImpl$InitialisationFailure: Database initialisation failure
    at org.thp.scalligraph.ScalligraphApplicationImpl.initCheck(ScalligraphApplication.scala:144)
    at org.thp.scalligraph.ScalligraphApplicationImpl.database$lzycompute(ScalligraphApplication.scala:217)
    at org.thp.scalligraph.ScalligraphApplicationImpl.database(ScalligraphApplication.scala:217)
    at org.thp.thehive.enterprise.EnterpriseModule.init(EnterpriseModule.scala:71)
    at org.thp.scalligraph.ScalligraphApplicationImpl.$anonfun$initModules$1(ScalligraphApplication.scala:269)
    at org.thp.scalligraph.ScalligraphApplicationImpl.$anonfun$initModules$1$adapted(ScalligraphApplication.scala:269)
    at scala.collection.immutable.List.foreach(List.scala:333)
    at org.thp.scalligraph.ScalligraphApplicationImpl.initModules(ScalligraphApplication.scala:269)
    at org.thp.thehive.TheHiveStarter$.startService(TheHiveStarter.scala:40)
    at org.thp.thehive.TheHiveStarter$.delayedEndpoint$org$thp$thehive$TheHiveStarter$1(TheHiveStarter.scala:19)
    at org.thp.thehive.TheHiveStarter$delayedInit$body.apply(TheHiveStarter.scala:15)
    at scala.Function0.apply$mcV$sp(Function0.scala:39)
    at scala.Function0.apply$mcV$sp$(Function0.scala:39)
    at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:17)
    at scala.App.$anonfun$main$1(App.scala:76)
    at scala.App.$anonfun$main$1$adapted(App.scala:76)
    at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:563)
    at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:561)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:926)
    at scala.App.main(App.scala:76)
    at scala.App.main$(App.scala:74)
    at org.thp.thehive.TheHiveStarter$.main(TheHiveStarter.scala:15)
    at org.thp.thehive.TheHiveStarter.main(TheHiveStarter.scala)
Caused by: org.thp.scalligraph.InternalError: Database initialisation failure
    at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$16(JanusDatabaseProvider.scala:176)
    at scala.util.Failure.fold(Try.scala:247)
    at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$3(JanusDatabaseProvider.scala:178)
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:467)
    at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:63)
    at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:100)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
    at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:100)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49)
    at kamon.instrumentation.executor.ExecutorInstrumentation$InstrumentedForkJoinPool$TimingRunnable.run(ExecutorInstrumentation.scala:698)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
    at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
    at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
    at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
    at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
    at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
Caused by: java.util.NoSuchElementException
    at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs$$anon$1.$anonfun$next$1(TraversalOps.scala:78)
    at scala.Option.getOrElse(Option.scala:201)
    at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs$$anon$1.next(TraversalOps.scala:78)
    at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs.$anonfun$head$1(TraversalOps.scala:129)
    at kamon.ContextStorage.runWithContext(ContextStorage.scala:67)
    at kamon.ContextStorage.runWithContext$(ContextStorage.scala:64)
    at kamon.Kamon$.runWithContext(Kamon.scala:19)
    at kamon.ContextStorage.runWithContextEntry(ContextStorage.scala:79)
    at kamon.ContextStorage.runWithContextEntry$(ContextStorage.scala:78)
    at kamon.Kamon$.runWithContextEntry(Kamon.scala:19)
    at kamon.ContextStorage.runWithSpan(ContextStorage.scala:121)
    at kamon.ContextStorage.runWithSpan$(ContextStorage.scala:119)
    at kamon.Kamon$.runWithSpan(Kamon.scala:19)
    at org.thp.scalligraph.utils.Tracing$.span(Tracing.scala:32)
    at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs.head(TraversalOps.scala:127)
    at org.thp.scalligraph.models.BaseDatabase.getDate$1(Database.scala:276)
    at org.thp.scalligraph.models.BaseDatabase.$anonfun$pagedTraversalIds$5(Database.scala:286)
    at org.thp.scalligraph.janus.JanusDatabase.roTransaction(JanusDatabase.scala:197)
    at org.thp.scalligraph.models.BaseDatabase.$anonfun$pagedTraversalIds$4(Database.scala:280)
    at scala.collection.Iterator$UnfoldIterator.hasNext(Iterator.scala:1272)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:594)
    at scala.collection.Iterator$$anon$16.hasNext(Iterator.scala:816)
    at scala.collection.Iterator$$anon$9.hasNext(Iterator.scala:576)
    at scala.collection.IterableOnceOps.find(IterableOnce.scala:620)
    at scala.collection.IterableOnceOps.find$(IterableOnce.scala:618)
    at scala.collection.AbstractIterator.find(Iterator.scala:1293)
    at org.thp.scalligraph.models.UpdateGraphVertices.execute(Operation.scala:45)
    at org.thp.scalligraph.models.Operations.$anonfun$execute$19(Operation.scala:156)
    at scala.collection.LinearSeqOps.foldLeft(LinearSeq.scala:169)
    at scala.collection.LinearSeqOps.foldLeft$(LinearSeq.scala:165)
    at scala.collection.immutable.List.foldLeft(List.scala:79)
    at org.thp.scalligraph.models.Operations.execute(Operation.scala:152)
    at org.thp.scalligraph.models.UpdatableSchema.update(Schema.scala:20)
    at org.thp.scalligraph.models.UpdatableSchema.update$(Schema.scala:19)
    at org.thp.thehive.enterprise.models.EnterpriseSchemaDefinition$.update(EnterpriseSchemaDefinition.scala:20)
    at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$15(JanusDatabaseProvider.scala:159)
    at org.thp.scalligraph.package$RichSeq.$anonfun$toTry$3(package.scala:25)
    at scala.collection.IterableOnceOps.foldLeft(IterableOnce.scala:646)
    at scala.collection.IterableOnceOps.foldLeft$(IterableOnce.scala:642)
    at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1293)
    at org.thp.scalligraph.package$RichSeq.toTry(package.scala:24)
    at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$14(JanusDatabaseProvider.scala:159)
    at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$14$adapted(JanusDatabaseProvider.scala:159)
    at scala.util.Success.flatMap(Try.scala:258)
    at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$3(JanusDatabaseProvider.scala:159)
    ... 14 more
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.130.18:40473] - Marked address [akka://application@10.36.130.18:40473] as [Leaving]
[info] a.c.s.ClusterSingletonManager [|] Exited [akka://application@10.36.130.18:40473].
[info] a.c.s.ClusterSingletonManager [|] Oldest observed OldestChanged: [akka://application@10.36.130.18:40473 -> None]
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Oldest -> WasOldest]
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.130.18:40473] - Leader is moving node [akka://application@10.36.130.18:40473] to [Exiting]
[info] a.c.s.ClusterSingletonManager [|] Singleton manager stopping singleton actor [akka://application/system/singletonManagerJanusGraphClusterLeader/JanusGraphClusterLeader]
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [WasOldest -> Stopping]
[info] a.c.s.ClusterSingletonManager [|] Singleton actor [akka://application/system/singletonManagerJanusGraphClusterLeader/JanusGraphClusterLeader] was terminated
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.130.18:40473] - Exiting completed
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.130.18:40473] - Shutting down...
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.130.18:40473] - Successfully shut down
[info] a.r.RemoteActorRefProvider$RemotingTerminator [|] Shutting down remote daemon.
[info] a.r.RemoteActorRefProvider$RemotingTerminator [|] Remote daemon shut down; proceeding with flushing remote transports.
[info] a.r.RemoteActorRefProvider$RemotingTerminator [|] Remoting shut down.
fp-dshim commented 2 years ago

I've also tried scaling the thehive deployment down to replicas = 0 and confirmed the pod was removed, then scaled it back up to replicas = 1. I get the same error.
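For reference, the scale-down/scale-up cycle described above can be sketched with `kubectl`. This assumes the deployment is named `thehive` in the `thehive` namespace and the pods carry the `app=thehive` label, as in the reference manifest; adjust names if your setup differs:

```shell
# Scale the deployment to zero and wait until the pod is actually gone
kubectl -n thehive scale deployment thehive --replicas=0
kubectl -n thehive wait --for=delete pod -l app=thehive --timeout=120s

# Bring it back up and follow the startup logs
kubectl -n thehive scale deployment thehive --replicas=1
kubectl -n thehive logs -f deployment/thehive
```

Waiting for pod deletion before scaling back up avoids a brief window where the old and new pods overlap.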

fp-dshim commented 2 years ago

I'd be happy to hop on a Zoom call and share my screen.

vdebergue commented 2 years ago

I didn't manage to reproduce the problem locally on a fresh Kubernetes cluster, but from the stack trace it looks like the issue comes from invalid data in the database:

During the schema update, TheHive tries to read the `_createdAt` property on a piece of data, but that property seems to be missing in your database. TheHive normally creates this property automatically for all data.

Could you try deleting the Cassandra data? (Remove the pod if you started it with an `emptyDir`.) Then restart TheHive.
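The wipe-and-restart procedure above can be sketched as follows. This assumes Cassandra runs as a StatefulSet with a pod named `cassandra-0` backed by `emptyDir` storage in the `thehive` namespace (as in the reference manifest), so deleting the pod discards its data:

```shell
# Stop TheHive first so it does not reconnect mid-wipe
kubectl -n thehive scale deployment thehive --replicas=0

# Deleting the Cassandra pod discards its emptyDir data;
# the StatefulSet recreates the pod with an empty volume
kubectl -n thehive delete pod cassandra-0
kubectl -n thehive wait --for=condition=Ready pod/cassandra-0 --timeout=300s

# Restart TheHive so it re-initialises the schema from scratch
kubectl -n thehive scale deployment thehive --replicas=1
```

Note that if Cassandra were backed by a PersistentVolumeClaim instead of `emptyDir`, deleting the pod alone would not remove the data; the PVC would also need to be deleted.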

fp-dshim commented 2 years ago

Ok, thanks. I'll try that: set the thehive replicas to 0, delete cassandra-0, and once cassandra-0 is recreated, set the thehive replicas back to 1. Note that I started with a brand-new K8s namespace using https://docs.strangebee.com/thehive/setup/installation/kubernetes.yml; no other changes were made.

fp-dshim commented 2 years ago

I am getting the same error:

Using cassandra address = 10.36.168.112
Using elasticsearch address = elasticsearch with index thehive
Using S3 http://minio:9000 bucket=thehive
Add Cortex cortex0: http://cortex:9001
Using Kubernetes with pod label selector 'app=thehive'
OpenJDK 64-Bit Server VM warning: Sharing is only supported for boot loader classes because bootstrap classpath has been appended
14:01:56.131 [main] DEBUG oshi.util.FileUtil - Reading file /proc/stat
14:01:56.324 [main] DEBUG oshi.util.FileUtil - Reading file /etc/os-release
14:01:56.325 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: NAME="Debian GNU/Linux"
14:01:56.325 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: VERSION_ID="11"
14:01:56.325 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: VERSION="11 (bullseye)"
14:01:56.326 [main] DEBUG oshi.util.FileUtil - Reading file /proc/version
14:01:56.401 [main] DEBUG oshi.util.FileUtil - Reading file /proc/cpuinfo
14:01:56.425 [main] DEBUG oshi.util.FileUtil - Reading file /etc/os-release
14:01:56.425 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: NAME="Debian GNU/Linux"
14:01:56.426 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: VERSION_ID="11"
14:01:56.426 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: VERSION="11 (bullseye)"
14:01:56.426 [main] DEBUG oshi.util.FileUtil - Reading file /proc/version
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.enterprise.FrontendModule
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.enterprise.EnterpriseModule
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.TheHiveModule
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.controllers.v0.TheHiveModuleV0
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.controllers.v1.TheHiveModuleV1
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.controllers.dav.TheHiveFSModule
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.connector.cortex.CortexModule
[info] a.e.s.Slf4jLogger [|] Slf4jLogger started
[info] a.r.a.ArteryTransport [|] Remoting started with transport [Artery tcp]; listening on address [akka://application@10.36.140.35:37053] with UID [1107512433779586378]
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.35:37053] - Starting up, Akka version [2.6.18] ...
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.35:37053] - Registered cluster JMX MBean [akka:type=Cluster]
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.35:37053] - Started up successfully
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.35:37053] - No seed nodes found in configuration, relying on Cluster Bootstrap for joining
[info] a.c.s.SplitBrainResolver [|] SBR started. Config: strategy [KeepMajority], stable-after [20 seconds], down-all-when-unstable [15 seconds], selfUniqueAddress [akka://application@10.36.140.35:37053#1107512433779586378], selfDc [default].
[info] a.m.c.b.ClusterBootstrap [|] ClusterBootstrap loaded through 'akka.extensions' auto starting management and bootstrap.
[info] a.m.i.HealthChecksImpl [|] Loading readiness checks [(cluster-membership,akka.management.cluster.scaladsl.ClusterMembershipCheck), (sharding,akka.cluster.sharding.ClusterShardingHealthCheck)]
[info] a.m.i.HealthChecksImpl [|] Loading liveness checks []
[info] a.m.s.AkkaManagement [|] Binding Akka Management (HTTP) endpoint to: 10.36.140.35:8558
[info] a.m.s.AkkaManagement [|] Including HTTP management routes for ClusterHttpManagementRouteProvider
[info] a.m.s.AkkaManagement [|] Including HTTP management routes for ClusterBootstrap
[info] a.m.c.b.ClusterBootstrap [|] Using self contact point address: http://10.36.140.35:8558
[info] a.m.s.AkkaManagement [|] Including HTTP management routes for HealthCheckRoutes
[info] a.m.c.b.ClusterBootstrap [|] Initiating bootstrap procedure using kubernetes-api method...
[info] a.m.c.b.ClusterBootstrap [|] Bootstrap using `akka.discovery` method: kubernetes-api
[info] a.m.s.AkkaManagement [|] Bound Akka Management (HTTP) endpoint to: 10.36.140.35:8558
[info] a.m.c.b.i.BootstrapCoordinator [|] Locating service members. Using discovery [akka.discovery.kubernetes.KubernetesApiServiceDiscovery], join decider [akka.management.cluster.bootstrap.LowestAddressJoinDecider], scheme [http]
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-140-35.thehive.pod.cluster.local,None,Some(/10.36.140.35))], filtered to [10-36-140-35.thehive.pod.cluster.local:0]
[info] a.m.c.b.c.HttpClusterBootstrapRoutes [0c266ee54c50ea26|2fa473c6344fe039] Bootstrap request from 10.36.140.35:60220: Contact Point returning 0 seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.140.35:37053] returned [0] seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-140-35.thehive.pod.cluster.local,None,Some(/10.36.140.35))], filtered to [10-36-140-35.thehive.pod.cluster.local:0]
[info] a.m.c.b.c.HttpClusterBootstrapRoutes [c6367630e5f28e35|06601707ae57cbbe] Bootstrap request from 10.36.140.35:60220: Contact Point returning 0 seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.140.35:37053] returned [0] seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-140-35.thehive.pod.cluster.local,None,Some(/10.36.140.35))], filtered to [10-36-140-35.thehive.pod.cluster.local:0]
[info] a.m.c.b.c.HttpClusterBootstrapRoutes [b5580bafe009ecf4|51d5538228db7549] Bootstrap request from 10.36.140.35:60220: Contact Point returning 0 seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.140.35:37053] returned [0] seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-140-35.thehive.pod.cluster.local,None,Some(/10.36.140.35))], filtered to [10-36-140-35.thehive.pod.cluster.local:0]
[info] a.m.c.b.c.HttpClusterBootstrapRoutes [737d5fac0e772124|fe7c5b31c1bee6fc] Bootstrap request from 10.36.140.35:60220: Contact Point returning 0 seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.140.35:37053] returned [0] seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-140-35.thehive.pod.cluster.local,None,Some(/10.36.140.35))], filtered to [10-36-140-35.thehive.pod.cluster.local:0]
[info] a.m.c.b.c.HttpClusterBootstrapRoutes [15f446abb5564674|e5dff5302e2c7022] Bootstrap request from 10.36.140.35:60220: Contact Point returning 0 seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.140.35:37053] returned [0] seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-140-35.thehive.pod.cluster.local,None,Some(/10.36.140.35))], filtered to [10-36-140-35.thehive.pod.cluster.local:0]
[info] a.m.c.b.i.BootstrapCoordinator [|] Initiating new cluster, self-joining [akka://application@10.36.140.35:37053]. Other nodes are expected to locate this cluster via continued contact-point probing.
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.35:37053] - Node [akka://application@10.36.140.35:37053] is JOINING itself (with roles [dc-default], version [0.0.0]) and forming new cluster
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.35:37053] - is the new leader among reachable nodes (more leaders may exist)
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.35:37053] - Leader is moving node [akka://application@10.36.140.35:37053] to [Up]
[info] a.c.s.SplitBrainResolver [|] This node is now the leader responsible for taking SBR decisions among the reachable nodes (more leaders may exist).
[info] a.c.s.ClusterSingletonManager [|] Singleton manager starting singleton actor [akka://application/system/singletonManagerJanusGraphClusterLeader/JanusGraphClusterLeader]
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Oldest]
[info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application/system/singletonManagerJanusGraphClusterLeader/JanusGraphClusterLeader]
[info] o.t.s.m.Database [|] Initialising database ...
[info] o.t.s.j.JanusDatabase [|] Loading database cassandra in cassandra
[info] c.d.o.d.i.c.DefaultMavenCoordinates [|] DataStax Java driver for Apache Cassandra(R) (com.datastax.oss:java-driver-core) version 4.13.0
[info] c.d.o.d.i.c.t.Clock [|] Using native clock for microsecond precision
[info] o.j.d.c.ExecutorServiceBuilder [|] Initiated fixed thread pool of size 10
[info] o.j.c.u.ReflectiveConfigOptionLoader [|] Loaded and initialized config classes: 9 OK out of 11 attempts in PT0.038S
[info] o.j.d.c.b.ReadConfigurationBuilder [|] Set default timestamp provider MICRO
[info] o.j.g.i.UniqueInstanceIdRetriever [|] Generated unique-instance-id=0a248c231-thehive-666975667f-kgs7q1
[info] c.d.o.d.i.c.t.Clock [|] Using native clock for microsecond precision
[info] o.j.d.c.ExecutorServiceBuilder [|] Initiated fixed thread pool of size 10
[info] o.j.d.Backend [|] Configuring index [search]
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.codehaus.groovy.reflection.CachedClass (file:/opt/thehive/lib/org.codehaus.groovy.groovy-2.5.14-indy.jar) to method java.lang.Object.finalize()
WARNING: Please consider reporting this to the maintainers of org.codehaus.groovy.reflection.CachedClass
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
[info] o.j.d.c.ExecutorServiceBuilder [|] Initiated fixed thread pool of size 2
[info] o.j.d.Backend [|] Configuring total store cache size: 123128152
[info] o.j.d.l.k.KCVSLog [|] Loaded unidentified ReadMarker start time 2022-05-06T14:02:22.525241Z into org.janusgraph.diskstorage.log.kcvs.KCVSLog$MessagePuller@59593b1f
[info] o.t.s.j.JanusDatabase [|] Full-text index is available (elasticsearch:[elasticsearch]) cluster
[info] o.r.Reflections [|] Reflections took 87 ms to scan 1 urls, producing 57 keys and 231 values
[info] o.r.Reflections [|] Reflections took 391 ms to scan 1 urls, producing 282 keys and 3027 values
[info] o.r.Reflections [|] Reflections took 24 ms to scan 1 urls, producing 57 keys and 298 values
[info] o.t.s.m.Database [|] Creating database schema
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (1): Create initial values
[info] o.t.s.m.Operations [909a8661374618f4|d6f9fabceceddf27] Adding initial values for GDPRDummy
[info] o.t.s.m.Operations [7bb69f33b9caeda5|0a9bd91a382c674c] Adding initial values for Branding
[info] o.t.s.m.Operations [9e6f850bafac8008|230e892612d93ccb] Adding initial values for ResetPasswordToken
[info] o.t.s.m.Operations [ecb32d52ba0f03af|e33f633f921d4ef6] Adding initial values for LicenseData
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (2): Update graph: Add taskRule in share
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (3): Update graph: Add observableRule in share
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (4): Update graph: Add taskRule in share
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (5): Update graph: Add observableRule in share
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (6): Update graph: Add linkType in organisation edges
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (7): Update graph: Add vertex for each case custom fields
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (8): Update graph: Remove edge of case custom fields
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (9): Update graph: Add vertex for each alert custom fields
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (10): Update graph: Remove edge of alert custom fields
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (11): Update graph: Add vertex for each case template custom fields
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (12): Update graph: Remove edge of case template custom fields
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (13): Remove property order from AlertCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (14): Remove property stringValue from AlertCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (15): Remove property booleanValue from AlertCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (16): Remove property integerValue from AlertCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (17): Remove property floatValue from AlertCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (18): Remove property dateValue from AlertCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (19): Remove property order from CaseCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (20): Remove property stringValue from CaseCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (21): Remove property booleanValue from CaseCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (22): Remove property integerValue from CaseCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (23): Remove property floatValue from CaseCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (24): Remove property dateValue from CaseCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (25): Remove property order from CaseTemplateCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (26): Remove property stringValue from CaseTemplateCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (27): Remove property booleanValue from CaseTemplateCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (28): Remove property integerValue from CaseTemplateCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (29): Remove property floatValue from CaseTemplateCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (30): Remove property dateValue from CaseTemplateCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (31): Remove edge label CaseCustomField
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (32): Remove edge label AlertCustomField
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (33): Remove edge label CaseTemplateCustomField
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (34): Update graph: Add default group to custom field
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (35): Update graph: Add manageComment permission to org-admin and analyst profiles
[error] o.t.s.m.Database [|] ***********************************************************************
[error] o.t.s.m.Database [|] * Database initialisation has failed. Restart application to retry it *
[error] o.t.s.m.Database [|] ***********************************************************************
[error] o.t.t.TheHiveStarter [|] TheHive startup failure
org.thp.scalligraph.ScalligraphApplicationImpl$InitialisationFailure: Database initialisation failure
    at org.thp.scalligraph.ScalligraphApplicationImpl.initCheck(ScalligraphApplication.scala:144)
    at org.thp.scalligraph.ScalligraphApplicationImpl.database$lzycompute(ScalligraphApplication.scala:217)
    at org.thp.scalligraph.ScalligraphApplicationImpl.database(ScalligraphApplication.scala:217)
    at org.thp.thehive.enterprise.EnterpriseModule.init(EnterpriseModule.scala:71)
    at org.thp.scalligraph.ScalligraphApplicationImpl.$anonfun$initModules$1(ScalligraphApplication.scala:269)
    at org.thp.scalligraph.ScalligraphApplicationImpl.$anonfun$initModules$1$adapted(ScalligraphApplication.scala:269)
    at scala.collection.immutable.List.foreach(List.scala:333)
    at org.thp.scalligraph.ScalligraphApplicationImpl.initModules(ScalligraphApplication.scala:269)
    at org.thp.thehive.TheHiveStarter$.startService(TheHiveStarter.scala:40)
    at org.thp.thehive.TheHiveStarter$.delayedEndpoint$org$thp$thehive$TheHiveStarter$1(TheHiveStarter.scala:19)
Caused by: org.thp.scalligraph.InternalError: Database initialisation failure
    at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$16(JanusDatabaseProvider.scala:176)
    at scala.util.Failure.fold(Try.scala:247)
    at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$3(JanusDatabaseProvider.scala:178)
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:467)
    at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:63)
    at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:100)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
    at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:100)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49)
Caused by: java.util.NoSuchElementException: null
    at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs$$anon$1.$anonfun$next$1(TraversalOps.scala:78)
    at scala.Option.getOrElse(Option.scala:201)
    at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs$$anon$1.next(TraversalOps.scala:78)
    at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs.$anonfun$head$1(TraversalOps.scala:129)
    at kamon.ContextStorage.runWithContext(ContextStorage.scala:67)
    at kamon.ContextStorage.runWithContext$(ContextStorage.scala:64)
    at kamon.Kamon$.runWithContext(Kamon.scala:19)
    at kamon.ContextStorage.runWithContextEntry(ContextStorage.scala:79)
    at kamon.ContextStorage.runWithContextEntry$(ContextStorage.scala:78)
    at kamon.Kamon$.runWithContextEntry(Kamon.scala:19)
[info] a.a.CoordinatedShutdown [|] Running CoordinatedShutdown with reason [ApplicationShutdownReason]
Exception in thread "main" org.thp.scalligraph.ScalligraphApplicationImpl$InitialisationFailure: Database initialisation failure
    at org.thp.scalligraph.ScalligraphApplicationImpl.initCheck(ScalligraphApplication.scala:144)
    at org.thp.scalligraph.ScalligraphApplicationImpl.database$lzycompute(ScalligraphApplication.scala:217)
    at org.thp.scalligraph.ScalligraphApplicationImpl.database(ScalligraphApplication.scala:217)
    at org.thp.thehive.enterprise.EnterpriseModule.init(EnterpriseModule.scala:71)
    at org.thp.scalligraph.ScalligraphApplicationImpl.$anonfun$initModules$1(ScalligraphApplication.scala:269)
    at org.thp.scalligraph.ScalligraphApplicationImpl.$anonfun$initModules$1$adapted(ScalligraphApplication.scala:269)
    at scala.collection.immutable.List.foreach(List.scala:333)
    at org.thp.scalligraph.ScalligraphApplicationImpl.initModules(ScalligraphApplication.scala:269)
    at org.thp.thehive.TheHiveStarter$.startService(TheHiveStarter.scala:40)
    at org.thp.thehive.TheHiveStarter$.delayedEndpoint$org$thp$thehive$TheHiveStarter$1(TheHiveStarter.scala:19)
    at org.thp.thehive.TheHiveStarter$delayedInit$body.apply(TheHiveStarter.scala:15)
    at scala.Function0.apply$mcV$sp(Function0.scala:39)
    at scala.Function0.apply$mcV$sp$(Function0.scala:39)
    at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:17)
    at scala.App.$anonfun$main$1(App.scala:76)
    at scala.App.$anonfun$main$1$adapted(App.scala:76)
    at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:563)
    at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:561)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:926)
    at scala.App.main(App.scala:76)
    at scala.App.main$(App.scala:74)
    at org.thp.thehive.TheHiveStarter$.main(TheHiveStarter.scala:15)
    at org.thp.thehive.TheHiveStarter.main(TheHiveStarter.scala)
Caused by: org.thp.scalligraph.InternalError: Database initialisation failure
    at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$16(JanusDatabaseProvider.scala:176)
    at scala.util.Failure.fold(Try.scala:247)
    at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$3(JanusDatabaseProvider.scala:178)
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:467)
    at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:63)
    at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:100)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
    at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:100)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49)
    at kamon.instrumentation.executor.ExecutorInstrumentation$InstrumentedForkJoinPool$TimingRunnable.run(ExecutorInstrumentation.scala:698)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
    at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
    at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
    at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
    at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
    at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
Caused by: java.util.NoSuchElementException
    at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs$$anon$1.$anonfun$next$1(TraversalOps.scala:78)
    at scala.Option.getOrElse(Option.scala:201)
    at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs$$anon$1.next(TraversalOps.scala:78)
    at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs.$anonfun$head$1(TraversalOps.scala:129)
    at kamon.ContextStorage.runWithContext(ContextStorage.scala:67)
    at kamon.ContextStorage.runWithContext$(ContextStorage.scala:64)
    at kamon.Kamon$.runWithContext(Kamon.scala:19)
    at kamon.ContextStorage.runWithContextEntry(ContextStorage.scala:79)
    at kamon.ContextStorage.runWithContextEntry$(ContextStorage.scala:78)
    at kamon.Kamon$.runWithContextEntry(Kamon.scala:19)
    at kamon.ContextStorage.runWithSpan(ContextStorage.scala:121)
    at kamon.ContextStorage.runWithSpan$(ContextStorage.scala:119)
    at kamon.Kamon$.runWithSpan(Kamon.scala:19)
    at org.thp.scalligraph.utils.Tracing$.span(Tracing.scala:32)
    at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs.head(TraversalOps.scala:127)
    at org.thp.scalligraph.models.BaseDatabase.getDate$1(Database.scala:276)
    at org.thp.scalligraph.models.BaseDatabase.$anonfun$pagedTraversalIds$5(Database.scala:286)
    at org.thp.scalligraph.janus.JanusDatabase.roTransaction(JanusDatabase.scala:197)
    at org.thp.scalligraph.models.BaseDatabase.$anonfun$pagedTraversalIds$4(Database.scala:280)
    at scala.collection.Iterator$UnfoldIterator.hasNext(Iterator.scala:1272)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:594)
    at scala.collection.Iterator$$anon$16.hasNext(Iterator.scala:816)
    at scala.collection.Iterator$$anon$9.hasNext(Iterator.scala:576)
    at scala.collection.IterableOnceOps.find(IterableOnce.scala:620)
    at scala.collection.IterableOnceOps.find$(IterableOnce.scala:618)
    at scala.collection.AbstractIterator.find(Iterator.scala:1293)
    at org.thp.scalligraph.models.UpdateGraphVertices.execute(Operation.scala:45)
    at org.thp.scalligraph.models.Operations.$anonfun$execute$19(Operation.scala:156)
    at scala.collection.LinearSeqOps.foldLeft(LinearSeq.scala:169)
    at scala.collection.LinearSeqOps.foldLeft$(LinearSeq.scala:165)
    at scala.collection.immutable.List.foldLeft(List.scala:79)
    at org.thp.scalligraph.models.Operations.execute(Operation.scala:152)
    at org.thp.scalligraph.models.UpdatableSchema.update(Schema.scala:20)
    at org.thp.scalligraph.models.UpdatableSchema.update$(Schema.scala:19)
    at org.thp.thehive.enterprise.models.EnterpriseSchemaDefinition$.update(EnterpriseSchemaDefinition.scala:20)
    at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$15(JanusDatabaseProvider.scala:159)
    at org.thp.scalligraph.package$RichSeq.$anonfun$toTry$3(package.scala:25)
    at scala.collection.IterableOnceOps.foldLeft(IterableOnce.scala:646)
    at scala.collection.IterableOnceOps.foldLeft$(IterableOnce.scala:642)
    at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1293)
    at org.thp.scalligraph.package$RichSeq.toTry(package.scala:24)
    at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$14(JanusDatabaseProvider.scala:159)
    at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$14$adapted(JanusDatabaseProvider.scala:159)
    at scala.util.Success.flatMap(Try.scala:258)
    at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$3(JanusDatabaseProvider.scala:159)
    ... 14 more
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.35:37053] - Marked address [akka://application@10.36.140.35:37053] as [Leaving]
[info] a.c.s.ClusterSingletonManager [|] Exited [akka://application@10.36.140.35:37053].
[info] a.c.s.ClusterSingletonManager [|] Oldest observed OldestChanged: [akka://application@10.36.140.35:37053 -> None]
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Oldest -> WasOldest]
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.35:37053] - Leader is moving node [akka://application@10.36.140.35:37053] to [Exiting]
[info] a.c.s.ClusterSingletonManager [|] Singleton manager stopping singleton actor [akka://application/system/singletonManagerJanusGraphClusterLeader/JanusGraphClusterLeader]
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [WasOldest -> Stopping]
[info] a.c.s.ClusterSingletonManager [|] Singleton actor [akka://application/system/singletonManagerJanusGraphClusterLeader/JanusGraphClusterLeader] was terminated
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.35:37053] - Exiting completed
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.35:37053] - Shutting down...
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.35:37053] - Successfully shut down
[info] a.r.RemoteActorRefProvider$RemotingTerminator [|] Shutting down remote daemon.
[info] a.r.RemoteActorRefProvider$RemotingTerminator [|] Remote daemon shut down; proceeding with flushing remote transports.
[info] a.r.RemoteActorRefProvider$RemotingTerminator [|] Remoting shut down.
vdebergue commented 2 years ago

OK, one last idea: try deleting the data from both Elasticsearch and Cassandra. Since Elasticsearch is the indexing engine, it may return references to data that no longer exist in Cassandra, which can leave the two stores inconsistent.
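For anyone hitting the same state, a minimal sketch of wiping both stores from inside the cluster. The service names (`elasticsearch`, `cassandra`), index pattern (`thehive*`), and keyspace name (`thehive`) are assumptions based on this thread's configuration; verify yours before running, as this permanently deletes all TheHive data.

```shell
# ASSUMPTION: run from a pod/host that can reach both services in the
# 'thehive' namespace; adjust names to match your deployment.

# Drop the TheHive indices from Elasticsearch (index pattern assumed)
curl -XDELETE "http://elasticsearch:9200/thehive*"

# Drop the TheHive keyspace from Cassandra (keyspace name assumed)
kubectl -n thehive exec cassandra-0 -- \
  cqlsh -e "DROP KEYSPACE IF EXISTS thehive;"
```

On the next start, TheHive should recreate the schema from scratch, which avoids the index/storage mismatch described above.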

fp-dshim commented 2 years ago

I scaled the thehive deployment to replicas = 0, deleted the cassandra-0 and elasticsearch-0 pods, and waited a few minutes. Then I scaled the thehive deployment back to replicas = 1.
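The sequence above can be sketched with kubectl. The deployment name (`thehive`), namespace, and pod names follow this thread; they are assumptions to adapt to your own manifests. Note that deleting a StatefulSet pod only restarts it, so unless the backing PersistentVolumeClaims are also removed, the old data survives.

```shell
# ASSUMPTION: everything lives in the 'thehive' namespace with these names.

# Stop TheHive
kubectl -n thehive scale deployment thehive --replicas=0

# Restart the storage pods (their StatefulSets recreate them)
kubectl -n thehive delete pod cassandra-0 elasticsearch-0

# Wait for the storage pods to come back Ready
kubectl -n thehive wait --for=condition=Ready pod/cassandra-0 pod/elasticsearch-0 --timeout=300s

# Start TheHive again
kubectl -n thehive scale deployment thehive --replicas=1
```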

Using cassandra address = 10.36.168.112
Using elasticsearch address = elasticsearch with index thehive
Using S3 http://minio:9000 bucket=thehive
Add Cortex cortex0: http://cortex:9001
Using Kubernetes with pod label selector 'app=thehive'
OpenJDK 64-Bit Server VM warning: Sharing is only supported for boot loader classes because bootstrap classpath has been appended
14:19:41.209 [main] DEBUG oshi.util.FileUtil - Reading file /proc/stat
14:19:41.384 [main] DEBUG oshi.util.FileUtil - Reading file /etc/os-release
14:19:41.384 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: NAME="Debian GNU/Linux"
14:19:41.384 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: VERSION_ID="11"
14:19:41.385 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: VERSION="11 (bullseye)"
14:19:41.386 [main] DEBUG oshi.util.FileUtil - Reading file /proc/version
14:19:41.454 [main] DEBUG oshi.util.FileUtil - Reading file /proc/cpuinfo
14:19:41.477 [main] DEBUG oshi.util.FileUtil - Reading file /etc/os-release
14:19:41.477 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: NAME="Debian GNU/Linux"
14:19:41.478 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: VERSION_ID="11"
14:19:41.478 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: VERSION="11 (bullseye)"
14:19:41.478 [main] DEBUG oshi.util.FileUtil - Reading file /proc/version
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.enterprise.FrontendModule
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.enterprise.EnterpriseModule
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.TheHiveModule
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.controllers.v0.TheHiveModuleV0
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.controllers.v1.TheHiveModuleV1
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.controllers.dav.TheHiveFSModule
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.connector.cortex.CortexModule
[info] a.e.s.Slf4jLogger [|] Slf4jLogger started
[info] a.r.a.ArteryTransport [|] Remoting started with transport [Artery tcp]; listening on address [akka://application@10.36.130.19:33771] with UID [-1421942526853796604]
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.130.19:33771] - Starting up, Akka version [2.6.18] ...
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.130.19:33771] - Registered cluster JMX MBean [akka:type=Cluster]
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.130.19:33771] - Started up successfully
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.130.19:33771] - No seed nodes found in configuration, relying on Cluster Bootstrap for joining
[info] a.c.s.SplitBrainResolver [|] SBR started. Config: strategy [KeepMajority], stable-after [20 seconds], down-all-when-unstable [15 seconds], selfUniqueAddress [akka://application@10.36.130.19:33771#-1421942526853796604], selfDc [default].
[info] a.m.c.b.ClusterBootstrap [|] ClusterBootstrap loaded through 'akka.extensions' auto starting management and bootstrap.
[info] a.m.i.HealthChecksImpl [|] Loading readiness checks [(cluster-membership,akka.management.cluster.scaladsl.ClusterMembershipCheck), (sharding,akka.cluster.sharding.ClusterShardingHealthCheck)]
[info] a.m.i.HealthChecksImpl [|] Loading liveness checks []
[info] a.m.s.AkkaManagement [|] Binding Akka Management (HTTP) endpoint to: 10.36.130.19:8558
[info] a.m.s.AkkaManagement [|] Including HTTP management routes for ClusterHttpManagementRouteProvider
[info] a.m.s.AkkaManagement [|] Including HTTP management routes for ClusterBootstrap
[info] a.m.c.b.ClusterBootstrap [|] Using self contact point address: http://10.36.130.19:8558
[info] a.m.s.AkkaManagement [|] Including HTTP management routes for HealthCheckRoutes
[info] a.m.c.b.ClusterBootstrap [|] Initiating bootstrap procedure using kubernetes-api method...
[info] a.m.c.b.ClusterBootstrap [|] Bootstrap using `akka.discovery` method: kubernetes-api
[info] a.m.s.AkkaManagement [|] Bound Akka Management (HTTP) endpoint to: 10.36.130.19:8558
[info] a.m.c.b.i.BootstrapCoordinator [|] Locating service members. Using discovery [akka.discovery.kubernetes.KubernetesApiServiceDiscovery], join decider [akka.management.cluster.bootstrap.LowestAddressJoinDecider], scheme [http]
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-130-19.thehive.pod.cluster.local,None,Some(/10.36.130.19))], filtered to [10-36-130-19.thehive.pod.cluster.local:0]
[info] a.m.c.b.c.HttpClusterBootstrapRoutes [69f56b243d09b355|cd5798c41ac34a16] Bootstrap request from 10.36.130.19:42632: Contact Point returning 0 seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.130.19:33771] returned [0] seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-130-19.thehive.pod.cluster.local,None,Some(/10.36.130.19))], filtered to [10-36-130-19.thehive.pod.cluster.local:0]
[info] a.m.c.b.c.HttpClusterBootstrapRoutes [ded4edcd91e377dd|e9275b833689af2d] Bootstrap request from 10.36.130.19:42632: Contact Point returning 0 seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.130.19:33771] returned [0] seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-130-19.thehive.pod.cluster.local,None,Some(/10.36.130.19))], filtered to [10-36-130-19.thehive.pod.cluster.local:0]
[info] a.m.c.b.c.HttpClusterBootstrapRoutes [b4b965e57725d1e1|7ca0a435dc3dc995] Bootstrap request from 10.36.130.19:42632: Contact Point returning 0 seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.130.19:33771] returned [0] seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-130-19.thehive.pod.cluster.local,None,Some(/10.36.130.19))], filtered to [10-36-130-19.thehive.pod.cluster.local:0]
[info] a.m.c.b.c.HttpClusterBootstrapRoutes [51d012ac458da7e6|9156d9d0a2ce59ac] Bootstrap request from 10.36.130.19:42632: Contact Point returning 0 seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.130.19:33771] returned [0] seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-130-19.thehive.pod.cluster.local,None,Some(/10.36.130.19))], filtered to [10-36-130-19.thehive.pod.cluster.local:0]
[info] a.m.c.b.c.HttpClusterBootstrapRoutes [d9fc46a1a3783651|6c1d1eb99e7637d8] Bootstrap request from 10.36.130.19:42632: Contact Point returning 0 seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.130.19:33771] returned [0] seed-nodes []
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-130-19.thehive.pod.cluster.local,None,Some(/10.36.130.19))], filtered to [10-36-130-19.thehive.pod.cluster.local:0]
[info] a.m.c.b.i.BootstrapCoordinator [|] Initiating new cluster, self-joining [akka://application@10.36.130.19:33771]. Other nodes are expected to locate this cluster via continued contact-point probing.
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.130.19:33771] - Node [akka://application@10.36.130.19:33771] is JOINING itself (with roles [dc-default], version [0.0.0]) and forming new cluster
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.130.19:33771] - is the new leader among reachable nodes (more leaders may exist)
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.130.19:33771] - Leader is moving node [akka://application@10.36.130.19:33771] to [Up]
[info] a.c.s.SplitBrainResolver [|] This node is now the leader responsible for taking SBR decisions among the reachable nodes (more leaders may exist).
[info] a.c.s.ClusterSingletonManager [|] Singleton manager starting singleton actor [akka://application/system/singletonManagerJanusGraphClusterLeader/JanusGraphClusterLeader]
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Oldest]
[info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application/system/singletonManagerJanusGraphClusterLeader/JanusGraphClusterLeader]
[info] o.t.s.m.Database [|] Initialising database ...
[info] o.t.s.j.JanusDatabase [|] Loading database cassandra in cassandra
[info] c.d.o.d.i.c.DefaultMavenCoordinates [|] DataStax Java driver for Apache Cassandra(R) (com.datastax.oss:java-driver-core) version 4.13.0
[info] c.d.o.d.i.c.t.Clock [|] Using native clock for microsecond precision
[info] o.j.d.c.ExecutorServiceBuilder [|] Initiated fixed thread pool of size 10
[info] o.j.c.u.ReflectiveConfigOptionLoader [|] Loaded and initialized config classes: 9 OK out of 11 attempts in PT0.029S
[info] o.j.d.c.b.ReadConfigurationBuilder [|] Set default timestamp provider MICRO
[info] o.j.g.i.UniqueInstanceIdRetriever [|] Generated unique-instance-id=0a2482131-thehive-666975667f-jkvzn1
[info] c.d.o.d.i.c.t.Clock [|] Using native clock for microsecond precision
[info] o.j.d.c.ExecutorServiceBuilder [|] Initiated fixed thread pool of size 10
[info] o.j.d.Backend [|] Configuring index [search]
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.codehaus.groovy.reflection.CachedClass (file:/opt/thehive/lib/org.codehaus.groovy.groovy-2.5.14-indy.jar) to method java.lang.Object.finalize()
WARNING: Please consider reporting this to the maintainers of org.codehaus.groovy.reflection.CachedClass
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
[info] o.j.d.c.ExecutorServiceBuilder [|] Initiated fixed thread pool of size 2
[info] o.j.d.Backend [|] Configuring total store cache size: 122418998
[info] o.j.d.l.k.KCVSLog [|] Loaded unidentified ReadMarker start time 2022-05-06T14:20:08.325027Z into org.janusgraph.diskstorage.log.kcvs.KCVSLog$MessagePuller@62eb8eb0
[info] o.t.s.j.JanusDatabase [|] Full-text index is available (elasticsearch:[elasticsearch]) cluster
[info] o.r.Reflections [|] Reflections took 98 ms to scan 1 urls, producing 57 keys and 231 values
[info] o.r.Reflections [|] Reflections took 296 ms to scan 1 urls, producing 282 keys and 3027 values
[info] o.r.Reflections [|] Reflections took 36 ms to scan 1 urls, producing 57 keys and 298 values
[info] o.t.s.m.Database [|] Creating database schema
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (1): Create initial values
[info] o.t.s.m.Operations [2b6c621c6ed3bd27|507a673c5b95860a] Adding initial values for GDPRDummy
[info] o.t.s.m.Operations [a23596bbd485dfb3|7f39db0f41bc2d32] Adding initial values for Branding
[info] o.t.s.m.Operations [257c9dcd8581d595|bf4ed41878ce4112] Adding initial values for ResetPasswordToken
[info] o.t.s.m.Operations [2db063eb1ac4bb25|ac294137abf073a7] Adding initial values for LicenseData
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (2): Update graph: Add taskRule in share
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (3): Update graph: Add observableRule in share
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (4): Update graph: Add taskRule in share
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (5): Update graph: Add observableRule in share
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (6): Update graph: Add linkType in organisation edges
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (7): Update graph: Add vertex for each case custom fields
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (8): Update graph: Remove edge of case custom fields
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (9): Update graph: Add vertex for each alert custom fields
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (10): Update graph: Remove edge of alert custom fields
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (11): Update graph: Add vertex for each case template custom fields
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (12): Update graph: Remove edge of case template custom fields
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (13): Remove property order from AlertCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (14): Remove property stringValue from AlertCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (15): Remove property booleanValue from AlertCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (16): Remove property integerValue from AlertCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (17): Remove property floatValue from AlertCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (18): Remove property dateValue from AlertCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (19): Remove property order from CaseCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (20): Remove property stringValue from CaseCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (21): Remove property booleanValue from CaseCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (22): Remove property integerValue from CaseCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (23): Remove property floatValue from CaseCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (24): Remove property dateValue from CaseCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (25): Remove property order from CaseTemplateCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (26): Remove property stringValue from CaseTemplateCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (27): Remove property booleanValue from CaseTemplateCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (28): Remove property integerValue from CaseTemplateCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (29): Remove property floatValue from CaseTemplateCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (30): Remove property dateValue from CaseTemplateCustomFieldValue
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (31): Remove edge label CaseCustomField
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (32): Remove edge label AlertCustomField
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (33): Remove edge label CaseTemplateCustomField
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (34): Update graph: Add default group to custom field
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (35): Update graph: Add manageComment permission to org-admin and analyst profiles
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (36): Update graph: Add type to user
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (37): Update graph: Add lock to organisation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (38): Update graph: Add manageKnowledgeBase permission to org-admin and analyst profiles
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (39): Update graph: Add pap and ignoreSimilarity to observables
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (40): Update graph: Add version to dashboard
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (41): Update graph: Add manageCustomEvent to org-admin and analyst profiles
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (42): Update graph: Add taxonomy kind
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (43): Update database: Add AlertStatus initialValues
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (44): Update graph: Add alert status link
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (45): Update database: Add CaseStatus initialValues
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (46): Update graph: Add case status link
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (47): Update graph: Use dedicated field for license data
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (48): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (49): Update graph: Add KPI in alerts
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (50): Update graph: Add KPI in cases
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (51): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-enterprise (52): Update database: Set Pattern to reimport
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (1): Create initial values
[info] o.t.s.m.Operations [41517c09dfc8d662|10bbecf6e37b4efd] Adding initial values for Dashboard
[info] o.t.s.m.Operations [cd625538b3322bd0|80a149048056f792] Adding initial values for CustomField
[info] o.t.s.m.Operations [0ffaff0b6cd34c8f|eee5f197c3347e30] Adding initial values for CatalogOfPattern
[info] o.t.s.m.Operations [e540c6124ac171d9|d0995df65d576738] Adding initial values for Share
[info] o.t.s.m.Operations [bb4078c6cfa4df80|e11cc296aae171ea] Adding initial values for CaseStatus
[info] o.t.s.m.Operations [c8867b2213b8ddf1|1140bdb28ad908a6] Adding initial values for ImpactStatus
[info] o.t.s.m.Operations [42094a2e99453eeb|014b03686cc8f36b] Adding initial values for Profile
[info] o.t.s.m.Operations [d048f7dad73fca84|4a2edf0ce3c60e6e] Adding initial values for Task
[info] o.t.s.m.Operations [29739e51dacb4118|77ab9dfd2552351a] Adding initial values for Comment
[info] o.t.s.m.Operations [765df51b245a7bc1|ff3f50b3683b016e] Adding initial values for Case
[info] o.t.s.m.Operations [eebbffbbae6b8594|bae486b041facd09] Adding initial values for Audit
[info] o.t.s.m.Operations [f48b91451b7ea9da|d5ca102f426aab72] Adding initial values for User
[info] o.t.s.m.Operations [96490ea2d49d2889|37346cbfb3ebda1c] Adding initial values for CustomFieldValue
[info] o.t.s.m.Operations [0a75e3f63fe0f171|8db21069fee7afa6] Adding initial values for Role
[info] o.t.s.m.Operations [3c5ced0eceaf5bdc|8921e7f723393c2f] Adding initial values for Log
[info] o.t.s.m.Operations [f5dfde1e5088128d|1b2b24d8e40ff62e] Adding initial values for Attachment
[info] o.t.s.m.Operations [8d193868274bcad3|e11d49225f031a2a] Adding initial values for Procedure
[info] o.t.s.m.Operations [d276a1d2f37280f2|e7291c65037c7f0a] Adding initial values for Data
[info] o.t.s.m.Operations [cb129a30e4801eb4|cbefb392490c0194] Adding initial values for Tag
[info] o.t.s.m.Operations [fff163d4b074e71d|9b30421259945e96] Adding initial values for Alert
[info] o.t.s.m.Operations [7b6440f09c103811|619ee552d4274d34] Adding initial values for CustomEvent
[info] o.t.s.m.Operations [3a65fc33cb59f783|cd8ac48d0da0319e] Adding initial values for AlertStatus
[info] o.t.s.m.Operations [2bcb5dfad7d63e11|7c252ef950d8058c] Adding initial values for Pattern
[info] o.t.s.m.Operations [82d422614d56e72a|3777d1357f9fe294] Adding initial values for Config
[info] o.t.s.m.Operations [b903dba13df96b47|425738b83ccee0fd] Adding initial values for Organisation
[info] o.t.s.m.Operations [75db57223800a385|79d1b543e7e69be1] Adding initial values for Taxonomy
[info] o.t.s.m.Operations [5c26a4b0b83d51e9|31234af013324cf4] Adding initial values for Page
[info] o.t.s.m.Operations [cb4885780ad37d2c|300ea65e6d1be99b] Adding initial values for CaseTemplate
[info] o.t.s.m.Operations [541053f8e5329a94|92fefb3c12b67800] Adding initial values for Observable
[info] o.t.s.m.Operations [ec27949943e2a905|23e5bb13d1cd6037] Adding initial values for ObservableType
[info] o.t.s.m.Operations [ed7f86be3b542117|c6737b3153e1da45] Adding initial values for Tactic
[info] o.t.s.m.Operations [97e218b2f42eeca2|76de155000ba5e95] Adding initial values for ReportTag
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (2): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (3): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (4): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (5): Update database: Remove locks
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (6): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (7): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (8): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (9): Update graph: Remove cases with a Deleted status
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (10): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (11): Update graph: Add accessTheHiveFS permission to analyst and org-admin profiles
[info] o.t.s.m.Operations [|] Update graph in progress (0): Add accessTheHiveFS permission to analyst and org-admin profiles
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (12): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (13): Update graph: Add actionRequire property
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (14): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (15): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (16): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (17): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (18): Update database: Add Custom taxonomy vertex for each Organisation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (19): Update graph: Add each tag to its Organisation's FreeTags taxonomy
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (20): Update graph: Add manageTaxonomy to admin profile
[info] o.t.s.m.Operations [|] Update graph in progress (0): Add manageTaxonomy to admin profile
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (21): Update graph: Remove colour property for Tags
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (22): Remove property colour from Tag
[info] o.t.s.m.Database [|] Refuse to remove property colour because its type is what is expected (expected: SINGLE/class java.lang.Integer found: SINGLE/class java.lang.String)
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (23): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (24): Update graph: Add property colour for Tags
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (25): Update graph: Add managePattern permission to admin profile
[info] o.t.s.m.Operations [|] Update graph in progress (0): Add managePattern permission to admin profile
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (26): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (27): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (28): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (29): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (30): Update graph: Add tags, organisationId and caseId in alerts
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (31): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (32): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (33): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (34): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (35): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (36): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (37): Update graph: Add tags, assignee, organisationIds, impactStatus, resolutionStatus and caseTemplate data in cases
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (38): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (39): Update graph: Add tags in caseTemplates
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (40): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (41): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (42): Update graph: Add taskId and organisationIds data in logs
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (43): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (44): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (45): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (46): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (47): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (48): Update graph: Add dataType, tags, data, relatedId and organisationIds data in observables
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (49): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (50): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (51): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (52): Update graph: Add assignee, relatedId and organisationIds data in tasks
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (53): Update graph: Add managePlatform permission to admin profile
[info] o.t.s.m.Operations [|] Update graph in progress (0): Add managePlatform permission to admin profile
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (54): Update graph: Remove manageTag permission to admin profile
[info] o.t.s.m.Operations [|] Update graph in progress (0): Remove manageTag permission to admin profile
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (55): Update graph: Add manageTag permission to org-admin profile
[info] o.t.s.m.Operations [|] Update graph in progress (0): Add manageTag permission to org-admin profile
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (56): Update graph: Remove deleted logs and deleted property from logs
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (57): Remove property deleted from Log
[info] o.t.s.m.Database [|] Cannot remove the property deleted, it doesn't exist
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (58): Update graph: Make shared dashboard writable
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (59): Remove index Alert
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (60): Remove index Case
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (61): Remove index Log
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (62): Remove index Observable
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (63): Remove index Log
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (64): Remove index Tag
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (65): Remove index Task
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (66): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (67): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (68): Update graph: Set taskId in logs
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (69): Remove index Audit
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (70): Remove index Alert
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (71): Remove index _label_vertex_index
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (72): Remove index Case
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (73): Remove index Task
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (74): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (75): Update graph: Set caseId in imported alerts
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (76): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (77): Remove index Attachment
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (78): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (79): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (80): Remove index Log
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (81): Remove index Observable
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (82): Remove index Tag
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (83): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (84): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (85): Update graph: Add manageProcedure permission to org-admin and analyst profiles
[info] o.t.s.m.Operations [|] Update graph in progress (0): Add manageProcedure permission to org-admin and analyst profiles
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (86): Remove index Data
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (87): No operation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (88): Update graph: Add owning organisation in case
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (89): Remove index Tag
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (90): Remove index Alert
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (91): Remove index Organisation
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (92): Remove index Customfield
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (93): Remove index Profile
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (94): Remove index ImpactStatus
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (95): Remove index ObservableType
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (96): Remove index User
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (97): Remove index Case
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive (98): Remove index ResolutionStatus
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-cortex (1): Create initial values
[info] o.t.s.m.Operations [ec7650bbec065b31|dcc2def137f518d7] Adding initial values for Job
[info] o.t.s.m.Operations [a47ff82b2e07db11|d5ac26a58afc8e44] Adding initial values for Action
[info] o.t.s.m.Operations [c37b4255f22298df|651cf11a649c6f75] Adding initial values for AnalyzerTemplate
[info] o.t.s.m.Operations [|] *** UPDATE SCHEMA OF thehive-cortex (2): No operation
[info] a.c.s.ClusterSingletonManager [61832dce6b832e41|6388689b2f40d8f1] Singleton manager starting singleton actor [akka://application/user/config-actor-Manager/singleton]
[info] a.c.s.ClusterSingletonManager [61832dce6b832e41|6388689b2f40d8f1] ClusterSingletonManager state change [Start -> Oldest]
[info] a.c.s.ClusterSingletonProxy [61832dce6b832e41|4339239d0a64961d] Singleton identified at [akka://application/user/config-actor-Manager/singleton]
[info] o.t.s.s.S3StorageSrv [61832dce6b832e41|45cc9a9bb0de5edb] Starting S3 endpoint=http://minio:9000 bucket=thehive region=us-east-1
[warn] o.t.t.e.s.LicenseSrv [61832dce6b832e41|45cc9a9bb0de5edb] No license found
[info] a.c.s.ClusterSingletonManager [|] Singleton manager starting singleton actor [akka://application/system/singletonManagerLdapSync/LdapSync]
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Oldest]
[info] o.q.i.StdSchedulerFactory [|] Using default implementation for ThreadExecutor
[info] o.q.s.SimpleThreadPool [|] Job execution threads will use class loader of thread: main
[info] o.q.c.SchedulerSignalerImpl [|] Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
[info] o.q.c.QuartzScheduler [|] Quartz Scheduler v.2.3.2 created.
[info] o.q.s.RAMJobStore [|] RAMJobStore initialized.
[info] o.q.c.QuartzScheduler [|] Scheduler meta-data: Quartz Scheduler (v2.3.2) 'DefaultQuartzScheduler' with instanceId 'NON_CLUSTERED'
  Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
  NOT STARTED.
  Currently in standby mode.
  Number of jobs executed: 0
  Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 10 threads.
  Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.

[info] o.q.i.StdSchedulerFactory [|] Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
[info] o.q.i.StdSchedulerFactory [|] Quartz scheduler version: 2.3.2
[info] a.c.s.ClusterSingletonManager [|] Singleton manager starting singleton actor [akka://application/system/singletonManagergdprActor/gdprActor]
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Oldest]
[info] o.t.t.e.s.GDPRActor [|] GDPR cleanup is disabled
[info] o.t.t.s.TOTPAuthSrv [|] creating multiAuth srv
[info] a.c.s.ClusterSingletonManager [|] Singleton manager starting singleton actor [akka://application/system/singletonManagerIntegrityCheckActor/IntegrityCheckActor]
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Oldest]
[info] o.t.t.ClusterListener [|] Member is Up: akka://application@10.36.130.19:33771
[info] a.c.s.ClusterSingletonManager [|] Singleton manager starting singleton actor [akka://application/system/singletonManagerDataImporter/DataImporter]
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Oldest]
[info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application/system/singletonManagerLdapSync/LdapSync]
[info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application/system/singletonManagergdprActor/gdprActor]
[info] a.c.s.ClusterSingletonManager [|] Singleton manager starting singleton actor [akka://application/system/singletonManagerlicenseActor/licenseActor]
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Oldest]
[info] o.t.s.a.MultiAuthSrv [|] creating multiAuth srv
[info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application/system/singletonManagerDataImporter/DataImporter]
[info] o.t.t.s.t.TaxonomyImportActor [|] Importing taxonomies into database...
[info] o.t.t.s.t.TaxonomyImporter [|] Importing taxonomy from file
[info] o.t.t.s.t.PatternImportActor [|] Pattern import forced
[info] o.t.t.s.t.PatternImportActor [|] Importing patterns into database...
[info] a.c.s.ClusterSingletonManager [|] Singleton manager starting singleton actor [akka://application/user/flow-actor-Manager/singleton]
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Oldest]
[info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application/system/singletonManagerlicenseActor/licenseActor]
[info] o.t.t.s.IntegrityCheck [|] Integrity checks is enabled and will start at Sat May 07 02:30:00 UTC 2022
[info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application/system/singletonManagerIntegrityCheckActor/IntegrityCheckActor]
[info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application/system/singletonManagerIntegrityCheckActor/IntegrityCheckActor]
[info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application/user/flow-actor-Manager/singleton]
[info] o.q.c.QuartzScheduler [|] Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
[info] a.c.s.ClusterSingletonManager [|] Singleton manager starting singleton actor [akka://application/system/singletonManagerCortexDataImport/CortexDataImport]
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Oldest]
[info] p.c.s.AkkaHttpServer [|] Listening for HTTP on /0.0.0.0:9000
[info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application/system/singletonManagerCortexDataImport/CortexDataImport]
[info] o.t.t.c.c.s.CortexDataImportActor [|] Importing analyzer templates
[info] o.t.t.s.IntegrityCheck [7064ae7a5fedcb35|b8d2479cb8f41efb] Integrity check on Organisation ( dedup ): job scheduled, it will start at Fri May 06 14:21:15 UTC 2022
[info] o.t.t.s.IntegrityCheck [89d8b96792ac748d|16d17df4b104a354] Start of deduplication of Organisation
[info] o.t.t.s.IntegrityCheck [89d8b96792ac748d|16d17df4b104a354] End of deduplication of Organisation:
  duplicate: 0
  duration: 131
[info] o.t.t.s.t.MitreImporter [|] 13 new patterns have been imported ...
[info] o.t.t.c.c.s.CortexDataImportActor [|] Template import finished, imported 181 templates
[info] o.t.t.s.t.MitreImporter [|] 75 new patterns have been imported ...
[info] o.t.t.s.t.MitreImporter [|] 100 new patterns have been imported ...
[info] o.t.t.s.t.MitreImporter [|] 100 new patterns have been imported ...
[info] o.t.t.s.t.MitreImporter [|] 100 new patterns have been imported ...
[info] o.t.t.s.t.MitreImporter [|] 100 new patterns have been imported ...
[info] o.t.t.s.t.MitreImporter [|] 100 new patterns have been imported ...
[info] o.t.t.s.t.MitreImporter [|] 100 new patterns have been imported ...
[info] o.t.t.s.t.MitreImporter [|] Importing 14 tactics
[info] o.t.t.s.t.MitreImporter [|] 19 new patterns have been imported ...
[info] o.t.t.s.t.MitreImporter [|] creating links for patterns ...
[info] o.t.t.s.t.MitreImporter [|] creating links for patterns ...
[info] o.t.t.s.t.MitreImporter [|] creating links for patterns ...
[info] o.t.t.s.t.MitreImporter [|] creating links for patterns ...
[info] o.t.t.s.t.MitreImporter [|] creating links for patterns ...
[info] o.t.t.s.t.MitreImporter [|] creating links for patterns ...
[info] o.t.t.s.t.MitreImporter [|] creating links for patterns ...
[info] o.t.t.s.t.MitreImporter [|] creating links for patterns ...
[info] o.t.t.s.t.PatternImportActor [|] Import finished, 707 patterns imported
[info] o.t.t.s.t.TaxonomyImportActor [|] Import of taxonomy finished, 134 taxonomies imported
fp-dshim commented 2 years ago

I set replicas = 2; here are the logs from the second TheHive pod:
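
For reference, this is a minimal sketch of the commands for scaling up and pulling the new pod's logs. The deployment and namespace names (`thehive`) are assumptions based on the `app=thehive` label selector and `[thehive]` namespace shown in the logs above — adjust to match your manifest.

```shell
# Scale the TheHive deployment to two replicas
# (deployment/namespace names are assumed from the pod labels above)
kubectl -n thehive scale deployment thehive --replicas=2

# List the pods matching the label selector used for Akka cluster bootstrap
kubectl -n thehive get pods -l app=thehive

# Tail the logs of the newest pod once it is running
kubectl -n thehive logs -f deployment/thehive
```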

Using cassandra address = 10.36.168.112
Using elasticsearch address = elasticsearch with index thehive
Using S3 http://minio:9000 bucket=thehive
Add Cortex cortex0: http://cortex:9001
Using Kubernetes with pod label selector 'app=thehive'
OpenJDK 64-Bit Server VM warning: Sharing is only supported for boot loader classes because bootstrap classpath has been appended
14:31:50.172 [main] DEBUG oshi.util.FileUtil - Reading file /proc/stat
14:31:50.355 [main] DEBUG oshi.util.FileUtil - Reading file /etc/os-release
14:31:50.355 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: NAME="Debian GNU/Linux"
14:31:50.356 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: VERSION_ID="11"
14:31:50.356 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: VERSION="11 (bullseye)"
14:31:50.357 [main] DEBUG oshi.util.FileUtil - Reading file /proc/version
14:31:50.477 [main] DEBUG oshi.util.FileUtil - Reading file /proc/cpuinfo
14:31:50.499 [main] DEBUG oshi.util.FileUtil - Reading file /etc/os-release
14:31:50.499 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: NAME="Debian GNU/Linux"
14:31:50.499 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: VERSION_ID="11"
14:31:50.500 [main] DEBUG oshi.software.os.linux.LinuxOperatingSystem - os-release: VERSION="11 (bullseye)"
14:31:50.500 [main] DEBUG oshi.util.FileUtil - Reading file /proc/version
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.enterprise.FrontendModule
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.enterprise.EnterpriseModule
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.TheHiveModule
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.controllers.v0.TheHiveModuleV0
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.controllers.v1.TheHiveModuleV1
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.controllers.dav.TheHiveFSModule
[info] o.t.s.ScalligraphApplication [|] Loading module org.thp.thehive.connector.cortex.CortexModule
[info] a.e.s.Slf4jLogger [|] Slf4jLogger started
[info] a.r.a.ArteryTransport [|] Remoting started with transport [Artery tcp]; listening on address [akka://application@10.36.140.37:39255] with UID [-6256948991755057775]
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.37:39255] - Starting up, Akka version [2.6.18] ...
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.37:39255] - Registered cluster JMX MBean [akka:type=Cluster]
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.37:39255] - Started up successfully
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.37:39255] - No seed nodes found in configuration, relying on Cluster Bootstrap for joining
[info] a.c.s.SplitBrainResolver [|] SBR started. Config: strategy [KeepMajority], stable-after [20 seconds], down-all-when-unstable [15 seconds], selfUniqueAddress [akka://application@10.36.140.37:39255#-6256948991755057775], selfDc [default].
[info] a.m.c.b.ClusterBootstrap [|] ClusterBootstrap loaded through 'akka.extensions' auto starting management and bootstrap.
[info] a.m.i.HealthChecksImpl [|] Loading readiness checks [(cluster-membership,akka.management.cluster.scaladsl.ClusterMembershipCheck), (sharding,akka.cluster.sharding.ClusterShardingHealthCheck)]
[info] a.m.i.HealthChecksImpl [|] Loading liveness checks []
[info] a.m.s.AkkaManagement [|] Binding Akka Management (HTTP) endpoint to: 10.36.140.37:8558
[info] a.m.s.AkkaManagement [|] Including HTTP management routes for ClusterHttpManagementRouteProvider
[info] a.m.s.AkkaManagement [|] Including HTTP management routes for ClusterBootstrap
[info] a.m.c.b.ClusterBootstrap [|] Using self contact point address: http://10.36.140.37:8558
[info] a.m.s.AkkaManagement [|] Including HTTP management routes for HealthCheckRoutes
[info] a.m.c.b.ClusterBootstrap [|] Initiating bootstrap procedure using kubernetes-api method...
[info] a.m.c.b.ClusterBootstrap [|] Bootstrap using `akka.discovery` method: kubernetes-api
[info] a.m.s.AkkaManagement [|] Bound Akka Management (HTTP) endpoint to: 10.36.140.37:8558
[info] a.m.c.b.i.BootstrapCoordinator [|] Locating service members. Using discovery [akka.discovery.kubernetes.KubernetesApiServiceDiscovery], join decider [akka.management.cluster.bootstrap.LowestAddressJoinDecider], scheme [http]
[info] a.m.c.b.i.BootstrapCoordinator [|] Looking up [Lookup(application,None,Some(tcp))]
[info] a.d.k.KubernetesApiServiceDiscovery [|] Querying for pods with label selector: [app=thehive]. Namespace: [thehive]. Port: [None]
[info] a.m.c.b.i.BootstrapCoordinator [|] Located service members based on: [Lookup(application,None,Some(tcp))]: [ResolvedTarget(10-36-130-19.thehive.pod.cluster.local,None,Some(/10.36.130.19)), ResolvedTarget(10-36-140-37.thehive.pod.cluster.local,None,Some(/10.36.140.37))], filtered to [10-36-140-37.thehive.pod.cluster.local:0, 10-36-130-19.thehive.pod.cluster.local:0]
[info] a.m.c.b.i.BootstrapCoordinator [|] Contact point [akka://application@10.36.130.19:33771] returned [1] seed-nodes [akka://application@10.36.130.19:33771]
[info] a.m.c.b.i.BootstrapCoordinator [|] Joining [akka://application@10.36.140.37:39255] to existing cluster [akka://application@10.36.130.19:33771]
[info] a.m.c.b.c.HttpClusterBootstrapRoutes [5963c906c1647775|1cc2ca60a854c072] Bootstrap request from 10.36.140.37:58662: Contact Point returning 0 seed-nodes []
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.37:39255] - Received InitJoinAck message from [Actor[akka://application@10.36.130.19:33771/system/cluster/core/daemon#-1558581155]] to [akka://application@10.36.140.37:39255]
[info] a.c.Cluster [|] Cluster Node [akka://application@10.36.140.37:39255] - Welcome from [akka://application@10.36.130.19:33771]
[info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application@10.36.130.19:33771/system/singletonManagerJanusGraphClusterLeader/JanusGraphClusterLeader]
[info] o.t.s.j.JanusDatabase [|] Loading database cassandra in cassandra
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Younger]
[info] c.d.o.d.i.c.DefaultMavenCoordinates [|] DataStax Java driver for Apache Cassandra(R) (com.datastax.oss:java-driver-core) version 4.13.0
[info] c.d.o.d.i.c.t.Clock [|] Using native clock for microsecond precision
[info] o.j.d.c.ExecutorServiceBuilder [|] Initiated fixed thread pool of size 10
[info] o.j.c.u.ReflectiveConfigOptionLoader [|] Loaded and initialized config classes: 9 OK out of 11 attempts in PT0.027S
[info] o.j.g.i.UniqueInstanceIdRetriever [|] Generated unique-instance-id=0a248c251-thehive-666975667f-k2wx71
[info] c.d.o.d.i.c.t.Clock [|] Using native clock for microsecond precision
[info] o.j.d.c.ExecutorServiceBuilder [|] Initiated fixed thread pool of size 10
[info] o.j.d.Backend [|] Configuring index [search]
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.codehaus.groovy.reflection.CachedClass (file:/opt/thehive/lib/org.codehaus.groovy.groovy-2.5.14-indy.jar) to method java.lang.Object.finalize()
WARNING: Please consider reporting this to the maintainers of org.codehaus.groovy.reflection.CachedClass
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
[info] o.j.d.c.ExecutorServiceBuilder [|] Initiated fixed thread pool of size 2
[info] o.j.d.Backend [|] Configuring total store cache size: 124960495
[info] o.j.d.l.k.KCVSLog [|] Loaded unidentified ReadMarker start time 2022-05-06T14:32:01.229573Z into org.janusgraph.diskstorage.log.kcvs.KCVSLog$MessagePuller@1ccdea63
[info] o.t.s.j.JanusDatabase [|] Full-text index is available (elasticsearch:[elasticsearch]) cluster
[info] a.c.s.ClusterSingletonManager [9f70fe2accc5bb05|3d9f262c319ec211] ClusterSingletonManager state change [Start -> Younger]
[info] a.c.s.ClusterSingletonProxy [9f70fe2accc5bb05|a457b6e6fcc03612] Singleton identified at [akka://application@10.36.130.19:33771/user/config-actor-Manager/singleton]
[info] o.t.s.s.S3StorageSrv [9f70fe2accc5bb05|baba6197a00c2edf] Starting S3 endpoint=http://minio:9000 bucket=thehive region=us-east-1
[warn] o.t.t.e.s.LicenseSrv [9f70fe2accc5bb05|baba6197a00c2edf] No license found
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Younger]
[info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application@10.36.130.19:33771/system/singletonManagerLdapSync/LdapSync]
[info] o.q.i.StdSchedulerFactory [|] Using default implementation for ThreadExecutor
[info] o.q.s.SimpleThreadPool [|] Job execution threads will use class loader of thread: main
[info] o.q.c.SchedulerSignalerImpl [|] Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
[info] o.q.c.QuartzScheduler [|] Quartz Scheduler v.2.3.2 created.
[info] o.q.s.RAMJobStore [|] RAMJobStore initialized.
[info] o.q.c.QuartzScheduler [|] Scheduler meta-data: Quartz Scheduler (v2.3.2) 'DefaultQuartzScheduler' with instanceId 'NON_CLUSTERED'
  Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
  NOT STARTED.
  Currently in standby mode.
  Number of jobs executed: 0
  Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 10 threads.
  Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.

[info] o.q.i.StdSchedulerFactory [|] Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
[info] o.q.i.StdSchedulerFactory [|] Quartz scheduler version: 2.3.2
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Younger]
[info] a.c.s.ClusterSingletonProxy [61832dce6b832e41|6388689b2f40d8f1] Singleton identified at [akka://application@10.36.130.19:33771/system/singletonManagergdprActor/gdprActor]
[info] o.t.t.s.TOTPAuthSrv [|] creating multiAuth srv
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Younger]
[info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application@10.36.130.19:33771/system/singletonManagerIntegrityCheckActor/IntegrityCheckActor]
[info] o.t.t.ClusterListener [|] Member is Up: akka://application@10.36.130.19:33771
[info] o.t.t.ClusterListener [|] Member is Up: akka://application@10.36.140.37:39255
[info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application@10.36.130.19:33771/system/singletonManagerDataImporter/DataImporter]
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Younger]
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Younger]
[info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application@10.36.130.19:33771/system/singletonManagerlicenseActor/licenseActor]
[info] o.t.s.a.MultiAuthSrv [|] creating multiAuth srv
[info] o.r.Reflections [|] Reflections took 77 ms to scan 1 urls, producing 57 keys and 231 values
[info] o.r.Reflections [|] Reflections took 315 ms to scan 1 urls, producing 282 keys and 3027 values
[info] o.r.Reflections [|] Reflections took 40 ms to scan 1 urls, producing 57 keys and 298 values
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Younger]
[info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application@10.36.130.19:33771/user/flow-actor-Manager/singleton]
[info] o.q.c.QuartzScheduler [|] Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
[info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Younger]
[info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application@10.36.130.19:33771/system/singletonManagerCortexDataImport/CortexDataImport]
[info] p.c.s.AkkaHttpServer [|] Listening for HTTP on /0.0.0.0:9000
vdebergue commented 2 years ago

Seems like it's working now 👍

I don't really know why your DB got corrupted like this on startup. I'll update the documentation to help with troubleshooting this kind of issue.

fp-dshim commented 2 years ago

Thanks @vdebergue for your help!