vxsh4d0w opened this issue 3 years ago
Can you restart the TheHive service and tell us what happens, please?
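For reference, with a Docker Compose deployment (an assumption on my side, based on the `Attaching to thehive` output below), the restart and log capture can be done like this; adjust the service name if yours differs:

```shell
# Restart only the TheHive container and follow its logs.
# Assumes a docker-compose deployment with a service named "thehive";
# adapt the service name and compose file path to your setup.
docker-compose restart thehive
docker-compose logs -f --tail=100 thehive
```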
Jerome, here is the log extract after restarting TheHive:
Attaching to thehive
thehive | [info] ScalligraphApplication [|] Loading application ...
thehive | [info] o.t.s.ScalligraphModule [|] Loading scalligraph module
thehive | [info] a.e.s.Slf4jLogger [|] Slf4jLogger started
thehive | [info] a.r.a.t.ArteryTcpTransport [|] Remoting started with transport [Artery tcp]; listening on address [akka://application@127.0.0.1:39533] with UID [-2727787471994182156]
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:39533] - Starting up, Akka version [2.6.10] ...
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:39533] - Registered cluster JMX MBean [akka:type=Cluster]
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:39533] - Started up successfully
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:39533] - No seed-nodes configured, manual cluster join required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#joining
thehive | [info] a.c.s.SplitBrainResolver [|] SBR started. Config: strategy [KeepMajority], stable-after [20 seconds], down-all-when-unstable [15 seconds], selfUniqueAddress [akka://application@127.0.0.1:39533#-2727787471994182156], selfDc [default].
thehive | [info] o.r.Reflections [|] Reflections took 1210 ms to scan 1 urls, producing 160 keys and 2419 values
thehive | [info] o.t.t.ClusterSetup [|] Initialising cluster
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:39533] - Node [akka://application@127.0.0.1:39533] is JOINING itself (with roles [dc-default], version [0.0.0]) and forming new cluster
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:39533] - is the new leader among reachable nodes (more leaders may exist)
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:39533] - Leader is moving node [akka://application@127.0.0.1:39533] to [Up]
thehive | [info] o.t.t.ClusterListener [|] Member is Up: akka://application@127.0.0.1:39533
thehive | [info] a.c.s.SplitBrainResolver [|] This node is now the leader responsible for taking SBR decisions among the reachable nodes (more leaders may exist).
thehive | [info] a.c.s.ClusterSingletonManager [|] Singleton manager starting singleton actor [akka://application/system/singletonManagerJanusClusterManager/JanusClusterManager]
thehive | [info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Oldest]
thehive | [info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application/system/singletonManagerJanusClusterManager/JanusClusterManager]
thehive | [info] c.d.driver.core [|] DataStax Java driver 3.9.0 for Apache Cassandra
thehive | [info] c.d.d.c.GuavaCompatibility [|] Detected Guava >= 19 in the classpath, using modern compatibility layer
thehive | [info] c.d.d.c.ClockFactory [|] Using native clock to generate timestamps.
thehive | [info] c.d.d.c.NettyUtil [|] Found Netty's native epoll transport in the classpath, using it
thehive | [info] c.d.d.c.p.DCAwareRoundRobinPolicy [|] Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
thehive | [info] c.d.d.c.Cluster [|] New Cassandra host cassandra/172.18.0.2:9042 added
thehive | [info] o.j.c.u.ReflectiveConfigOptionLoader [|] Loaded and initialized config classes: 8 OK out of 13 attempts in PT0.059S
thehive | [warn] o.j.d.c.b.ReadConfigurationBuilder [|] Local setting index.search.index-name=scalligraph (Type: GLOBAL_OFFLINE) is overridden by globally managed value (janusgraph). Use the ManagementSystem interface instead of the local configuration to control this setting.
thehive | [info] o.j.g.i.UniqueInstanceIdRetriever [|] Generated unique-instance-id=ac1200047-75d0292baddc1
thehive | [info] c.d.d.c.ClockFactory [|] Using native clock to generate timestamps.
thehive | [info] c.d.d.c.p.DCAwareRoundRobinPolicy [|] Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
thehive | [info] c.d.d.c.Cluster [|] New Cassandra host cassandra/172.18.0.2:9042 added
thehive | [info] o.j.d.Backend [|] Configuring index [search]
thehive | [info] o.j.d.Backend [|] Initiated backend operations thread pool of size 8
thehive | [info] o.j.d.Backend [|] Configuring total store cache size: 1102343736
thehive | [info] o.j.d.l.k.KCVSLog [|] Loaded unidentified ReadMarker start time 2021-03-23T18:21:24.142Z into org.janusgraph.diskstorage.log.kcvs.KCVSLog$MessagePuller@25eb8f51
thehive | [info] o.t.s.j.JanusDatabase [|] Full-text index is available (lucene:/opt/index) single node
thehive | [info] o.r.Reflections [|] Reflections took 86 ms to scan 1 urls, producing 46 keys and 267 values
thehive | [info] o.t.t.c.c.m.CortexSchemaDefinition [|] Loading model ObservableJob
thehive | [info] o.t.t.c.c.m.CortexSchemaDefinition [|] Loading model Job
thehive | [info] o.t.t.c.c.m.CortexSchemaDefinition [|] Loading model ActionContext
thehive | [info] o.t.t.c.c.m.CortexSchemaDefinition [|] Loading model ReportObservable
thehive | [info] o.t.t.c.c.m.CortexSchemaDefinition [|] Loading model Action
thehive | [info] o.t.t.c.c.m.CortexSchemaDefinition [|] Loading model AnalyzerTemplate
thehive | [info] o.t.s.m.Operations [|] thehive: Update graph: Add dataType, tags, data, relatedId and organisationIds data in observables
thehive | [info] ScalligraphApplication [|] Loading application ...
thehive | [info] o.t.s.ScalligraphModule [|] Loading scalligraph module
thehive | [info] a.e.s.Slf4jLogger [|] Slf4jLogger started
thehive | [info] a.r.a.t.ArteryTcpTransport [|] Remoting started with transport [Artery tcp]; listening on address [akka://application@127.0.0.1:36471] with UID [-8775557803415898648]
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:36471] - Starting up, Akka version [2.6.10] ...
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:36471] - Registered cluster JMX MBean [akka:type=Cluster]
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:36471] - Started up successfully
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:36471] - No seed-nodes configured, manual cluster join required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#joining
thehive | [info] a.c.s.SplitBrainResolver [|] SBR started. Config: strategy [KeepMajority], stable-after [20 seconds], down-all-when-unstable [15 seconds], selfUniqueAddress [akka://application@127.0.0.1:36471#-8775557803415898648], selfDc [default].
thehive | [info] o.r.Reflections [|] Reflections took 276 ms to scan 1 urls, producing 160 keys and 2419 values
thehive | [info] o.t.t.ClusterSetup [|] Initialising cluster
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:36471] - Node [akka://application@127.0.0.1:36471] is JOINING itself (with roles [dc-default], version [0.0.0]) and forming new cluster
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:36471] - is the new leader among reachable nodes (more leaders may exist)
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:36471] - Leader is moving node [akka://application@127.0.0.1:36471] to [Up]
thehive | [info] o.t.t.ClusterListener [|] Member is Up: akka://application@127.0.0.1:36471
thehive | [info] a.c.s.SplitBrainResolver [|] This node is now the leader responsible for taking SBR decisions among the reachable nodes (more leaders may exist).
thehive | [info] a.c.s.ClusterSingletonManager [|] Singleton manager starting singleton actor [akka://application/system/singletonManagerJanusClusterManager/JanusClusterManager]
thehive | [info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Oldest]
thehive | [info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application/system/singletonManagerJanusClusterManager/JanusClusterManager]
thehive | [info] c.d.driver.core [|] DataStax Java driver 3.9.0 for Apache Cassandra
thehive | [info] c.d.d.c.GuavaCompatibility [|] Detected Guava >= 19 in the classpath, using modern compatibility layer
thehive | [info] c.d.d.c.ClockFactory [|] Using native clock to generate timestamps.
thehive | [info] c.d.d.c.NettyUtil [|] Found Netty's native epoll transport in the classpath, using it
thehive | [info] c.d.d.c.p.DCAwareRoundRobinPolicy [|] Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
thehive | [info] c.d.d.c.Cluster [|] New Cassandra host cassandra/172.18.0.2:9042 added
thehive | [info] o.j.c.u.ReflectiveConfigOptionLoader [|] Loaded and initialized config classes: 8 OK out of 13 attempts in PT0.028S
thehive | [warn] o.j.d.c.b.ReadConfigurationBuilder [|] Local setting index.search.index-name=scalligraph (Type: GLOBAL_OFFLINE) is overridden by globally managed value (janusgraph). Use the ManagementSystem interface instead of the local configuration to control this setting.
thehive | [info] o.j.g.i.UniqueInstanceIdRetriever [|] Generated unique-instance-id=ac1200047-75d0292baddc1
thehive | [info] c.d.d.c.ClockFactory [|] Using native clock to generate timestamps.
thehive | [info] c.d.d.c.p.DCAwareRoundRobinPolicy [|] Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
thehive | [info] c.d.d.c.Cluster [|] New Cassandra host cassandra/172.18.0.2:9042 added
thehive | [info] o.j.d.Backend [|] Configuring index [search]
thehive | [info] o.j.d.Backend [|] Initiated backend operations thread pool of size 8
thehive | [info] o.j.d.Backend [|] Configuring total store cache size: 1027583383
thehive | [info] o.j.d.l.k.KCVSLog [|] Loaded unidentified ReadMarker start time 2021-03-23T18:28:41.093Z into org.janusgraph.diskstorage.log.kcvs.KCVSLog$MessagePuller@535055b2
thehive | [info] o.t.s.j.JanusDatabase [|] Full-text index is available (lucene:/opt/index) single node
thehive | [info] o.r.Reflections [|] Reflections took 23 ms to scan 1 urls, producing 46 keys and 267 values
thehive | [info] o.t.t.c.c.m.CortexSchemaDefinition [|] Loading model ObservableJob
thehive | [info] o.t.t.c.c.m.CortexSchemaDefinition [|] Loading model Action
thehive | [info] o.t.t.c.c.m.CortexSchemaDefinition [|] Loading model ActionContext
thehive | [info] o.t.t.c.c.m.CortexSchemaDefinition [|] Loading model AnalyzerTemplate
thehive | [info] o.t.t.c.c.m.CortexSchemaDefinition [|] Loading model Job
thehive | [info] o.t.t.c.c.m.CortexSchemaDefinition [|] Loading model ReportObservable
thehive | [info] o.t.s.m.Operations [|] thehive: Update graph: Add dataType, tags, data, relatedId and organisationIds data in observables
Maybe you should wait a bit. We had the same problem, but after some time spent reindexing the data, TheHive became available again.
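If it is indeed reindexing, you can poll the index status endpoint (the same `/api/v1/admin/index/status` route that appears further down in these logs) instead of guessing. A sketch, assuming TheHive listens on `127.0.0.1:9000`; the credentials are placeholders:

```shell
# Poll TheHive's index status endpoint to see whether reindexing is still running.
# Host, port, and credentials are placeholders - substitute your own.
curl -s -u 'admin@thehive.local:secret' \
  'http://127.0.0.1:9000/api/v1/admin/index/status'
```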
After 12 hours the instance is still unreachable. Here is the last part of the logs:
thehive | Oops, cannot start the server.
thehive | java.util.concurrent.TimeoutException: Futures timed out after [1 hour]
thehive | at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:259)
thehive | at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:263)
thehive | at scala.concurrent.Await$.$anonfun$result$1(package.scala:223)
thehive | at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:57)
thehive | at scala.concurrent.Await$.result(package.scala:146)
thehive | at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$1(JanusDatabaseProvider.scala:134)
thehive | at scala.Option.map(Option.scala:230)
thehive | at org.thp.scalligraph.janus.JanusDatabaseProvider.get$lzycompute(JanusDatabaseProvider.scala:99)
thehive | at org.thp.scalligraph.janus.JanusDatabaseProvider.get(JanusDatabaseProvider.scala:89)
thehive | at org.thp.scalligraph.janus.JanusDatabaseProvider.get(JanusDatabaseProvider.scala:24)
thehive | at com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:85)
thehive | at com.google.inject.internal.BoundProviderFactory.provision(BoundProviderFactory.java:77)
thehive | at com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:59)
thehive | at com.google.inject.internal.BoundProviderFactory.get(BoundProviderFactory.java:61)
thehive | at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:42)
thehive | at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:65)
thehive | at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:113)
thehive | at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:91)
thehive | at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:306)
thehive | at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
thehive | at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:168)
thehive | at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:39)
thehive | at com.google.inject.internal.FactoryProxy.get(FactoryProxy.java:62)
thehive | at com.google.inject.internal.InternalInjectorCreator.loadEagerSingletons(InternalInjectorCreator.java:213)
thehive | at com.google.inject.internal.InternalInjectorCreator.injectDynamically(InternalInjectorCreator.java:184)
thehive | at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:111)
thehive | at com.google.inject.Guice.createInjector(Guice.java:87)
thehive | at com.google.inject.Guice.createInjector(Guice.java:78)
thehive | at play.api.inject.guice.GuiceBuilder.injector(GuiceInjectorBuilder.scala:200)
thehive | at play.api.inject.guice.GuiceApplicationBuilder.build(GuiceApplicationBuilder.scala:155)
thehive | at play.api.inject.guice.GuiceApplicationLoader.load(GuiceApplicationLoader.scala:21)
thehive | at play.core.server.ProdServerStart$.start(ProdServerStart.scala:54)
thehive | at play.core.server.ProdServerStart$.main(ProdServerStart.scala:30)
thehive | at play.core.server.ProdServerStart.main(ProdServerStart.scala)
thehive | [error] o.t.s.u.Retry [|0c74ae65] uncaught error, not retrying
thehive | java.lang.IllegalStateException: Cannot access element because its enclosing transaction is closed and unbound
thehive | at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.getNextTx(StandardJanusGraphTx.java:305)
thehive | at org.janusgraph.graphdb.vertices.AbstractVertex.it(AbstractVertex.java:53)
thehive | at org.janusgraph.graphdb.vertices.AbstractVertex.it(AbstractVertex.java:37)
thehive | at org.janusgraph.graphdb.internal.AbstractElement.isRemoved(AbstractElement.java:141)
thehive | at org.janusgraph.graphdb.vertices.AbstractVertex.verifyAccess(AbstractVertex.java:89)
thehive | at org.janusgraph.graphdb.vertices.AbstractVertex.query(AbstractVertex.java:137)
thehive | at org.janusgraph.graphdb.vertices.AbstractVertex.query(AbstractVertex.java:37)
thehive | at org.janusgraph.graphdb.tinkerpop.optimize.JanusGraphVertexStep.flatMap(JanusGraphVertexStep.java:191)
thehive | at org.apache.tinkerpop.gremlin.process.traversal.step.map.FlatMapStep.processNextStart(FlatMapStep.java:49)
thehive | at org.janusgraph.graphdb.tinkerpop.optimize.JanusGraphVertexStep.processNextStart(JanusGraphVertexStep.java:177)
thehive | [error] o.t.s.m.Database [|0c74ae65] Exception raised, rollback (Cannot access element because its enclosing transaction is closed and unbound)
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model PatternPattern
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model Observable
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model TaxonomyTag
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model Data
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model AuditUser
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model ProcedurePattern
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model OrganisationConfig
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model ResolutionStatus
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model CaseProcedure
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model Tag
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model ImpactStatus
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model CaseTag
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model ObservableKeyValue
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model Share
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model ObservableAttachment
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model CaseTemplateTag
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model UserRole
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model UserConfig
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model Alert
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model UserAttachment
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model MergedFrom
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model ObservableData
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model Role
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model Dashboard
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model ShareCase
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model Attachment
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model RoleOrganisation
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model CaseTemplateTask
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model OrganisationTaxonomy
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model AlertTag
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model ShareObservable
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model CaseUser
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model ObservableTag
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model Case
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model Task
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model User
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model Page
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model Log
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model Audit
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model AlertObservable
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model OrganisationShare
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model Pattern
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model ShareProfile
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model CaseTemplateOrganisation
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model CaseCaseTemplate
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model ObservableType
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model CustomField
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model ReportTag
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model Taxonomy
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model CaseCustomField
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model OrganisationOrganisation
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model AlertOrganisation
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model ObservableObservableType
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model Config
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model AlertCaseTemplate
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model CaseResolutionStatus
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model DashboardUser
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model LogAttachment
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model CaseTemplateCustomField
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model ObservableReportTag
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model TaskUser
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model RoleProfile
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model CaseTemplate
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model AlertCustomField
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model Organisation
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model ShareTask
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model OrganisationPage
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model OrganisationDashboard
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model AlertCase
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model Profile
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model AuditContext
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model CaseImpactStatus
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model TaskLog
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model Procedure
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|] Loading model KeyValue
thehive | [error] o.t.s.u.Retry [|] uncaught error, not retrying
thehive | java.lang.IllegalStateException: Can't use this cluster instance because it was previously closed
thehive | at com.datastax.driver.core.Cluster.checkNotClosed(Cluster.java:652)
thehive | at com.datastax.driver.core.Cluster.access$400(Cluster.java:112)
thehive | at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1621)
thehive | at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:462)
thehive | at org.janusgraph.diskstorage.cql.CQLStoreManager.lambda$openDatabase$12(CQLStoreManager.java:419)
thehive | at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.<init>(CQLKeyColumnValueStore.java:160)
thehive | at org.janusgraph.diskstorage.cql.CQLStoreManager.lambda$openDatabase$14(CQLStoreManager.java:420)
thehive | at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
thehive | at org.janusgraph.diskstorage.cql.CQLStoreManager.openDatabase(CQLStoreManager.java:420)
thehive | at org.janusgraph.diskstorage.log.kcvs.KCVSLogManager.openLog(KCVSLogManager.java:218)
thehive | [info] ScalligraphApplication [|] Loading application ...
thehive | [info] o.t.s.ScalligraphModule [|] Loading scalligraph module
thehive | [info] a.e.s.Slf4jLogger [|] Slf4jLogger started
thehive | [info] a.r.a.t.ArteryTcpTransport [|] Remoting started with transport [Artery tcp]; listening on address [akka://application@127.0.0.1:39063] with UID [6309795356389132573]
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:39063] - Starting up, Akka version [2.6.10] ...
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:39063] - Registered cluster JMX MBean [akka:type=Cluster]
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:39063] - Started up successfully
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:39063] - No seed-nodes configured, manual cluster join required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#joining
thehive | [info] a.c.s.SplitBrainResolver [|] SBR started. Config: strategy [KeepMajority], stable-after [20 seconds], down-all-when-unstable [15 seconds], selfUniqueAddress [akka://application@127.0.0.1:39063#6309795356389132573], selfDc [default].
thehive | [info] o.r.Reflections [|] Reflections took 281 ms to scan 1 urls, producing 160 keys and 2419 values
thehive | [info] o.t.t.ClusterSetup [|] Initialising cluster
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:39063] - Node [akka://application@127.0.0.1:39063] is JOINING itself (with roles [dc-default], version [0.0.0]) and forming new cluster
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:39063] - is the new leader among reachable nodes (more leaders may exist)
thehive | [info] a.c.Cluster [|] Cluster Node [akka://application@127.0.0.1:39063] - Leader is moving node [akka://application@127.0.0.1:39063] to [Up]
thehive | [info] o.t.t.ClusterListener [|] Member is Up: akka://application@127.0.0.1:39063
thehive | [info] a.c.s.SplitBrainResolver [|] This node is now the leader responsible for taking SBR decisions among the reachable nodes (more leaders may exist).
thehive | [info] a.c.s.ClusterSingletonManager [|] Singleton manager starting singleton actor [akka://application/system/singletonManagerJanusClusterManager/JanusClusterManager]
thehive | [info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Oldest]
thehive | [info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application/system/singletonManagerJanusClusterManager/JanusClusterManager]
thehive | [info] c.d.driver.core [|] DataStax Java driver 3.9.0 for Apache Cassandra
thehive | [info] c.d.d.c.GuavaCompatibility [|] Detected Guava >= 19 in the classpath, using modern compatibility layer
thehive | [info] c.d.d.c.ClockFactory [|] Using native clock to generate timestamps.
thehive | [info] c.d.d.c.NettyUtil [|] Found Netty's native epoll transport in the classpath, using it
thehive | [info] c.d.d.c.p.DCAwareRoundRobinPolicy [|] Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
thehive | [info] c.d.d.c.Cluster [|] New Cassandra host cassandra/172.18.0.2:9042 added
thehive | [info] o.j.c.u.ReflectiveConfigOptionLoader [|] Loaded and initialized config classes: 8 OK out of 13 attempts in PT0.027S
thehive | [warn] o.j.d.c.b.ReadConfigurationBuilder [|] Local setting index.search.index-name=scalligraph (Type: GLOBAL_OFFLINE) is overridden by globally managed value (janusgraph). Use the ManagementSystem interface instead of the local configuration to control this setting.
thehive | [info] o.j.g.i.UniqueInstanceIdRetriever [|] Generated unique-instance-id=ac1200067-652e2e092c081
thehive | [info] c.d.d.c.ClockFactory [|] Using native clock to generate timestamps.
thehive | [info] c.d.d.c.p.DCAwareRoundRobinPolicy [|] Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
thehive | [info] c.d.d.c.Cluster [|] New Cassandra host cassandra/172.18.0.2:9042 added
thehive | [info] o.j.d.Backend [|] Configuring index [search]
thehive | [info] o.j.d.Backend [|] Initiated backend operations thread pool of size 8
thehive | [info] o.j.d.Backend [|] Configuring total store cache size: 1024306634
thehive | [info] o.j.d.l.k.KCVSLog [|] Loaded unidentified ReadMarker start time 2021-03-25T07:15:54.467Z into org.janusgraph.diskstorage.log.kcvs.KCVSLog$MessagePuller@48b71fc5
thehive | [info] o.t.s.j.JanusDatabase [|] Full-text index is available (lucene:/opt/index) single node
thehive | [info] o.r.Reflections [|] Reflections took 23 ms to scan 1 urls, producing 46 keys and 267 values
thehive | [info] o.t.t.c.c.m.CortexSchemaDefinition [|] Loading model Job
thehive | [info] o.t.t.c.c.m.CortexSchemaDefinition [|] Loading model AnalyzerTemplate
thehive | [info] o.t.t.c.c.m.CortexSchemaDefinition [|] Loading model ObservableJob
thehive | [info] o.t.t.c.c.m.CortexSchemaDefinition [|] Loading model ReportObservable
thehive | [info] o.t.t.c.c.m.CortexSchemaDefinition [|] Loading model Action
thehive | [info] o.t.t.c.c.m.CortexSchemaDefinition [|] Loading model ActionContext
thehive | [info] o.t.s.m.Operations [|] thehive: Update graph: Add each tag to its Organisation's FreeTags taxonomy
thehive | [warn] o.t.t.m.TheHiveSchemaDefinition [|79b75365] Tag tae is not linked to any organisation
thehive | [warn] o.t.t.m.TheHiveSchemaDefinition [|79b75365] Tag misp:category="Persistence mechanism" is not linked to any organisation
Please use Markdown code-block syntax for the logs.
Sorry for the mistake, I have fixed the syntax.
After four days the migration to TheHive 4.1.1 completed, but:
[error] o.t.s.t.TraversalOps [0000007d|4f048f6b] Observable 663167008 doesn't comply with its schema, field dataType is missing:
thehive | v[663167008]
thehive | - _label = Observable
thehive | - _createdBy =
thehive | - _createdAt = Fri Mar 05 11:17:46 UTC 2021
thehive | - tlp = 3
thehive | - ioc = false
thehive | - ignoreSimilarity = false
thehive | - sighted = false
thehive | - message =
thehive | [error] o.t.s.t.TraversalOps [0000007d|4f048f6b] Observable 663167008 doesn't comply with its schema, field relatedId is missing:
thehive | v[663167008]
thehive | - _label = Observable
thehive | - _createdBy =
thehive | - _createdAt = Fri Mar 05 11:17:46 UTC 2021
thehive | - tlp = 3
thehive | - ioc = false
thehive | - ignoreSimilarity = false
thehive | - sighted = false
thehive | - message =
[error] o.t.s.t.TraversalOps [|] Task 623349880 doesn't comply with its schema, field relatedId is missing:
thehive | v[623349880]
thehive | - _label = Task
thehive | - _createdBy =
thehive | - _createdAt = Fri Mar 05 15:08:33 UTC 2021
thehive | - description =
Furthermore, on the login page and after logging in, we received this message:
![error](https://user-images.githubusercontent.com/45754825/112798858-ccfe8780-906d-11eb-8b82-4d58e6714220.PNG)
Then, after restarting the container we got these logs:
thehive | [error] c.d.d.c.ControlConnection [|] [Control connection] Cannot connect to any host, scheduling retry in 8000 milliseconds
thehive | [info] o.j.d.u.BackendOperation [0000028d|2fbf6538] Temporary exception during backend operation [EdgeStoreQuery]. Attempting backoff retry.
thehive | org.janusgraph.diskstorage.TemporaryBackendException: Temporary failure in storage backend
thehive | at io.vavr.API$Match$Case0.apply(API.java:3174)
thehive | at io.vavr.API$Match.of(API.java:3137)
thehive | at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.lambda$static$0(CQLKeyColumnValueStore.java:123)
thehive | at io.vavr.control.Try.getOrElseThrow(Try.java:671)
thehive | at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.getSlice(CQLKeyColumnValueStore.java:290)
thehive | at org.janusgraph.diskstorage.keycolumnvalue.KCVSProxy.getSlice(KCVSProxy.java:76)
thehive | at org.janusgraph.diskstorage.keycolumnvalue.cache.ExpirationKCVSCache.lambda$getSlice$1(ExpirationKCVSCache.java:91)
thehive | at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4876)
thehive | at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3529)
thehive | at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2278)
thehive | Caused by: java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)
thehive | at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:553)
thehive | at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:514)
thehive | at io.vavr.control.Try.of(Try.java:62)
thehive | at io.vavr.concurrent.FutureImpl.lambda$run$2(FutureImpl.java:199)
thehive | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
thehive | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
thehive | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
thehive | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
thehive | at java.lang.Thread.run(Thread.java:748)
thehive | Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)
thehive | at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:283)
thehive | at com.datastax.driver.core.RequestHandler.access$1200(RequestHandler.java:61)
thehive | at com.datastax.driver.core.RequestHandler$SpeculativeExecution.findNextHostAndQuery(RequestHandler.java:375)
thehive | at com.datastax.driver.core.RequestHandler.startNewExecution(RequestHandler.java:139)
thehive | at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:121)
thehive | at com.datastax.driver.core.SessionManager.execute(SessionManager.java:705)
thehive | at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:142)
thehive | at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.getSlice(CQLKeyColumnValueStore.java:282)
thehive | at org.janusgraph.diskstorage.keycolumnvalue.KCVSProxy.getSlice(KCVSProxy.java:76)
thehive | at org.janusgraph.diskstorage.keycolumnvalue.cache.ExpirationKCVSCache.lambda$getSlice$1(ExpirationKCVSCache.java:91)
thehive | [warn] o.t.s.u.Retry [0000021e|46cc9d06] An error occurs (Could not execute operation due to backend exception), retrying (1)
thehive | [warn] o.t.s.u.Retry [0000021e|46cc9d06] An error occurs (null), retrying (2)
thehive | [error] o.t.s.ErrorHandler [|] Internal error
thehive | java.util.NoSuchElementException: null
thehive | at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs$$anon$1.$anonfun$next$1(TraversalOps.scala:69)
thehive | at scala.Option.getOrElse(Option.scala:189)
thehive | at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs$$anon$1.next(TraversalOps.scala:69)
thehive | at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs.getCount(TraversalOps.scala:100)
thehive | at org.thp.thehive.controllers.v1.AdminCtrl.$anonfun$indexStatus$3(AdminCtrl.scala:75)
thehive | at scala.collection.immutable.List.map(List.scala:297)
thehive | at org.thp.thehive.controllers.v1.AdminCtrl.$anonfun$indexStatus$2(AdminCtrl.scala:74)
thehive | at org.thp.scalligraph.controllers.Entrypoint$EntryPointBuilder.$anonfun$authPermittedRoTransaction$2(Entrypoint.scala:137)
thehive | at org.thp.scalligraph.janus.JanusDatabase.roTransaction(JanusDatabase.scala:193)
thehive | at org.thp.scalligraph.controllers.Entrypoint$EntryPointBuilder.$anonfun$authPermittedRoTransaction$1(Entrypoint.scala:137)
thehive | [warn] o.t.s.ErrorHandler [|] GET /api/v1/admin/index/status returned 500
thehive | java.util.NoSuchElementException: null
thehive | at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs$$anon$1.$anonfun$next$1(TraversalOps.scala:69)
thehive | at scala.Option.getOrElse(Option.scala:189)
thehive | at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs$$anon$1.next(TraversalOps.scala:69)
thehive | at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs.getCount(TraversalOps.scala:100)
thehive | at org.thp.thehive.controllers.v1.AdminCtrl.$anonfun$indexStatus$3(AdminCtrl.scala:75)
thehive | at scala.collection.immutable.List.map(List.scala:297)
thehive | at org.thp.thehive.controllers.v1.AdminCtrl.$anonfun$indexStatus$2(AdminCtrl.scala:74)
thehive | at org.thp.scalligraph.controllers.Entrypoint$EntryPointBuilder.$anonfun$authPermittedRoTransaction$2(Entrypoint.scala:137)
thehive | at org.thp.scalligraph.janus.JanusDatabase.roTransaction(JanusDatabase.scala:193)
thehive | at org.thp.scalligraph.controllers.Entrypoint$EntryPointBuilder.$anonfun$authPermittedRoTransaction$1(Entrypoint.scala:137)
thehive | [warn] o.t.s.u.Retry [0000021d|1c4965fc] An error occurs (Could not execute operation due to backend exception), retrying (1)
thehive | [warn] o.t.s.u.Retry [0000021d|1c4965fc] An error occurs (null), retrying (2)
thehive | [info] o.j.d.u.BackendOperation [|767a4755] Temporary exception during backend operation [VertexIndexQuery]. Attempting backoff retry.
thehive | org.janusgraph.diskstorage.TemporaryBackendException: Temporary failure in storage backend
thehive | at io.vavr.API$Match$Case0.apply(API.java:3174)
thehive | at io.vavr.API$Match.of(API.java:3137)
thehive | at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.lambda$static$0(CQLKeyColumnValueStore.java:123)
thehive | at io.vavr.control.Try.getOrElseThrow(Try.java:671)
thehive | at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.getSlice(CQLKeyColumnValueStore.java:290)
thehive | at org.janusgraph.diskstorage.keycolumnvalue.KCVSProxy.getSlice(KCVSProxy.java:76)
thehive | at org.janusgraph.diskstorage.keycolumnvalue.cache.ExpirationKCVSCache.lambda$getSlice$1(ExpirationKCVSCache.java:91)
thehive | at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4876)
thehive | at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3529)
thehive | at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2278)
and the web GUI is not reachable.
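The `NoHostAvailableException` and the repeated "Attempting backoff retry" messages show the pattern JanusGraph uses for transient backend failures: retry with an increasing delay, and only surface the error once retries are exhausted (or, as here, once the driver cannot reach Cassandra at all). A minimal sketch of that pattern, purely illustrative and not JanusGraph's actual implementation:

```python
import time

def with_backoff(op, max_attempts=5, base_delay=0.5, sleep=time.sleep):
    """Retry `op` on transient connection errors, doubling the delay each time."""
    for attempt in range(max_attempts):
        try:
            return op()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # retries exhausted: fail like "Cannot connect to any host"
            sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
```

When the host is genuinely unreachable (as in this log), no amount of retrying helps; confirming that Cassandra is up and listening from the TheHive container is usually the first check.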
This status is completely unexpected: the expected database schema version is 67 and you get 19... I don't really know how you got there.
Please don't say that... It seems I'm not lucky. I'm trying to understand what the root cause could be.
The schema version counts the database updates introduced by releases over time. Every time you install a new version, TheHive checks the database schema and updates it to make sure everything is consistent. For example, when we introduced taxonomy support, we created new database object types, etc.
So I don't even know how your db schema version is 19 here.
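The version-gated update described above can be sketched as an ordered list of operations, where only the operations newer than the stored version are applied and the stored version is advanced after each one. This is a schematic in Python, not TheHive's actual Scala code (which lives in `TheHiveSchemaDefinition`); the names are illustrative:

```python
def update_schema(db, operations):
    """Apply every schema operation newer than the stored version.

    `db` is any dict-like store holding "version"; `operations` is an
    ordered list of (version, migration_fn) pairs.
    """
    current = db.get("version", 0)
    for version, migrate in operations:
        if version > current:
            migrate(db)
            db["version"] = version  # record progress so a crash can resume here
    return db["version"]
```

Under this model, a database stuck at version 19 would have every operation from 20 up to 67 applied on the next successful start, which is why a crash mid-update (as in the logs above) leaves the schema partially migrated.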
That's a good question. At the moment I have one instance of TheHive 4.0.5 with Cassandra, and I configured several custom fields, several case templates (that I wanted to share with you) and a company taxonomy. On top of that there is the integration with Cortex 3.1.1, MISP 2.4.140 and OpenCTI 4.3.2. I don't know if this info can be useful somehow.
Here is another piece of info:
thehive | [error] o.t.s.u.Retry [|3fa7941d] uncaught error, not retrying
thehive | java.lang.IllegalStateException: Cannot access element because its enclosing transaction is closed and unbound
thehive | at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.getNextTx(StandardJanusGraphTx.java:305)
thehive | at org.janusgraph.graphdb.vertices.AbstractVertex.it(AbstractVertex.java:53)
thehive | at org.janusgraph.graphdb.vertices.AbstractVertex.it(AbstractVertex.java:37)
thehive | at org.janusgraph.graphdb.internal.AbstractElement.isLoaded(AbstractElement.java:136)
thehive | at org.janusgraph.graphdb.query.vertex.BasicVertexCentricQueryBuilder.useSimpleQueryProcessor(BasicVertexCentricQueryBuilder.java:281)
thehive | at org.janusgraph.graphdb.query.vertex.BasicVertexCentricQueryBuilder.executeIndividualVertices(BasicVertexCentricQueryBuilder.java:341)
thehive | at org.janusgraph.graphdb.query.vertex.BasicVertexCentricQueryBuilder.executeVertices(BasicVertexCentricQueryBuilder.java:335)
thehive | at org.janusgraph.graphdb.query.vertex.BasicVertexCentricQueryBuilder$VertexConstructor.getResult(BasicVertexCentricQueryBuilder.java:241)
thehive | at org.janusgraph.graphdb.query.vertex.BasicVertexCentricQueryBuilder$VertexConstructor.getResult(BasicVertexCentricQueryBuilder.java:237)
thehive | at org.janusgraph.graphdb.query.vertex.VertexCentricQueryBuilder.execute(VertexCentricQueryBuilder.java:86)
thehive | [error] o.t.s.m.Database [|3fa7941d] Exception raised, rollback (Cannot access element because its enclosing transaction is closed and unbound)
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model ShareObservable
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model KeyValue
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model RoleOrganisation
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model Task
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model ObservableKeyValue
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model DashboardUser
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model AlertTag
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model ObservableData
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model Log
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model ReportTag
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model Organisation
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model ObservableType
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model CaseImpactStatus
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model Case
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model UserAttachment
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model OrganisationConfig
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model ShareTask
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model Config
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model TaskLog
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model CaseTag
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model TaxonomyTag
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model CaseTemplateTag
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model AlertOrganisation
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model UserRole
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model UserConfig
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model Page
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model LogAttachment
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model CaseCustomField
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model ObservableReportTag
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model AlertObservable
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model OrganisationDashboard
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model CaseUser
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model ImpactStatus
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model CaseTemplateTask
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model PatternPattern
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model OrganisationOrganisation
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model Dashboard
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model ProcedurePattern
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model ShareCase
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model Share
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model ObservableAttachment
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model User
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model Tag
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model Attachment
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model MergedFrom
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model Profile
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model ObservableTag
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model ObservableObservableType
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model CaseCaseTemplate
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model Alert
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model Procedure
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model RoleProfile
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model OrganisationTaxonomy
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model TaskUser
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model CaseTemplateCustomField
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model AuditUser
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model AlertCaseTemplate
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model CaseTemplateOrganisation
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model Data
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model Observable
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model CaseResolutionStatus
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model CaseProcedure
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model Taxonomy
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model ResolutionStatus
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model Role
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model AuditContext
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model AlertCase
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model AlertCustomField
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model ShareProfile
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model Audit
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model CustomField
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model OrganisationShare
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model Pattern
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model OrganisationPage
thehive | [info] o.t.t.m.TheHiveSchemaDefinition [|6fc9bcbe] Loading model CaseTemplate
This log was produced today during another migration attempt from TheHive 4.0.5 to 4.1.2.
Hello @vxsh4d0w. We are going to troubleshoot this, as we want to understand how you got this unexpected behavior. We are on it.
Feels like I hit the same, or at least a very similar, issue while trying to update TheHive from 4.0.5 to 4.1.3 (Debian 10, non-Docker environment, separate full ES cluster for the index). The migration starts fine, and during or after conversion of the observables the process crashes with the same IllegalStateException.
2021-04-13 12:53:47,598 [INFO] from org.thp.scalligraph.models.Operations in application-akka.actor.default-dispatcher-17 [|] *** UPDATE SCHEMA OF thehive (48): Update graph in progress (145000): Add dataType, tags, data, relatedId and organisationIds data in observables
2021-04-13 12:53:47,975 [INFO] from org.thp.scalligraph.models.Operations in application-akka.actor.default-dispatcher-17 [|] *** UPDATE SCHEMA OF thehive (48): Update graph in progress (145100): Add dataType, tags, data, relatedId and organisationIds data in observables
2021-04-13 12:53:48,345 [INFO] from org.thp.scalligraph.models.Operations in application-akka.actor.default-dispatcher-17 [|] *** UPDATE SCHEMA OF thehive (48): Update graph in progress (145200): Add dataType, tags, data, relatedId and organisationIds data in observables
2021-04-13 12:53:48,653 [ERROR] from org.thp.scalligraph.utils.Retry in application-akka.actor.default-dispatcher-17 [|1372f71d] uncaught error, not retrying
java.lang.IllegalStateException: Cannot access element because its enclosing transaction is closed and unbound
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.getNextTx(StandardJanusGraphTx.java:305)
at org.janusgraph.graphdb.vertices.AbstractVertex.it(AbstractVertex.java:53)
at org.janusgraph.graphdb.vertices.AbstractVertex.it(AbstractVertex.java:37)
at org.janusgraph.graphdb.internal.AbstractElement.isLoaded(AbstractElement.java:136)
at org.janusgraph.graphdb.types.vertices.JanusGraphSchemaVertex.name(JanusGraphSchemaVertex.java:49)
at org.janusgraph.graphdb.vertices.AbstractVertex.label(AbstractVertex.java:121)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.HasContainer.testLabel(HasContainer.java:111)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.HasContainer.test(HasContainer.java:82)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.HasContainer.testAll(HasContainer.java:181)
at org.apache.tinkerpop.gremlin.process.traversal.step.filter.HasStep.filter(HasStep.java:50)
at org.apache.tinkerpop.gremlin.process.traversal.step.filter.FilterStep.processNextStart(FilterStep.java:38)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:143)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.ExpandableStepIterator.next(ExpandableStepIterator.java:50)
at org.apache.tinkerpop.gremlin.process.traversal.step.filter.FilterStep.processNextStart(FilterStep.java:37)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:143)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.ExpandableStepIterator.next(ExpandableStepIterator.java:50)
at org.apache.tinkerpop.gremlin.process.traversal.step.map.MapStep.processNextStart(MapStep.java:36)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:143)
at org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal.hasNext(DefaultTraversal.java:197)
at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:43)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
at org.thp.scalligraph.traversal.TraversalOps$TraversalOpsDefs.foreach(TraversalOps.scala:97)
at org.thp.thehive.models.TheHiveSchemaDefinition.$anonfun$operations$93(TheHiveSchemaDefinition.scala:334)
at org.thp.scalligraph.models.Operations.$anonfun$execute$5(Operation.scala:97)
at org.thp.scalligraph.janus.JanusDatabase.$anonfun$tryTransaction$7(JanusDatabase.scala:241)
at scala.util.Try$.apply(Try.scala:213)
at org.thp.scalligraph.janus.JanusDatabase.$anonfun$tryTransaction$6(JanusDatabase.scala:241)
at scala.util.Try$.apply(Try.scala:213)
at org.thp.scalligraph.utils.DelayRetry.withTry(Retry.scala:93)
at org.thp.scalligraph.janus.JanusDatabase.tryTransaction(JanusDatabase.scala:238)
at org.thp.scalligraph.models.Operations.$anonfun$execute$4(Operation.scala:96)
at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:196)
at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:194)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:199)
at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:192)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1431)
at org.thp.scalligraph.models.Operations.$anonfun$execute$2(Operation.scala:93)
at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
at scala.collection.immutable.List.foldLeft(List.scala:91)
at org.thp.scalligraph.models.Operations.execute(Operation.scala:67)
at org.thp.scalligraph.models.UpdatableSchema.update(Schema.scala:26)
at org.thp.scalligraph.models.UpdatableSchema.update$(Schema.scala:25)
at org.thp.thehive.models.TheHiveSchemaDefinition.update(TheHiveSchemaDefinition.scala:27)
at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$5(JanusDatabaseProvider.scala:124)
at org.thp.scalligraph.package$RichSeq.$anonfun$toTry$3(package.scala:17)
at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:196)
at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:194)
at scala.collection.immutable.Set$Set2.foreach(Set.scala:181)
at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:199)
at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:192)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:108)
at org.thp.scalligraph.package$RichSeq.toTry(package.scala:16)
at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$3(JanusDatabaseProvider.scala:124)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at org.thp.scalligraph.ContextPropagatingDispatcher$$anon$1.$anonfun$execute$2(ContextPropagatingDisptacher.scala:57)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.thp.scalligraph.DiagnosticContext$$anon$2.withContext(ContextPropagatingDisptacher.scala:77)
at org.thp.scalligraph.ContextPropagatingDispatcher$$anon$1.$anonfun$execute$1(ContextPropagatingDisptacher.scala:57)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:48)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
2021-04-13 12:53:48,653 [ERROR] from org.thp.scalligraph.models.Database in application-akka.actor.default-dispatcher-17 [|1372f71d] Exception raised, rollback (Cannot access element because its enclosing transaction is closed and unbound)
2021-04-13 12:53:48,765 [INFO] from org.janusgraph.diskstorage.util.BackendOperation in application-akka.actor.default-dispatcher-17 [|3f6770ed] Temporary exception during backend operation [getConfiguration]. Attempting backoff retry.
org.janusgraph.diskstorage.TemporaryBackendException: Temporary failure in storage backend
at io.vavr.API$Match$Case0.apply(API.java:3174)
at io.vavr.API$Match.of(API.java:3137)
at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.lambda$static$0(CQLKeyColumnValueStore.java:123)
at io.vavr.control.Try.getOrElseThrow(Try.java:671)
at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.getSlice(CQLKeyColumnValueStore.java:290)
at org.janusgraph.diskstorage.keycolumnvalue.KCVSProxy.getSlice(KCVSProxy.java:76)
at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration$1.call(KCVSConfiguration.java:97)
at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration$1.call(KCVSConfiguration.java:94)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:147)
at org.janusgraph.diskstorage.util.BackendOperation$1.call(BackendOperation.java:161)
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:68)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:54)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:158)
at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration.get(KCVSConfiguration.java:94)
at org.janusgraph.graphdb.tinkerpop.JanusGraphVariables.get(JanusGraphVariables.java:46)
at org.thp.scalligraph.models.BaseDatabase.$anonfun$version$1(Database.scala:105)
at org.thp.scalligraph.models.BaseDatabase.$anonfun$version$1$adapted(Database.scala:103)
at org.thp.scalligraph.janus.JanusDatabase.roTransaction(JanusDatabase.scala:182)
at org.thp.scalligraph.models.BaseDatabase.version(Database.scala:103)
at org.thp.scalligraph.models.UpdatableSchema.update(Schema.scala:27)
at org.thp.scalligraph.models.UpdatableSchema.update$(Schema.scala:25)
at org.thp.thehive.models.TheHiveSchemaDefinition.update(TheHiveSchemaDefinition.scala:27)
at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$5(JanusDatabaseProvider.scala:124)
at org.thp.scalligraph.package$RichSeq.$anonfun$toTry$3(package.scala:17)
at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:196)
at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:194)
at scala.collection.immutable.Set$Set2.foreach(Set.scala:181)
at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:199)
at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:192)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:108)
at org.thp.scalligraph.package$RichSeq.toTry(package.scala:16)
at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$3(JanusDatabaseProvider.scala:124)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at org.thp.scalligraph.ContextPropagatingDispatcher$$anon$1.$anonfun$execute$2(ContextPropagatingDisptacher.scala:57)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.thp.scalligraph.DiagnosticContext$$anon$2.withContext(ContextPropagatingDisptacher.scala:77)
at org.thp.scalligraph.ContextPropagatingDispatcher$$anon$1.$anonfun$execute$1(ContextPropagatingDisptacher.scala:57)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:48)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Could not send request, session is closed
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:553)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:514)
at io.vavr.control.Try.of(Try.java:62)
at io.vavr.concurrent.FutureImpl.lambda$run$2(FutureImpl.java:199)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.IllegalStateException: Could not send request, session is closed
at com.datastax.driver.core.SessionManager.execute(SessionManager.java:701)
at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:142)
at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.getSlice(CQLKeyColumnValueStore.java:282)
at org.janusgraph.diskstorage.keycolumnvalue.KCVSProxy.getSlice(KCVSProxy.java:76)
at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration$1.call(KCVSConfiguration.java:97)
at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration$1.call(KCVSConfiguration.java:94)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:147)
at org.janusgraph.diskstorage.util.BackendOperation$1.call(BackendOperation.java:161)
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:68)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:54)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:158)
at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration.get(KCVSConfiguration.java:94)
at org.janusgraph.graphdb.tinkerpop.JanusGraphVariables.get(JanusGraphVariables.java:46)
at org.thp.scalligraph.models.BaseDatabase.$anonfun$version$1(Database.scala:105)
at org.thp.scalligraph.models.BaseDatabase.$anonfun$version$1$adapted(Database.scala:103)
at org.thp.scalligraph.janus.JanusDatabase.roTransaction(JanusDatabase.scala:182)
at org.thp.scalligraph.models.BaseDatabase.version(Database.scala:103)
at org.thp.scalligraph.models.UpdatableSchema.update(Schema.scala:27)
at org.thp.scalligraph.models.UpdatableSchema.update$(Schema.scala:25)
at org.thp.thehive.models.TheHiveSchemaDefinition.update(TheHiveSchemaDefinition.scala:27)
at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$5(JanusDatabaseProvider.scala:124)
at org.thp.scalligraph.package$RichSeq.$anonfun$toTry$3(package.scala:17)
at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:196)
at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:194)
at scala.collection.immutable.Set$Set2.foreach(Set.scala:181)
at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:199)
at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:192)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:108)
at org.thp.scalligraph.package$RichSeq.toTry(package.scala:16)
at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$3(JanusDatabaseProvider.scala:124)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at org.thp.scalligraph.ContextPropagatingDispatcher$$anon$1.$anonfun$execute$2(ContextPropagatingDisptacher.scala:57)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.thp.scalligraph.DiagnosticContext$$anon$2.withContext(ContextPropagatingDisptacher.scala:77)
at org.thp.scalligraph.ContextPropagatingDispatcher$$anon$1.$anonfun$execute$1(ContextPropagatingDisptacher.scala:57)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:48)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
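The `IllegalStateException: Cannot access element because its enclosing transaction is closed and unbound` is the classic symptom of holding a reference to a graph element after the transaction that produced it has been committed or rolled back. A toy illustration of the failure mode (this is not JanusGraph's API, just the shape of the problem):

```python
class Transaction:
    """Toy transaction: elements fetched from it die when it closes."""
    def __init__(self):
        self.open = True

    def element(self, value):
        tx = self
        class Element:
            def read(self):
                if not tx.open:
                    raise RuntimeError(
                        "Cannot access element: enclosing transaction is closed")
                return value
        return Element()

    def commit(self):
        self.open = False

# A long-running migration that batches its commits must re-fetch
# elements inside each new transaction instead of reusing old references.
```

This matches the pattern in the migration log: the "Update graph in progress (145000/145100/145200)" counters suggest batched work, and a reference carried across one of those batch boundaries would blow up exactly like this.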
I can also provide the full log of our upgrade run and the db.janusgraph part of our config, if needed. For whatever reason, GitHub doesn't let me attach it straight away though.
Fixed my local file attachment issue. :-) Here are the full log and the db config part: application_db_conf.txt application_log.txt
Got a similar issue with observables leading to high CPU consumption. This was introduced while upgrading from 4.1.4 to 4.1.5. We got thousands of errors in TheHive's log, as follows:
[error] o.t.s.t.TraversalOps [|30559c0a] Observable 666759240 doesn't comply with its schema, field relatedId is missing:
v[666759240]
- _label = Observable
- _createdBy = system@thehive.local
- _createdAt = Fri Mar 05 00:22:07 UTC 2021
- tlp = 0
- ioc = false
- sighted = false
- message =
Same for us: after upgrading from 4.1.4 to 4.1.5 we got thousands of errors in the TheHive log:
thehive | [error] o.t.s.t.TraversalOps [|5ddf12ff] Observable 97345712 doesn't comply with its schema, field dataType is missing:
thehive | v[97345712]
thehive | - _label = Observable
thehive | - _createdBy = xxxxxxxxxxx@xxxxxx
thehive | - _createdAt = Fri Jan 22 13:21:39 UTC 2021
thehive | - tlp = 2
thehive | - ioc = false
thehive | - sighted = false
Hi, we encounter the same issue: old observables are listed in old cases, but the data of the observables is missing. When I try to add a new observable, I get an error. I tried to reindex and trigger an integrity check, and I got this in application.log:
2021-06-25 11:19:35,258 [ERROR] from org.thp.scalligraph.traversal.TraversalOps in pool-14-thread-1 [|560e605f] Observable 246587560 doesn't comply with its schema, field dataType is missing:
v[246587560]
- _label = Observable
- _createdBy = xxxx@xxxx.xx
- _createdAt = Mon May 10 15:58:10 CEST 2021
- tlp = 2
- ioc = false
- sighted = true
- message =
- ignoreSimilarity = false
- organisationIds = 24648
- relatedId = 123318424
Hi, we're getting the same log entries as above, but we are able to add new observables to cases without error. It only seems to affect old cases, but it does seem to degrade the performance of TheHive greatly.
2021-07-16 22:01:54,675 [ERROR] from org.thp.scalligraph.traversal.TraversalOps in application-akka.actor.default-dispatcher-38 [|477c1d3e] Observable 1247633528 doesn't comply with its schema, field relatedId is missing:
v[1247633528]
- _label = Observable
- _createdBy = system@thehive.local
- _createdAt = Mon Feb 08 09:26:31 CET 2021
- tlp = 0
- ioc = false
- sighted = false
- message = created on 2020-10-22, seen by 2 orgs, same IP as 260 other domains
Hi, we encounter the same issue on version 4.1.8. We get thousands of these errors in application.log:
Hi all, although @meelich and I can't really confirm it on our installation, based on the messages we see in the errors it feels a lot like the defective observables are all related to old alerts added via the MISP integration. Can someone confirm this observation?
Also quite odd: all of the observables seem to belong to really old alerts/cases which were deleted from TheHive some time ago. Consistent with this, we are not able to query any of the observable IDs mentioned in the error messages via the API.
Could it be that some stale/orphaned Observables remain in the database? How would we be able to verify this?
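Not an official tool, just a triage sketch: since the error messages quoted in this thread name each affected vertex ID and the missing field, a small Python script can pull those out of application.log in bulk, so the IDs can then be probed via the API as described above. The log format is assumed to match the excerpts posted here.

```python
import re
from collections import Counter

# Pattern matching the schema-compliance errors quoted in this thread, e.g.:
#   ... Observable 666759240 doesn't comply with its schema, field relatedId is missing:
ERROR_RE = re.compile(
    r"Observable (?P<id>\d+) doesn't comply with its schema, "
    r"field (?P<field>\w+) is missing"
)

def summarise_schema_errors(log_lines):
    """Collect the affected vertex IDs and count errors per missing field."""
    ids = set()
    missing_fields = Counter()
    for line in log_lines:
        match = ERROR_RE.search(line)
        if match:
            ids.add(int(match.group("id")))
            missing_fields[match.group("field")] += 1
    return ids, missing_fields

# Sample lines taken from this thread
sample = [
    "[error] o.t.s.t.TraversalOps [|30559c0a] Observable 666759240 "
    "doesn't comply with its schema, field relatedId is missing:",
    "[error] o.t.s.t.TraversalOps [|5ddf12ff] Observable 97345712 "
    "doesn't comply with its schema, field dataType is missing:",
]
ids, missing = summarise_schema_errors(sample)
print(sorted(ids))    # [97345712, 666759240]
print(dict(missing))  # {'relatedId': 1, 'dataType': 1}
```

If most of the collected IDs return 404 from the API, that would support the stale/orphaned-observable theory.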
Hi all, I'm not using MISP. I still encounter the same issue, even after upgrading to 4.1.9-1:
2021-07-28 14:18:50,517 [ERROR] from org.thp.scalligraph.traversal.TraversalOps in pool-14-thread-1 [|350d4f1d] Observable 204976280 doesn't comply with its schema, field dataType is missing:
v[204976280]
- _label = Observable
- _createdBy = xxx@xxx.xx
- _createdAt = Tue May 25 11:49:10 CEST 2021
- tlp = 2
- ioc = false
- sighted = true
- message =
- ignoreSimilarity = false
- organisationIds = 24648
- relatedId = 176208
And to add some info: when I try to add a tag somewhere, I get this error:
2021-07-28 14:35:09,222 [WARN] from org.thp.scalligraph.utils.Retry in application-akka.actor.default-dispatcher-13 [00000071|06a84105] An error occurs (org.janusgraph.core.SchemaViolationException: Value [#000000] is not an instance of the expected data type for property key [colour] and cannot be converted. Expected: class java.lang.Integer, found: class java.lang.String), retrying (5)
2021-07-28 14:35:10,311 [ERROR] from org.thp.scalligraph.utils.Retry in application-akka.actor.default-dispatcher-13 [00000071|39aea537] An error occurs
org.janusgraph.core.SchemaViolationException: Value [#000000] is not an instance of the expected data type for property key [colour] and cannot be converted. Expected: class java.lang.Integer, found: class java.lang.String
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.verifyAttribute(StandardJanusGraphTx.java:578)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.addProperty(StandardJanusGraphTx.java:769)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.addProperty(StandardJanusGraphTx.java:754)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.addProperty(StandardJanusGraphTx.java:750)
at org.janusgraph.graphdb.vertices.AbstractVertex.property(AbstractVertex.java:152)
at org.janusgraph.core.JanusGraphVertex.property(JanusGraphVertex.java:72)
at org.janusgraph.core.JanusGraphVertex.property(JanusGraphVertex.java:33)
at org.thp.scalligraph.models.SingleMapping.setProperty(Mapping.scala:189)
at org.thp.thehive.models.Tag$$anon$1.create(Tag.scala:10)
at org.thp.thehive.models.Tag$$anon$1.create(Tag.scala:10)
at org.thp.scalligraph.janus.JanusDatabase.createVertex(JanusDatabase.scala:456)
at org.thp.scalligraph.services.VertexSrv.createEntity(VertexSrv.scala:40)
at org.thp.thehive.services.TagSrv.$anonfun$createFreeTag$1(TagSrv.scala:74)
at scala.util.Success.flatMap(Try.scala:251)
at org.thp.thehive.services.TagSrv.createFreeTag(TagSrv.scala:73)
at org.thp.thehive.services.TagSrv.$anonfun$getOrCreate$3(TagSrv.scala:68)
at scala.util.Failure.orElse(Try.scala:224)
at org.thp.thehive.services.TagSrv.$anonfun$getOrCreate$2(TagSrv.scala:68)
at scala.Option.fold(Option.scala:251)
at org.thp.thehive.services.TagSrv.getOrCreate(TagSrv.scala:69)
at org.thp.thehive.services.CaseTemplateSrv.$anonfun$updateTags$1(CaseTemplateSrv.scala:96)
at org.thp.scalligraph.package$RichSeq.$anonfun$toTry$3(package.scala:17)
at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:196)
at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:194)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:141)
at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:199)
at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:192)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:108)
at org.thp.scalligraph.package$RichSeq.toTry(package.scala:16)
at org.thp.thehive.services.CaseTemplateSrv.updateTags(CaseTemplateSrv.scala:96)
at org.thp.thehive.controllers.v0.PublicCaseTemplate.$anonfun$publicProperties$8(CaseTemplateCtrl.scala:127)
at scala.util.Success.flatMap(Try.scala:251)
at org.thp.thehive.controllers.v0.PublicCaseTemplate.$anonfun$publicProperties$7(CaseTemplateCtrl.scala:127)
at org.thp.scalligraph.query.PropertyUpdater$$anonfun$apply$11$$anon$1.apply(PublicProperty.scala:121)
at org.thp.scalligraph.query.PropertyUpdater$$anonfun$apply$11$$anon$1.apply(PublicProperty.scala:118)
at org.thp.scalligraph.services.VertexSrv.$anonfun$update$4(VertexSrv.scala:66)
at org.thp.scalligraph.package$RichSeq.$anonfun$toTry$3(package.scala:17)
at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
at scala.collection.immutable.List.foldLeft(List.scala:91)
at org.thp.scalligraph.package$RichSeq.toTry(package.scala:16)
at org.thp.scalligraph.services.VertexSrv.$anonfun$update$2(VertexSrv.scala:66)
at scala.Option.fold(Option.scala:251)
at org.thp.scalligraph.services.VertexSrv.update(VertexSrv.scala:63)
at org.thp.thehive.services.CaseTemplateSrv.super$update(CaseTemplateSrv.scala:84)
at org.thp.thehive.services.CaseTemplateSrv.$anonfun$update$1(CaseTemplateSrv.scala:84)
at org.thp.thehive.services.AuditSrv.mergeAudits(AuditSrv.scala:84)
at org.thp.thehive.services.CaseTemplateSrv.update(CaseTemplateSrv.scala:84)
at org.thp.scalligraph.services.VertexSrv.update(VertexSrv.scala:52)
at org.thp.thehive.controllers.v0.CaseTemplateCtrl.$anonfun$update$2(CaseTemplateCtrl.scala:66)
at org.thp.scalligraph.controllers.Entrypoint$EntryPointBuilder.$anonfun$authTransaction$2(Entrypoint.scala:77)
at org.thp.scalligraph.janus.JanusDatabase.$anonfun$tryTransaction$7(JanusDatabase.scala:241)
at scala.util.Try$.apply(Try.scala:213)
at org.thp.scalligraph.janus.JanusDatabase.$anonfun$tryTransaction$6(JanusDatabase.scala:241)
at org.thp.scalligraph.utils.DelayRetry$$anonfun$withTry$1.$anonfun$applyOrElse$10(Retry.scala:97)
at org.thp.scalligraph.utils.DelayRetry.org$thp$scalligraph$utils$DelayRetry$$runSync(Retry.scala:110)
at org.thp.scalligraph.utils.DelayRetry.org$thp$scalligraph$utils$DelayRetry$$runSync(Retry.scala:116)
at org.thp.scalligraph.utils.DelayRetry.org$thp$scalligraph$utils$DelayRetry$$runSync(Retry.scala:116)
at org.thp.scalligraph.utils.DelayRetry.org$thp$scalligraph$utils$DelayRetry$$runSync(Retry.scala:116)
at org.thp.scalligraph.utils.DelayRetry.org$thp$scalligraph$utils$DelayRetry$$runSync(Retry.scala:116)
at org.thp.scalligraph.utils.DelayRetry$$anonfun$withTry$1.$anonfun$applyOrElse$9(Retry.scala:97)
at scala.util.Try$.apply(Try.scala:213)
at org.thp.scalligraph.utils.DelayRetry$$anonfun$withTry$1.applyOrElse(Retry.scala:97)
at org.thp.scalligraph.utils.DelayRetry$$anonfun$withTry$1.applyOrElse(Retry.scala:93)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
at scala.util.Failure.recoverWith(Try.scala:236)
at org.thp.scalligraph.utils.DelayRetry.withTry(Retry.scala:93)
at org.thp.scalligraph.janus.JanusDatabase.tryTransaction(JanusDatabase.scala:238)
at org.thp.scalligraph.controllers.Entrypoint$EntryPointBuilder.$anonfun$authTransaction$1(Entrypoint.scala:77)
at org.thp.scalligraph.controllers.Entrypoint$EntryPointBuilder.$anonfun$auth$1(Entrypoint.scala:86)
at org.thp.scalligraph.controllers.Entrypoint$EntryPointBuilder.$anonfun$asyncAuth$3(Entrypoint.scala:107)
at org.scalactic.Good.fold(Or.scala:1229)
at org.thp.scalligraph.controllers.Entrypoint$EntryPointBuilder.$anonfun$asyncAuth$2(Entrypoint.scala:107)
at org.thp.scalligraph.DiagnosticContext$.$anonfun$withRequest$2(ContextPropagatingDisptacher.scala:102)
at org.thp.scalligraph.DiagnosticContext$.saveDiagnosticContext(ContextPropagatingDisptacher.scala:108)
at org.thp.scalligraph.DiagnosticContext$.withRequest(ContextPropagatingDisptacher.scala:99)
at org.thp.scalligraph.controllers.Entrypoint$EntryPointBuilder.$anonfun$asyncAuth$1(Entrypoint.scala:107)
at org.thp.scalligraph.auth.SessionAuthSrv$$anon$1.$anonfun$invokeBlock$2(SessionAuthSrv.scala:100)
at scala.Option.fold(Option.scala:251)
at org.thp.scalligraph.auth.SessionAuthSrv$$anon$1.invokeBlock(SessionAuthSrv.scala:98)
at org.thp.scalligraph.auth.SessionAuthSrv$$anon$1.invokeBlock(SessionAuthSrv.scala:95)
at play.api.mvc.ActionBuilder$$anon$10.$anonfun$invokeBlock$2(Action.scala:408)
at play.api.mvc.ActionBuilderImpl.invokeBlock(Action.scala:441)
at play.api.mvc.ActionBuilderImpl.invokeBlock(Action.scala:439)
at play.api.mvc.ActionBuilder$$anon$10.invokeBlock(Action.scala:408)
at play.api.mvc.ActionBuilder$$anon$10.invokeBlock(Action.scala:404)
at play.api.mvc.ActionBuilder$$anon$9.apply(Action.scala:379)
at play.api.mvc.Action.$anonfun$apply$4(Action.scala:82)
at play.api.libs.streams.StrictAccumulator.$anonfun$mapFuture$4(Accumulator.scala:168)
at scala.util.Try$.apply(Try.scala:213)
at play.api.libs.streams.StrictAccumulator.$anonfun$mapFuture$3(Accumulator.scala:168)
at scala.Function1.$anonfun$andThen$1(Function1.scala:57)
at scala.Function1.$anonfun$andThen$1(Function1.scala:57)
at scala.Function1.$anonfun$andThen$1(Function1.scala:57)
at play.api.libs.streams.StrictAccumulator.run(Accumulator.scala:200)
at play.core.server.AkkaHttpServer.$anonfun$runAction$4(AkkaHttpServer.scala:418)
at akka.http.scaladsl.util.FastFuture$.strictTransform$1(FastFuture.scala:41)
at akka.http.scaladsl.util.FastFuture$.$anonfun$transformWith$3(FastFuture.scala:51)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at org.thp.scalligraph.ContextPropagatingDispatcher$$anon$1.$anonfun$execute$2(ContextPropagatingDisptacher.scala:57)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.thp.scalligraph.DiagnosticContext$.$anonfun$withDiagnosticContext$2(ContextPropagatingDisptacher.scala:93)
at org.thp.scalligraph.DiagnosticContext$.saveDiagnosticContext(ContextPropagatingDisptacher.scala:108)
at org.thp.scalligraph.DiagnosticContext$.withDiagnosticContext(ContextPropagatingDisptacher.scala:91)
at org.thp.scalligraph.DiagnosticContext$$anon$2.withContext(ContextPropagatingDisptacher.scala:76)
at org.thp.scalligraph.ContextPropagatingDispatcher$$anon$1.$anonfun$execute$1(ContextPropagatingDisptacher.scala:57)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:48)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
2021-07-28 14:35:10,314 [WARN] from org.thp.scalligraph.ErrorHandler in application-akka.actor.default-dispatcher-13 [00000071|39aea537] PATCH /api/case/template/~28744 returned 400
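For what it's worth, the SchemaViolationException above means the tag colour reaches JanusGraph as the raw hex string "#000000" where the schema expects an integer. Assuming (not confirmed against TheHive's source) that the integer is simply the 24-bit RGB value of the hex code, the conversion the new schema expects would look like this sketch:

```python
def colour_to_int(colour: str) -> int:
    # Hypothetical helper: "#RRGGBB" -> 24-bit RGB integer,
    # e.g. "#000000" -> 0 (the value rejected in the log above)
    return int(colour.lstrip("#"), 16)

print(colour_to_int("#000000"))  # 0
print(colour_to_int("#ff0000"))  # 16711680
```

This would be consistent with a tag written under an older string-typed schema that was never migrated, which is why forcing a schema upgrade sounds relevant here.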
How can I force a schema upgrade?
Regards.
Hm, considering @boulabytes is not using MISP at all, the connection to mainly MISP-related events on our end might be a coincidence. :-/ I really hoped for a clue here.
One other side effect we noticed, due to our hourly-restart "workaround", is that sometimes TheHive gets stuck during shutdown and eats up all the CPU resources of its host until manually killed:
2021-08-01 06:00:09,580 [INFO] from akka.cluster.singleton.ClusterSingletonManager in application-akka.actor.default-dispatcher-50 [|] Singleton actor [akka://application/user/misp-actor-singleton/singleton] was terminated
2021-08-01 06:00:09,580 [INFO] from akka.cluster.singleton.ClusterSingletonManager in application-akka.actor.default-dispatcher-50 [|] Singleton actor [akka://application/user/integrityCheckSingletonManager/singleton] was terminated
2021-08-01 06:00:09,580 [INFO] from akka.cluster.singleton.ClusterSingletonManager in application-akka.actor.default-dispatcher-50 [|] Singleton actor [akka://application/user/flowSingletonManager/singleton] was terminated
2021-08-01 06:00:09,581 [INFO] from akka.cluster.singleton.ClusterSingletonManager in application-akka.actor.default-dispatcher-25 [|] Singleton actor [akka://application/system/singletonManagerJanusGraphClusterLeader/JanusGraphClusterLeader] was terminated
2021-08-01 06:00:09,582 [INFO] from akka.cluster.singleton.ClusterSingletonManager in application-akka.actor.default-dispatcher-24 [|] Singleton actor [akka://application/system/singletonManagerCaseNumberLeader/CaseNumberLeader] was terminated
2021-08-01 06:00:09,588 [INFO] from akka.cluster.Cluster in application-akka.actor.default-dispatcher-24 [|] Cluster Node [akka://application@127.0.0.1:33995] - Exiting completed
2021-08-01 06:00:09,590 [INFO] from akka.cluster.Cluster in application-akka.actor.default-dispatcher-24 [|] Cluster Node [akka://application@127.0.0.1:33995] - Shutting down...
2021-08-01 06:00:09,591 [INFO] from akka.cluster.Cluster in application-akka.actor.default-dispatcher-27 [|] Cluster Node [akka://application@127.0.0.1:33995] - Successfully shut down
2021-08-01 06:00:09,591 [INFO] from org.thp.thehive.ClusterListener in application-akka.actor.default-dispatcher-51 [|] Member is Removed: akka://application@127.0.0.1:33995 after Exiting
2021-08-01 06:00:09,595 [INFO] from play.core.server.AkkaHttpServer in application-akka.actor.internal-dispatcher-64 [|] Running provided shutdown stop hooks
...
Manual kill of the process here
...
2021-08-02 08:45:22,739 [INFO] from org.thp.scalligraph.ScalligraphModule in main [|] Loading scalligraph module
2021-08-02 08:45:24,520 [INFO] from akka.event.slf4j.Slf4jLogger in application-akka.actor.default-dispatcher-4 [|] Slf4jLogger started
2021-08-02 08:45:24,885 [INFO] from akka.remote.artery.tcp.ArteryTcpTransport in application-akka.actor.default-dispatcher-4 [|] Remoting started with transport [Artery tcp]; listening on address [akka://application@127.0.0.1:43421] with UID [6856795535282250772]
2021-08-02 08:45:24,902 [INFO] from akka.cluster.Cluster in application-akka.actor.default-dispatcher-4 [|] Cluster Node [akka://application@127.0.0.1:43421] - Starting up, Akka version [2.6.10] ...
@nadouani Hate to bump this bug, but since it is really starting to break our operations more than a bit, and is affecting multiple people, any advice on how we can help to investigate this further? Is there any way to check the database for consistency, orphaned objects or such, or any way to force a schema upgrade like @boulabytes suggested? I even tried to look into Scalligraph to check the database on my own, but felt too lost quite fast. x-)
Best regards, Sebastian
Same bug here.
Work Environment
Problem Description
After the migration from TheHive 4.0.5 to 4.1.1, the instance is not reachable.
Steps to Reproduce
After starting the instance, TheHive is not reachable, but the container is running.
Checking the logs with "docker-compose logs thehive", I got the following lines:
Attaching to thehive