huntc closed this issue 6 years ago.
These messages aren't related to Kafka specifically... they just happen to be concurrently logged. I'll update the issue title accordingly.
To reproduce this, you need to create a chirp sometime before shutting down. Just starting runAll
and immediately stopping won't do it.
This actually happens in the HelloWorld example as well (no chirp needed 🐤 ).
Once I enabled Kafka it no longer occurred, and I was able to hot reload (though I hit other issues):
[info] Loading global plugins from /Users/juliajacobs/.sbt/0.13/plugins
[info] Loading project definition from /Users/juliajacobs/IdeaProjects/activator-lagom-java-chirper-master/project
[info] Set current project to activator-lagom-java-chirper-master (in build file:/Users/juliajacobs/IdeaProjects/activator-lagom-java-chirper-master/)
> runAll
[info] Starting Kafka
[info] Starting Cassandra
.SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
................
[info] Cassandra server running at 127.0.0.1:4000
[info] Service locator is running at http://localhost:8000
[info] Service gateway is running at http://localhost:9000
[warn] c.l.l.i.p.c.ServiceLocatorSessionProvider - Could not find Cassandra contact points, due to: ServiceLocator is not bound
[warn] c.l.l.i.p.c.ServiceLocatorSessionProvider - Could not find Cassandra contact points, due to: ServiceLocator is not bound
[warn] c.l.l.i.p.c.ServiceLocatorSessionProvider - Could not find Cassandra contact points, due to: ServiceLocator is not bound
[warn] a.p.c.j.CassandraJournal - Failed to connect to Cassandra and initialize. It will be retried on demand. Caused by: ServiceLocator is not bound
[info] Service activity-stream-impl listening for HTTP on 0:0:0:0:0:0:0:0:51855
[info] Service friend-impl listening for HTTP on 0:0:0:0:0:0:0:0:60399
[info] Service chirp-impl listening for HTTP on 0:0:0:0:0:0:0:0:54485
[info] Service front-end listening for HTTP on 0:0:0:0:0:0:0:0:57143
[info] Service load-test-impl listening for HTTP on 0:0:0:0:0:0:0:0:51796
[info] (Services started, press enter to stop and go back to the console...)
[error] activityservice - Exception in PathCallId{pathPattern='/api/activity/:userId/live'}
akka.pattern.CircuitBreaker$$anon$1: Circuit Breaker Timed out.
[info] Compiling 1 Java source to /Users/juliajacobs/IdeaProjects/activator-lagom-java-chirper-master/chirp-impl/target/scala-2.11/classes...
--- (RELOAD) ---
[error] a.a.OneForOneStrategy - Processor actor [Actor[akka://chirp-impl-application/user/StreamSupervisor-1/flow-10-1-unknown-operation#-246790301]] terminated abruptly
akka.stream.AbruptTerminationException: Processor actor [Actor[akka://chirp-impl-application/user/StreamSupervisor-1/flow-10-1-unknown-operation#-246790301]] terminated abruptly
[error] a.a.OneForOneStrategy - Processor actor [Actor[akka://chirp-impl-application/user/StreamSupervisor-1/flow-11-1-unknown-operation#-1045189735]] terminated abruptly
akka.stream.AbruptTerminationException: Processor actor [Actor[akka://chirp-impl-application/user/StreamSupervisor-1/flow-11-1-unknown-operation#-1045189735]] terminated abruptly
[error] a.a.OneForOneStrategy - Processor actor [Actor[akka://chirp-impl-application/user/StreamSupervisor-1/flow-9-1-unknown-operation#-1878967675]] terminated abruptly
akka.stream.AbruptTerminationException: Processor actor [Actor[akka://chirp-impl-application/user/StreamSupervisor-1/flow-9-1-unknown-operation#-1878967675]] terminated abruptly
@jewelsjacobs it's most likely coincidental that enabling Kafka made the errors go away. This app doesn't make use of Kafka at all. The "errors" noted originally in this issue happen because of timing inconsistencies during shutdown, but they are harmless.
Were you having problems hot reloading before enabling Kafka? If you can share the details in Gitter or on the mailing list, we can help figure out what the problem was.
The errors you're seeing now are also harmless. The nature of distributed microservices is that partial failure is "normal" and Lagom is designed to be able to recover. When different components of the system are starting asynchronously, and two of them need to communicate, one won't be able to know whether the other is available without trying to talk to it. If that fails, it handles the failure and tries again later. We don't want to suppress the errors entirely (after all, maybe it will just keep failing forever) so they are logged, but it's admittedly alarming to see these errors without explanation when just getting started. We will try to make these more comprehensible in the future.
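The "try, fail, log, retry later" idea described above can be sketched in plain Java. This is a hypothetical illustration, not Lagom's actual implementation: the helper and its parameters (`callWithRetry`, `maxAttempts`, `initialDelayMs`) are made up for the example, but it shows why a failed first contact with a still-starting dependency is logged rather than suppressed, and then retried with backoff.

```java
import java.util.concurrent.Callable;

// Hypothetical sketch of the retry-on-demand pattern described above.
// A call to a dependency that may not be up yet is retried with
// exponential backoff; each failure is logged rather than hidden.
public class RetryOnDemand {

    static <T> T callWithRetry(Callable<T> call, int maxAttempts, long initialDelayMs)
            throws Exception {
        long delay = initialDelayMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                // Surface the failure, much like Lagom's "Could not find
                // Cassandra contact points" warnings, instead of swallowing it.
                System.err.println("attempt " + attempt + " failed: " + e.getMessage());
                Thread.sleep(delay);
                delay *= 2; // back off before the next attempt
            }
        }
        throw last; // after maxAttempts, give up -- maybe it really is down
    }

    public static void main(String[] args) throws Exception {
        // Simulated dependency that only becomes available on the third call,
        // as happens when services start asynchronously during runAll.
        final int[] calls = {0};
        String result = callWithRetry(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("ServiceLocator is not bound");
            return "connected";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

The early warnings disappear on their own once the slower component finishes starting, which is why they are harmless during `runAll` even though they look alarming.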
This was resolved by updating to Lagom 1.4.
Kafka didn't take kindly to being shut down upon terminating with runAll: