Closed: ujwal-setlur closed this issue 4 years ago
It doesn't seem to depend on whether I set this.hasBuiltInBalancer = true; or not (see the sketch after the log below). This is what I see from Kafka:
kafka | [2019-12-07 22:11:07,580] INFO [GroupCoordinator 1001]: Preparing to rebalance group api-1 in state PreparingRebalance with old generation 4 (__consumer_offsets-28) (reason: Adding new member default-kafka-consumer-13c03448-b6ab-445b-a1cd-5fc0217ba42e with group instanceid None) (kafka.coordinator.group.GroupCoordinator)
kafka | [2019-12-07 22:11:19,162] INFO [GroupCoordinator 1001]: Member default-kafka-consumer-35adefdc-4171-44e4-928b-b173c8e77b05 in group api-1 has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
kafka | [2019-12-07 22:11:19,177] INFO [GroupCoordinator 1001]: Stabilized group api-1 generation 5 (__consumer_offsets-28) (kafka.coordinator.group.GroupCoordinator)
kafka | [2019-12-07 22:11:19,197] INFO [GroupCoordinator 1001]: Assignment received from leader for group api-1 for generation 5 (kafka.coordinator.group.GroupCoordinator)
kafka | [2019-12-07 22:11:49,238] INFO [GroupCoordinator 1001]: Member default-kafka-consumer-13c03448-b6ab-445b-a1cd-5fc0217ba42e in group api-1 has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
kafka | [2019-12-07 22:11:49,238] INFO [GroupCoordinator 1001]: Preparing to rebalance group api-1 in state PreparingRebalance with old generation 5 (__consumer_offsets-28) (reason: removing member default-kafka-consumer-13c03448-b6ab-445b-a1cd-5fc0217ba42e on heartbeat expiration) (kafka.coordinator.group.GroupCoordinator)
kafka | [2019-12-07 22:11:49,240] INFO [GroupCoordinator 1001]: Group api-1 with generation 6 is now empty (__consumer_offsets-28) (kafka.coordinator.group.GroupCoordinator)
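For reference, a rough sketch of how that flag could be forced by subclassing the Kafka transporter; this is only an illustration, and the class name, nodeID and connection string are placeholders rather than the project's actual code:

```js
// Sketch only: force hasBuiltInBalancer on the Kafka transporter by subclassing it.
// The broker address, nodeID and class name are placeholders.
const { ServiceBroker, Transporters } = require("moleculer");

class KafkaWithBalancerFlag extends Transporters.Kafka {
  constructor(opts) {
    super(opts);
    this.hasBuiltInBalancer = true; // the flag mentioned above
  }
}

const broker = new ServiceBroker({
  nodeID: "api",
  disableBalancer: true,
  transporter: new KafkaWithBalancerFlag("kafka://localhost:9092"),
});

broker.start();
```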
The AMQP transporter for RabbitMQ seems to be the most stable of them all! Looks like I will use RabbitMQ, but I will need a separate event store then. Oh well, moleculer is still awesome!
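For reference, switching to RabbitMQ is just a transporter connection-string change in the broker options; the URL below is a placeholder, not the actual setup:

```js
// moleculer.config.js - sketch of pointing the broker at RabbitMQ instead of Kafka.
// The AMQP URL is a placeholder; real credentials/host will differ.
module.exports = {
  nodeID: "api",
  transporter: "amqp://guest:guest@localhost:5672",
};
```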
So this is happening with other transporters also? Not only with Kafka? Can you create a repro repo?
This is happening with Kafka alone. I have other issues with the STAN transporter (my initial choice), but the AMQP transporter worked straight out of the box for me. I will try to create a repro repo either today or tomorrow.
@AndreMaz sorry for the delay, but here is a reproduction repo:
I think the problem is that Kafka stores the internal protocol messages (like the INFO packet) as well, and the new client receives the old packets from Kafka. We should check the topic settings in the Kafka transporter and configure them so that these packets are not retained.
@icebob I agree with your theory. Essentially, when the node comes back up, it is getting its own messages.
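One way to check that theory is to look at (and, for testing, shorten) the retention on the transporter topics so stale protocol packets expire instead of being replayed to a restarting node. A rough sketch with kafkajs; the topic name MOL-INFO and the broker address are assumptions for illustration, not necessarily what the transporter actually creates:

```js
// Sketch only: inspect and (for testing) shorten retention on a transporter topic.
// Topic name "MOL-INFO" and the broker address are placeholders.
const { Kafka, ConfigResourceTypes } = require("kafkajs");

async function checkRetention() {
  const kafka = new Kafka({ clientId: "retention-check", brokers: ["localhost:9092"] });
  const admin = kafka.admin();
  await admin.connect();

  // Show the current retention.ms of the topic
  const current = await admin.describeConfigs({
    includeSynonyms: false,
    resources: [{ type: ConfigResourceTypes.TOPIC, name: "MOL-INFO" }],
  });
  console.log(JSON.stringify(current, null, 2));

  // Drop retention so old packets expire quickly (test setting only)
  await admin.alterConfigs({
    validateOnly: false,
    resources: [
      {
        type: ConfigResourceTypes.TOPIC,
        name: "MOL-INFO",
        configEntries: [{ name: "retention.ms", value: "5000" }],
      },
    ],
  });

  await admin.disconnect();
}

checkRetention().catch(console.error);
```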
Fixed. Could you test it with npm i moleculerjs/moleculer#next?
@ujwal-setlur could you join our Discord chat? I would like to ask you about your project & experiences. Thanks in advance!
Sure, I think I joined a week or so ago
Joined
BTW, I will test the bug fix this week
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Current Behavior
I have set the nodeID in the configuration file and am using the Kafka transporter. When I initially start the service, everything is OK. I have turned off the load balancer, but the Kafka transporter says it doesn't have a built-in load balancer (really?), so the service broker's load balancer is turned back on. When I restart the service from the REPL console by issuing quit and then running npm run dev again, I get this error:
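For context, a minimal sketch of the broker options described above; the nodeID value and the Kafka broker address are placeholders, not the actual project settings:

```js
// moleculer.config.js - minimal sketch of the setup described above.
// nodeID and the Kafka broker address are placeholders, not the real project config.
module.exports = {
  nodeID: "api",
  transporter: "kafka://localhost:9092",
  disableBalancer: true, // turned off here, but the Kafka transporter reports no
                         // built-in balancer, so the broker's balancer is re-enabled
};
```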
Expected Behavior
Service should start up again.
Failure Information
Steps to Reproduce
Please provide detailed steps for reproducing the issue.
This doesn't happen with other transporters (AMQP, STAN).
Reproduce code snippet
Context
Please provide any relevant information about your setup. This is important in case the issue is not reproducible except for under certain conditions.
Failure Logs