InigoGastesi opened this issue 4 days ago
@InigoGastesi Thank you for trying out Zilla! There are a few things that could be causing the issue you are seeing.
You aren't using the `kafka-service` address that you set up. Try changing your bootstrap server to `kafka-service:29092` in your `zilla.yaml` file:
```yaml
# Connect to Kafka
south_kafka_client:
  type: kafka
  kind: client
  options:
    servers:
      - kafka-service:29092
  exit: south_tcp_client
```
Double-check that you have created the Kafka topics that you have defined in your config:
- `mqtt-sessions`
- `mqtt-retained`
- `mqtt-messages`
- `gps_data`
- `metocean_data`
and added your new topics to the list of bootstrapped topics:
```yaml
south_kafka_cache_server:
  type: kafka
  kind: cache_server
  options:
    bootstrap:
      - mqtt-messages
      - mqtt-retained
      - gps_data
      - metocean_data
  exit: south_kafka_client
```
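To script the topic-creation step, one option is to generate the `kafka-topics.sh` commands to run inside the Kafka pod. This is a sketch: the bootstrap address, partition counts, and the assumption that the MQTT state topics should be log-compacted are mine, not from this thread, so verify them against the Zilla docs for your setup.

```python
# Hypothetical helper: build kafka-topics.sh commands for the topics in zilla.yaml.
# mqtt-sessions and mqtt-retained hold MQTT state, so they are assumed here to
# need log compaction; the data topics use the broker defaults.
TOPICS = {
    "mqtt-sessions": "--config cleanup.policy=compact",
    "mqtt-retained": "--config cleanup.policy=compact",
    "mqtt-messages": "",
    "gps_data": "",
    "metocean_data": "",
}

def topic_commands(bootstrap="kafka-service:9092"):
    """Return one create command per topic, idempotent via --if-not-exists."""
    cmds = []
    for name, extra in TOPICS.items():
        cmd = (f"kafka-topics.sh --bootstrap-server {bootstrap} "
               f"--create --if-not-exists --topic {name} {extra}").strip()
        cmds.append(cmd)
    return cmds

for c in topic_commands():
    print(c)
```

You could run the printed commands with `kubectl exec` against the Kafka pod after each restart, or bake them into an init job so the topics always exist before Zilla bootstraps them.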
You can leave out the route with the `:authority: 192.168.18.107:7114` header, since that IP might be different in the pod. Routing by protocol is usually enough unless you need more specific address-based routing.
```yaml
# HTTP server to handle HTTP connections
north_http_server:
  type: http
  kind: server
  routes:
    - when:
        - headers:
            ':scheme': http
      exit: north_http_kafka_mapping
```
I managed to solve the problem by changing two things in the Kafka configuration. The main problem was in the listeners configuration, specifically in the EXTERNAL section. After adjusting the following lines, the problem was solved:
```yaml
- name: KAFKA_CFG_LISTENERS
  value: "INTERNAL://:9092,CONTROLLER://:9093,EXTERNAL://0.0.0.0:9094"
- name: KAFKA_CFG_ADVERTISED_LISTENERS
  value: "INTERNAL://kafka-service:9092,EXTERNAL://kafka-service:9094"
```
And I changed this configuration in `zilla.yaml`:
```yaml
south_kafka_client:
  type: kafka
  kind: client
  options:
    servers:
      - kafka-service.optimar.svc.cluster.local:9094
  exit: south_tcp_client
```
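When debugging advertised-listener problems like this, a quick TCP reachability check from a pod in the cluster can confirm the fix. A minimal sketch; the hostnames and ports come from the config above and only resolve in-cluster:

```python
import socket

def check_listener(host, port, timeout=3.0):
    """Return True if a plain TCP connection to the listener succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, refusal, and timeout
        return False

if __name__ == "__main__":
    # Hostnames/ports taken from the advertised-listener config above.
    for host, port in [("kafka-service", 9092),
                       ("kafka-service.optimar.svc.cluster.local", 9094)]:
        print(f"{host}:{port} reachable: {check_listener(host, port, timeout=1.0)}")
```

A successful TCP connect only proves the listener is reachable; the advertised hostname must also match what clients can resolve, which is exactly what the `KAFKA_CFG_ADVERTISED_LISTENERS` change fixes.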
Thanks for your answer :)
I have another question. When Zilla is connected to Kafka it works perfectly, but if I restart the Kafka pod, Zilla gets stuck. Why is that? Also, is there any way to have data persistence in Zilla until the connection to Kafka comes back?
@InigoGastesi, can you double-check that you are recreating all of the topics correctly after restarting the Kafka pod?
You were right, I wasn't creating all the topics. I have another question: if the connection to Kafka is lost and an IoT device continues sending data to the Zilla proxy, will all the data accumulated until the connection returns be sent to Kafka once it is restored?
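This thread does not settle how much Zilla itself buffers across a Kafka outage, so one defensive option is to buffer unsent messages in the device application and drain them once publishing succeeds again. A minimal sketch: `publish_fn` is a hypothetical stand-in for, e.g., a paho-mqtt publish wrapper that raises `ConnectionError` while the broker side is down.

```python
from collections import deque

class BufferedPublisher:
    """Queues messages locally and drains the backlog in FIFO order once publishing succeeds."""

    def __init__(self, publish_fn, maxlen=10000):
        self.publish_fn = publish_fn        # hypothetical: raises ConnectionError on failure
        self.buffer = deque(maxlen=maxlen)  # bounded: oldest messages drop when full

    def publish(self, topic, payload):
        self.buffer.append((topic, payload))
        self.flush()

    def flush(self):
        # Send oldest first; stop at the first failure and keep the rest buffered.
        while self.buffer:
            topic, payload = self.buffer[0]
            try:
                self.publish_fn(topic, payload)
            except ConnectionError:
                break
            self.buffer.popleft()
```

Depending on the MQTT client, QoS 1 with a persistent session may already queue some in-flight messages, but an explicit application-level buffer gives you control over capacity and ordering during longer outages.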
**Describe the bug**
I am using Zilla in a Kubernetes environment, installed via Helm. My goal is to use Zilla to connect MQTT messages to Kafka. I have Kafka running in the same Kubernetes cluster, but when I send an MQTT message to Zilla, the process gets stuck at the `publish.multiple` command from the Paho library.
**To Reproduce**
My Kafka deployment configuration:

My Zilla producer:
**Expected behavior**
The expected behavior is either to return an error when sending messages or for the messages to be sent successfully; the process should not get stuck.
**Zilla Environment:**
Describe a k8s pod:

Attach the `zilla.yaml` config file:

**Client Environment:**
The client environment is a pod with Python inside. Nothing special.