nehanims opened 2 months ago
Factors to consider in both Event Driven and Request Response architectures:
https://www.youtube.com/watch?v=7fkS-18KBlw
Explore Kafka streams: https://www.youtube.com/watch?v=dsK-zd8pN_Q
Four types of EDA (commonly listed as event notification, event-carried state transfer, event sourcing, and CQRS):
Designing event driven system: https://developer.confluent.io/courses/event-design/intro/
Kafka is fast because of the following 2 factors (commonly cited: sequential disk I/O via the append-only log, and zero-copy data transfer):
Setting up a local Kafka cluster in KRaft mode is a great choice for your application: KRaft eliminates the need for ZooKeeper, making the cluster simpler to deploy and manage. Below is a step-by-step guide to configuring a Kafka cluster in KRaft mode locally:
Kafka requires Java to run. Make sure you have Java installed:
java -version
If Java is not installed, install the latest version of the JDK.
Download Kafka:
Download a Kafka binary release from the Apache Kafka downloads page. Alternatively, use wget to download Kafka directly:
wget https://downloads.apache.org/kafka/3.6.0/kafka_2.13-3.6.0.tgz
Extract the downloaded file:
tar -xzf kafka_2.13-3.6.0.tgz
cd kafka_2.13-3.6.0
Create a Data Directory:
Kafka stores its logs and metadata in a data directory. Create this directory:
mkdir -p /tmp/kraft-combined-logs
Configure server.properties:
Kafka includes a sample configuration file, server.properties. Open it and make the following changes for KRaft mode:
nano config/kraft/server.properties
Add or modify the following properties:
# Cluster ID (You can generate one with the UUID command below)
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
log.dirs=/tmp/kraft-combined-logs
Generate a Cluster ID:
KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
Format the storage directory:
bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties
Start the Kafka Broker:
bin/kafka-server-start.sh config/kraft/server.properties
Create a Topic:
In a new terminal, navigate to the Kafka directory and create a new topic:
bin/kafka-topics.sh --create --topic test-topic --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
List Topics:
Verify that the topic was created:
bin/kafka-topics.sh --list --bootstrap-server localhost:9092
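Topic creation and listing can also be scripted. A minimal sketch using kafka-python's admin client, assuming `pip install kafka-python` and the broker from the steps above running on localhost:9092:

```python
TOPIC = "test-topic"  # same topic as the CLI example above

try:
    from kafka.admin import KafkaAdminClient, NewTopic  # pip install kafka-python
    admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
    # Mirrors the CLI flags: --partitions 1 --replication-factor 1
    admin.create_topics([NewTopic(name=TOPIC, num_partitions=1, replication_factor=1)])
    print(admin.list_topics())  # the list should include the new topic
except Exception as exc:  # library missing, broker down, or topic already exists
    print(f"Skipped live admin calls: {exc}")
```

The guard means the script degrades gracefully when no broker is reachable instead of crashing.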
Install Kafka CLI tools (optional):
The bundled scripts (kafka-console-producer.sh and kafka-console-consumer.sh) are sufficient for testing.
Run a Producer:
Open a new terminal and run a producer:
bin/kafka-console-producer.sh --topic test-topic --bootstrap-server localhost:9092
Type some messages and hit Enter.
Run a Consumer:
In another terminal, run a consumer:
bin/kafka-console-consumer.sh --topic test-topic --from-beginning --bootstrap-server localhost:9092
You should see the messages produced earlier.
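The same round trip can be driven from Python with kafka-python (an assumption: `pip install kafka-python` and the broker above running on localhost:9092). Kafka message values are bytes on the wire, so strings must be encoded first:

```python
def encode(text: str) -> bytes:
    """Kafka message values are raw bytes; encode strings as UTF-8."""
    return text.encode("utf-8")

try:
    from kafka import KafkaProducer  # pip install kafka-python
    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    # send() is asynchronous; .get() blocks until the broker acknowledges
    producer.send("test-topic", encode("hello from python")).get(timeout=10)
    producer.flush()
except Exception as exc:  # library missing or broker not running
    print(f"Skipped live send: {exc}")
```

Messages sent this way show up in the console consumer from the previous step.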
Kotlin Spring Boot Service:
Configure the Spring Kafka properties in your application.yml or application.properties:
spring:
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
Python Consumer Service:
Install the Kafka Python library:
pip install kafka-python
Use it in your Python code to consume messages from Kafka:
from kafka import KafkaConsumer

consumer = KafkaConsumer('test-topic', bootstrap_servers=['localhost:9092'])
for message in consumer:
    print(f"Received message: {message.value.decode('utf-8')}")
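If the services exchange JSON payloads rather than plain strings (an assumption about the message format), the decoding logic can be factored into a helper and passed to the consumer as a value_deserializer:

```python
import json

def decode_json(raw: bytes) -> dict:
    """Deserialize a Kafka message value from UTF-8 JSON bytes."""
    return json.loads(raw.decode("utf-8"))

# Plugged into the consumer above (requires kafka-python and a running broker):
# consumer = KafkaConsumer(
#     'test-topic',
#     bootstrap_servers=['localhost:9092'],
#     value_deserializer=decode_json,
#     auto_offset_reset='earliest',  # also read messages produced before startup
# )
```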
Simulate Uploads:
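One way to simulate an upload without the real service: produce a fake event from a short script. The `audio-uploads` topic name and the event fields below are illustrative assumptions, not part of the actual app:

```python
import json
import time
import uuid

def make_upload_event(filename: str) -> dict:
    """Build a hypothetical audio-upload event; the field names are illustrative."""
    return {
        "upload_id": str(uuid.uuid4()),
        "filename": filename,
        "timestamp": time.time(),
    }

try:
    from kafka import KafkaProducer  # pip install kafka-python
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda event: json.dumps(event).encode("utf-8"),
    )
    producer.send("audio-uploads", make_upload_event("sample.wav"))  # assumed topic name
    producer.flush()
except Exception as exc:  # library missing or broker not running
    print(f"Skipped live send: {exc}")
```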
Check Kafka Topics:
Validate Transcriptions and Metadata:
Log Monitoring:
Keep an eye on the Kafka logs for any issues:
tail -f logs/kafkaServer.out
Performance Tuning:
You now have a Kafka cluster running in KRaft mode locally, integrated with your application services for handling audio uploads, transcription, and metadata processing. This setup is fully open-source and should be sufficient for development and testing purposes.
Use the topics in this course to ask questions: https://www.udemy.com/course/spring-kafka-reactive/
ChatGPT discussion about:
Kafka with Spring Boot
Modularize app logic for better separation of concerns