kmg28801 / kafka-study


[실전 카프카 개발부터 운영까지] Chapter 2. Kafka Environment Setup #12

Open kmg28801 opened 1 year ago

seojeonghyeon commented 1 year ago

https://www.notion.so/seojeonghyeon0630/Chapter-2-Kafka-0f5011fb32f7459caf32f509cc19087d?pvs=4

seojeonghyeon commented 1 year ago

[실전 카프카 개발부터 운영까지] Chapter 2. Kafka Environment Setup

  1. Create a Kafka cluster on AWS
  2. Create a Kafka cluster on GKE using Kubernetes and a Helm chart
  3. Create a Kafka cluster on a local PC using Docker Compose

1. Creating a Kafka Cluster on AWS


(Screenshot 2023-06-26 22:15:08)

(Screenshot 2023-06-26 22:13:51)

*When creating the instance, note that the Amazon Linux 2023 AMI does not include amazon-linux-extras.

//Amazon Linux 2023 AMI: amazon-linux-extras is not available
[root@ip-172-31-12-95 ~]# which amazon-linux-extras
/usr/bin/which: no amazon-linux-extras in (/root/.local/bin:/root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)

[root@ip-172-31-12-95 ~]# sudo yum repolist all
repo id                                                           repo name                                                                                                 status
amazonlinux                                                       Amazon Linux 2023 repository                                                                              enabled
amazonlinux-debuginfo                                             Amazon Linux 2023 repository - Debug                                                                      disabled
amazonlinux-source                                                Amazon Linux 2023 repository - Source packages                                                            disabled
kernel-livepatch                                                  Amazon Linux 2023 Kernel Livepatch repository                                                             enabled
kernel-livepatch-source                                           Amazon Linux 2023 Kernel Livepatch repository - Source packages                                           disabled
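
If you stay on Amazon Linux 2023 instead of switching to an Amazon Linux 2 AMI as below, Ansible can still be installed without amazon-linux-extras. A sketch, assuming either the distro package or pip is available:

```shell
# Amazon Linux 2023: install Ansible from the distro repo,
# falling back to a pip install if the package is unavailable.
sudo dnf install -y ansible || sudo python3 -m pip install ansible
ansible --version
```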

Install Ansible and Git, and clone the GitHub source code to follow along with the book

//Amazon Linux 2 Kernel 5.10 AMI 2.0.20230612.0 x86_64 HVM gp2
[root@ip-172-31-4-13 ~]# sudo amazon-linux-extras install -y ansible2

[root@ip-172-31-4-13 ~]# yum install -y git

[root@ip-172-31-4-13 ~]# git clone https://github.com/onlybooks/kafka2
Cloning into 'kafka2'...
remote: Enumerating objects: 302, done.
remote: Counting objects: 100% (40/40), done.
remote: Compressing objects: 100% (17/17), done.
remote: Total 302 (delta 26), reused 25 (delta 23), pack-reused 262
Receiving objects: 100% (302/302), 27.58 MiB | 19.48 MiB/s, done.
Resolving deltas: 100% (74/74), done.

Registering hosts

Register a name for each host (this must be configured on every host)

(as root) vi /etc/hosts
172.31.14.74 peter-ansible.foo.bar peter-ansible
172.31.4.153 peter-kafka01.foo.bar peter-kafka01
172.31.8.90 peter-kafka02.foo.bar peter-kafka02
172.31.3.146 peter-kafka03.foo.bar peter-kafka03
172.31.7.116 peter-zk01.foo.bar peter-zk01
172.31.1.192 peter-zk02.foo.bar peter-zk02
172.31.0.225 peter-zk03.foo.bar peter-zk03
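
Since the same entries must exist on every host, one way to avoid editing seven files by hand is to append the block remotely. A sketch, assuming SSH access to each node via the EC2 key pair (the IPs and hostnames are this walkthrough's values):

```shell
# Append the same name mappings to /etc/hosts on every node.
ENTRIES='172.31.14.74 peter-ansible.foo.bar peter-ansible
172.31.4.153 peter-kafka01.foo.bar peter-kafka01
172.31.8.90 peter-kafka02.foo.bar peter-kafka02
172.31.3.146 peter-kafka03.foo.bar peter-kafka03
172.31.7.116 peter-zk01.foo.bar peter-zk01
172.31.1.192 peter-zk02.foo.bar peter-zk02
172.31.0.225 peter-zk03.foo.bar peter-zk03'

for ip in 172.31.4.153 172.31.8.90 172.31.3.146 \
          172.31.7.116 172.31.1.192 172.31.0.225; do
  # tee -a appends with sudo on the remote side
  echo "$ENTRIES" | ssh -i justin-keypair.pem ec2-user@"$ip" 'sudo tee -a /etc/hosts'
done
```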

Registering the public key on each Kafka and ZooKeeper server

  1. Using the private key registered as the EC2 key pair, generate a public key on the Ansible server.
  2. Register the generated public key on each Kafka and ZooKeeper server.

Copy the private key to the Ansible server and generate a public key

zayden@Zaydenui-MacBookPro  ~/Documents/Work/kafka_demo  scp -i justin-keypair.pem justin-keypair.pem ec2-user@43.201.60.1:~
justin-keypair.pem                                                                                                                                100% 1678   222.4KB/s   00:00
 zayden@Zaydenui-MacBookPro  ~/Documents/Work/kafka_demo  ssh -i justin-keypair.pem ec2-user@43.201.60.1
Last login: Sun Jun 25 10:24:17 2023 from 122.32.65.230

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
[ec2-user@ip-172-31-14-74 ~]$ ll
total 4
-rw------- 1 ec2-user ec2-user 1678 Jun 25 10:24 justin-keypair.pem

[ec2-user@ip-172-31-14-74 ~]$ chmod 600 justin-keypair.pem
[ec2-user@ip-172-31-14-74 ~]$ ssh-agent bash
[ec2-user@ip-172-31-14-74 ~]$ ssh-add justin-keypair.pem
Identity added: justin-keypair.pem (justin-keypair.pem)
[ec2-user@ip-172-31-14-74 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ec2-user/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ec2-user/.ssh/id_rsa.
Your public key has been saved in /home/ec2-user/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:wwe5OB9xkr57u1DZQuf0Soc2TE5BKRzsJZBsQyYGlZM ec2-user@ip-172-31-14-74.ap-northeast-2.compute.internal
The key's randomart image is:
+---[RSA 2048]----+
|    .o+=+=.oo.   |
|     .Eo=o= o.   |
|       o*oo+=    |
|       + B.@ o   |
|      o S = X o  |
|       o * + +   |
|        +   .    |
|         o.      |
|        ..oo     |
+----[SHA256]-----+
[ec2-user@ip-172-31-14-74 ~]$ ll -a
total 20
drwx------ 3 ec2-user ec2-user  121 Jun 25 10:24 .
drwxr-xr-x 3 root     root       22 Jun 25 10:22 ..
-rw------- 1 ec2-user ec2-user    8 Jun 25 10:24 .bash_history
-rw-r--r-- 1 ec2-user ec2-user   18 Jul 15  2020 .bash_logout
-rw-r--r-- 1 ec2-user ec2-user  193 Jul 15  2020 .bash_profile
-rw-r--r-- 1 ec2-user ec2-user  231 Jul 15  2020 .bashrc
drwx------ 2 ec2-user ec2-user   61 Jun 25 10:26 .ssh
-rw------- 1 ec2-user ec2-user 1678 Jun 25 10:24 justin-keypair.pem
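
Instead of manually pasting the public key into each server's authorized_keys as shown next, `ssh-copy-id` can push the key and fix permissions in one step. A sketch, assuming the EC2 key pair has been loaded into ssh-agent so the initial login succeeds:

```shell
# Push the Ansible server's new public key to every node; ssh-copy-id
# appends it to ~/.ssh/authorized_keys and sets the right permissions.
for host in peter-kafka01 peter-kafka02 peter-kafka03 \
            peter-zk01 peter-zk02 peter-zk03; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub ec2-user@"$host"
done
```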

Register the generated public key on each Kafka and ZooKeeper server

[ec2-user@ip-172-31-14-74 ~]$ cat .ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD+iYBS+I0WijPrCT2TpP/mAmDIz6+fVU60EEExr6Af69MlVI0Qj1Y2K6DXtSTX6QvikuibQduvWCd27Yvf9TzUGHG5r1YLYYliaIqHHsnNSAX4UmpC4M5FhJbTF0hMhaEfJAwMXvH+8Lnw8EX+Au/RxoTvGBw3eAc0AEyNsxJAe1FoCrczEncOEHWwdzVWjp52oAU2zcqgDDZaOkm4oEan1ofHL94+VyNENcRNCWIU20kPRwAWnbuI7PEF9q1YGgS+MHkqwwVFtYFmZyrfg9kFYZ+RwG3uYjFEM++yBJXreOl1tf1qX2386EYej1AICNl4wm4CeG2DYuiwlf3pff4N ec2-user@ip-172-31-14-74.ap-northeast-2.compute.internal

[ec2-user@ip-172-31-4-153 ~]$ vi .ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCdCpBTSxfGUmliWYGBXRqrryldH0LfxOsvJSSFW6dzk0E6QBxU2CVzNQ8Gv1aOGmw182lpZFeZD++drcFoWOCT8Khqq9MAuYAOvg9pHNLHfFpD6c9Wq6StV9VfFGLK6LSpYaQIWXAL4hsErDBFxP8kJhkCbV0m5NcfB+bOk0YMAxyI2TVXvlItfHCgQX4YUh0J9shakDU9E5QRV1shzocy9Y98rLCcYBagBbJySceqBZU8m9eonhj+JmxuYs9Vtnh9XrMN5TELLF/TocmTj6C4yqNHbgqrbrsSKgFL6WBAEwfQ1gpMhTVrNdJRZTdjOanwo2vxEqk9sRxvMOTrUaTV justin-keypair
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD+iYBS+I0WijPrCT2TpP/mAmDIz6+fVU60EEExr6Af69MlVI0Qj1Y2K6DXtSTX6QvikuibQduvWCd27Yvf9TzUGHG5r1YLYYliaIqHHsnNSAX4UmpC4M5FhJbTF0hMhaEfJAwMXvH+8Lnw8EX+Au/RxoTvGBw3eAc0AEyNsxJAe1FoCrczEncOEHWwdzVWjp52oAU2zcqgDDZaOkm4oEan1ofHL94+VyNENcRNCWIU20kPRwAWnbuI7PEF9q1YGgS+MHkqwwVFtYFmZyrfg9kFYZ+RwG3uYjFEM++yBJXreOl1tf1qX2386EYej1AICNl4wm4CeG2DYuiwlf3pff4N ec2-user@ip-172-31-14-74.ap-northeast-2.compute.internal

[ec2-user@ip-172-31-14-74 ~]$ ssh ec2-user@172.31.4.153
The authenticity of host '172.31.4.153 (172.31.4.153)' can't be established.
ECDSA key fingerprint is SHA256:/zfg4CZy/5v4EoHLgfyfLToGp9uWeXSKizkitinDeSU.
ECDSA key fingerprint is MD5:ea:53:16:12:72:d0:dd:ac:dd:7b:4c:db:ae:0b:60:9c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.31.4.153' (ECDSA) to the list of known hosts.
Last login: Sun Jun 25 10:28:22 2023 from 122.32.65.230

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
[ec2-user@ip-172-31-4-153 ~]$

* Paste each key immediately on the next line with a single line break; if two line breaks separate the entries, the key is not recognized.

[ec2-user@ip-172-31-4-13 ~]$ cat /home/ec2-user/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDEO7wHit7jtlPbqHyNVO9/Q9aayaysXtvb7KowdtMa9Bdu+2YSQuaeyMttnUefzQ7qWlRK+oDdd5x9dlgHrtn6aAlj2u+NUQugu0SuTLkeUjlUxhRtaLQiky9c7Jpn9snqtXmTkiSaTWPYSGZS2YT4prhYdhZx0mKBwqkgAaDhx2rGxHNHZYM1pAds2H5Twgz6pqjogB4DHMsgIs2UA6LM4ecnq9HlreHQFvhYbG5smkf6XVccsnZUcFYAHwjhTEG62CQjIkntdAeRtZ+VwZImlXe4xGxgg36/pPjfSXa1RziIuLp41NisJmJnmbKsaGwsiLmJT/bOnf3ik9nkogmz ec2-user@ip-172-31-4-13.ap-northeast-2.compute.internal

[ec2-user@ip-172-31-4-13 ~]$ vi /home/ec2-user/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCdCpBTSxfGUmliWYGBXRqrryldH0LfxOsvJSSFW6dzk0E6QBxU2CVzNQ8Gv1aOGmw182lpZFeZD++drcFoWOCT8Khqq9MAuYAOvg9pHNLHfFpD6c9Wq6StV9VfFGLK6LSpYaQIWXAL4hsErDBFxP8kJhkCbV0m5NcfB+bOk0YMAxyI2TVXvlItfHCgQX4YUh0J9shakDU9E5QRV1shzocy9Y98rLCcYBagBbJySceqBZU8m9eonhj+JmxuYs9Vtnh9XrMN5TELLF/TocmTj6C4yqNHbgqrbrsSKgFL6WBAEwfQ1gpMhTVrNdJRZTdjOanwo2vxEqk9sRxvMOTrUaTV justin-keypair

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDEO7wHit7jtlPbqHyNVO9/Q9aayaysXtvb7KowdtMa9Bdu+2YSQuaeyMttnUefzQ7qWlRK+oDdd5x9dlgHrtn6aAlj2u+NUQugu0SuTLkeUjlUxhRtaLQiky9c7Jpn9snqtXmTkiSaTWPYSGZS2YT4prhYdhZx0mKBwqkgAaDhx2rGxHNHZYM1pAds2H5Twgz6pqjogB4DHMsgIs2UA6LM4ecnq9HlreHQFvhYbG5smkf6XVccsnZUcFYAHwjhTEG62CQjIkntdAeRtZ+VwZImlXe4xGxgg36/pPjfSXa1RziIuLp41NisJmJnmbKsaGwsiLmJT/bOnf3ik9nkogmz ec2-user@ip-172-31-4-13.ap-northeast-2.compute.internal

[ec2-user@ip-172-31-4-13 ~]$ chmod 600 .ssh/authorized_keys

If the hostnames have not yet been registered in /etc/hosts on this server, the playbook fails because the inventory names cannot be resolved:

[ec2-user@ip-172-31-4-13 ansible_playbook]$ ansible-playbook -i hosts zookeeper.yml

PLAY [zkhosts] *********************************************************************************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************************************************************************
fatal: [peter-zk02.foo.bar]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname peter-zk02.foo.bar: Name or service not known", "unreachable": true}
fatal: [peter-zk01.foo.bar]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname peter-zk01.foo.bar: Name or service not known", "unreachable": true}
fatal: [peter-zk03.foo.bar]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname peter-zk03.foo.bar: Name or service not known", "unreachable": true}

PLAY RECAP *************************************************************************************************************************************************************************
peter-zk01.foo.bar         : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0
peter-zk02.foo.bar         : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0
peter-zk03.foo.bar         : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0
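
The UNREACHABLE errors mean the inventory hostnames could not be resolved; before re-running the playbook, resolution can be verified on the Ansible server with a quick getent check:

```shell
# Each hostname should print its /etc/hosts entry; a MISSING line means
# the mapping still has to be added on this server.
for host in peter-zk01.foo.bar peter-zk02.foo.bar peter-zk03.foo.bar; do
  getent hosts "$host" || echo "MISSING: $host is not in /etc/hosts"
done
```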

Installing ZooKeeper and Kafka using Ansible

1. Zookeeper

[ec2-user@ip-172-31-14-74 ansible_playbook]$ ansible-playbook -i hosts zookeeper.yml

PLAY [zkhosts] *********************************************************************************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************************************************************************
The authenticity of host 'peter-zk03.foo.bar (172.31.0.225)' can't be established.
ECDSA key fingerprint is SHA256:XOHznTu67z6Mo59OlQ1cmhz25c3yl6aPI1L3TSJQYa8.
ECDSA key fingerprint is MD5:ee:01:ee:af:a3:56:9a:46:33:41:eb:1b:ec:f5:ad:5b.
Are you sure you want to continue connecting (yes/no)? [WARNING]: Platform linux on host peter-zk01.foo.bar is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could
change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information.
ok: [peter-zk01.foo.bar]
y[WARNING]: Platform linux on host peter-zk02.foo.bar is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could
change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information.
ok: [peter-zk02.foo.bar]

Please type 'yes' or 'no': yes
[WARNING]: Platform linux on host peter-zk03.foo.bar is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could
change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information.
ok: [peter-zk03.foo.bar]

TASK [common : Set timezone to Asia/Seoul] *****************************************************************************************************************************************
ok: [peter-zk01.foo.bar]
ok: [peter-zk02.foo.bar]
changed: [peter-zk03.foo.bar]

TASK [common : install Java and tools] *********************************************************************************************************************************************
ok: [peter-zk01.foo.bar]
ok: [peter-zk02.foo.bar]
changed: [peter-zk03.foo.bar]

TASK [common : copy krb5 conf] *****************************************************************************************************************************************************
ok: [peter-zk02.foo.bar]
ok: [peter-zk01.foo.bar]
changed: [peter-zk03.foo.bar]

TASK [add the group zookeeper] *****************************************************************************************************************************************************
ok: [peter-zk01.foo.bar]
ok: [peter-zk02.foo.bar]
changed: [peter-zk03.foo.bar]

TASK [add the user zookeeper] ******************************************************************************************************************************************************
ok: [peter-zk01.foo.bar]
changed: [peter-zk03.foo.bar]
ok: [peter-zk02.foo.bar]

TASK [stop zookeeper-server] *******************************************************************************************************************************************************
fatal: [peter-zk03.foo.bar]: FAILED! => {"changed": false, "msg": "Could not find the requested service zookeeper-server: host"}
...ignoring
changed: [peter-zk01.foo.bar]
changed: [peter-zk02.foo.bar]

TASK [zookeeper : remove directory zk] *********************************************************************************************************************************************
changed: [peter-zk01.foo.bar]
changed: [peter-zk02.foo.bar]
ok: [peter-zk03.foo.bar]

TASK [make dir zookeeper] **********************************************************************************************************************************************************
changed: [peter-zk01.foo.bar]
changed: [peter-zk02.foo.bar]
changed: [peter-zk03.foo.bar]

TASK [download zookeeper from web] *************************************************************************************************************************************************
ok: [peter-zk02.foo.bar]
changed: [peter-zk03.foo.bar]
ok: [peter-zk01.foo.bar]

TASK [unarchive zookeeper] *********************************************************************************************************************************************************
ok: [peter-zk01.foo.bar]
ok: [peter-zk02.foo.bar]
changed: [peter-zk03.foo.bar]

TASK [setup link zookeeper] ********************************************************************************************************************************************************
ok: [peter-zk01.foo.bar]
changed: [peter-zk03.foo.bar]
ok: [peter-zk02.foo.bar]

TASK [copy zookeeper server conf files] ********************************************************************************************************************************************
ok: [peter-zk01.foo.bar]
ok: [peter-zk02.foo.bar]
changed: [peter-zk03.foo.bar]

TASK [zookeeper : create myid] *****************************************************************************************************************************************************
changed: [peter-zk03.foo.bar]
changed: [peter-zk01.foo.bar]
changed: [peter-zk02.foo.bar]

TASK [zookeeper : change file ownership, group and permissions] ********************************************************************************************************************
changed: [peter-zk01.foo.bar]
changed: [peter-zk03.foo.bar]
changed: [peter-zk02.foo.bar]

TASK [copy zookeeper server in systemd] ********************************************************************************************************************************************
ok: [peter-zk01.foo.bar]
ok: [peter-zk02.foo.bar]
changed: [peter-zk03.foo.bar]

TASK [zookeeper : just force systemd to reload configs] ****************************************************************************************************************************
ok: [peter-zk01.foo.bar]
ok: [peter-zk03.foo.bar]
ok: [peter-zk02.foo.bar]

TASK [zookeeper : make sure a service is running] **********************************************************************************************************************************
changed: [peter-zk01.foo.bar]
changed: [peter-zk02.foo.bar]
changed: [peter-zk03.foo.bar]

PLAY RECAP *************************************************************************************************************************************************************************
peter-zk01.foo.bar         : ok=18   changed=6    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
peter-zk02.foo.bar         : ok=18   changed=6    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
peter-zk03.foo.bar         : ok=18   changed=14   unreachable=0    failed=0    skipped=0    rescued=0    ignored=1

2. Kafka

[ec2-user@ip-172-31-14-74 ansible_playbook]$ ansible-playbook -i hosts kafka.yml

PLAY [kafkahosts] ******************************************************************************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************************************************************************
The authenticity of host 'peter-kafka02.foo.bar (172.31.8.90)' can't be established.
ECDSA key fingerprint is SHA256:UBzUIcnlf84fZU8mKguHh8dn+J3uzlLo7VUBZ3TRH+c.
ECDSA key fingerprint is MD5:b8:6f:25:a3:90:bd:36:5b:13:9d:9a:d3:98:7e:6f:45.
Are you sure you want to continue connecting (yes/no)? [WARNING]: Platform linux on host peter-kafka01.foo.bar is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could
change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information.
ok: [peter-kafka01.foo.bar]
y[WARNING]: Platform linux on host peter-kafka03.foo.bar is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could
change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information.
ok: [peter-kafka03.foo.bar]

Please type 'yes' or 'no': yes
[WARNING]: Platform linux on host peter-kafka02.foo.bar is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could
change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information.
ok: [peter-kafka02.foo.bar]

TASK [common : Set timezone to Asia/Seoul] *****************************************************************************************************************************************
ok: [peter-kafka03.foo.bar]
ok: [peter-kafka01.foo.bar]
changed: [peter-kafka02.foo.bar]

TASK [common : install Java and tools] *********************************************************************************************************************************************
ok: [peter-kafka03.foo.bar]
ok: [peter-kafka01.foo.bar]
changed: [peter-kafka02.foo.bar]

TASK [common : copy krb5 conf] *****************************************************************************************************************************************************
ok: [peter-kafka03.foo.bar]
ok: [peter-kafka01.foo.bar]
changed: [peter-kafka02.foo.bar]

TASK [stop kafka-server] ***********************************************************************************************************************************************************
fatal: [peter-kafka02.foo.bar]: FAILED! => {"changed": false, "msg": "Could not find the requested service kafka-server: host"}
...ignoring
changed: [peter-kafka03.foo.bar]
changed: [peter-kafka01.foo.bar]

TASK [remove directory kafka] ******************************************************************************************************************************************************
changed: [peter-kafka01.foo.bar]
changed: [peter-kafka03.foo.bar]
ok: [peter-kafka02.foo.bar]

TASK [make dir kafka] **************************************************************************************************************************************************************
changed: [peter-kafka01.foo.bar]
changed: [peter-kafka03.foo.bar]
changed: [peter-kafka02.foo.bar]

TASK [download kafka from web] *****************************************************************************************************************************************************
ok: [peter-kafka03.foo.bar]
changed: [peter-kafka02.foo.bar]
ok: [peter-kafka01.foo.bar]

TASK [unarchive kafka] *************************************************************************************************************************************************************
changed: [peter-kafka03.foo.bar]
changed: [peter-kafka01.foo.bar]
changed: [peter-kafka02.foo.bar]

TASK [setup link kafka] ************************************************************************************************************************************************************
ok: [peter-kafka01.foo.bar]
ok: [peter-kafka03.foo.bar]
changed: [peter-kafka02.foo.bar]

TASK [copy kafka server conf files] ************************************************************************************************************************************************
changed: [peter-kafka01.foo.bar]
changed: [peter-kafka03.foo.bar]
changed: [peter-kafka02.foo.bar]

TASK [copy kafka conf file] ********************************************************************************************************************************************************
ok: [peter-kafka01.foo.bar] => (item=jmx)
ok: [peter-kafka03.foo.bar] => (item=jmx)
changed: [peter-kafka02.foo.bar] => (item=jmx)
changed: [peter-kafka01.foo.bar] => (item=connect-distributed.properties)
changed: [peter-kafka03.foo.bar] => (item=connect-distributed.properties)
changed: [peter-kafka02.foo.bar] => (item=connect-distributed.properties)

TASK [copy kafka server in systemd] ************************************************************************************************************************************************
ok: [peter-kafka01.foo.bar] => (item=kafka-server.service)
ok: [peter-kafka03.foo.bar] => (item=kafka-server.service)
changed: [peter-kafka02.foo.bar] => (item=kafka-server.service)
ok: [peter-kafka01.foo.bar] => (item=kafka-connect.service)
ok: [peter-kafka03.foo.bar] => (item=kafka-connect.service)
changed: [peter-kafka02.foo.bar] => (item=kafka-connect.service)

TASK [kafka : just force systemd to reload configs] ********************************************************************************************************************************
ok: [peter-kafka01.foo.bar]
ok: [peter-kafka03.foo.bar]
ok: [peter-kafka02.foo.bar]

TASK [kafka : make sure a service is running] **************************************************************************************************************************************
changed: [peter-kafka01.foo.bar]
changed: [peter-kafka03.foo.bar]
changed: [peter-kafka02.foo.bar]

PLAY RECAP *************************************************************************************************************************************************************************
peter-kafka01.foo.bar      : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
peter-kafka02.foo.bar      : ok=15   changed=11   unreachable=0    failed=0    skipped=0    rescued=0    ignored=1
peter-kafka03.foo.bar      : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Checking that the services are running properly

1. Zookeeper
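
One way to check ZooKeeper health (a sketch: the systemd unit name zookeeper-server is taken from the playbook task names above, and the four-letter-word commands may need to be enabled via 4lw.commands.whitelist in newer ZooKeeper releases):

```shell
# On each ZooKeeper node: check the unit, then query the client port.
sudo systemctl status zookeeper-server
echo ruok | nc localhost 2181               # a healthy server answers "imok"
echo stat | nc localhost 2181 | grep Mode   # reports leader or follower
```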

2. Kafka
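
Similarly for Kafka (a sketch, assuming the /usr/local/kafka install path used elsewhere in this walkthrough):

```shell
# On each broker: check the unit, then confirm the broker answers on 9092.
sudo systemctl status kafka-server
/usr/local/kafka/bin/kafka-broker-api-versions.sh \
  --bootstrap-server peter-kafka01.foo.bar:9092
```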

Creating a Kafka Topic and Testing Message Producing and Consuming

1. Create a Kafka topic

[ec2-user@ip-172-31-4-153 ~]$ /usr/local/kafka/bin/kafka-topics.sh --bootstrap-server peter-kafka01.foo.bar:9092 --create --topic peter-overview01 --partitions 1 --replication-factor 3
Created topic peter-overview01.
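
To confirm that the single partition really has three replicas spread across the brokers, the topic can be described:

```shell
# Shows the partition leader, replica list, and in-sync replicas (ISR).
/usr/local/kafka/bin/kafka-topics.sh \
  --bootstrap-server peter-kafka01.foo.bar:9092 \
  --describe --topic peter-overview01
```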

2. Kafka Message Producing, Consuming

[ec2-user@ip-172-31-4-153 ~]$ /usr/local/kafka/bin/kafka-console-producer.sh --bootstrap-server peter-kafka01.foo.bar:9092 --topic peter-overview01
>'23.06.25 First Message

[ec2-user@ip-172-31-8-90 ~]$ /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server peter-kafka02.foo.bar:9092 --topic peter-overview01
'23.06.25 First Message

(Screenshot 2023-06-25 20:31:22)

2. Creating a Kafka Cluster on GKE using Kubernetes and a Helm Chart

(Screenshot 2023-05-22 15:53:41)


GKE configuration (e2 machines with at least 4 vCPUs and 16 GB RAM, asia-east1-a zone, 3 nodes)

GKE cluster environment settings

*Nodes with too low a spec may fail to run objects due to insufficient resources.
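
The node pool described above can also be created from the CLI; a sketch (the cluster name kafka-cluster is a placeholder, not from the original):

```shell
# 3 x e2-standard-4 nodes (4 vCPU / 16 GB each) in asia-east1-a.
gcloud container clusters create kafka-cluster \
  --zone asia-east1-a \
  --machine-type e2-standard-4 \
  --num-nodes 3

# Fetch kubeconfig credentials so kubectl/helm can talk to the cluster.
gcloud container clusters get-credentials kafka-cluster --zone asia-east1-a
```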

Installing Helm and Kafka

1) Install Helm via Cloud Shell (Debian Linux)

autoever_seojeonghyeon0630@cloudshell:~ (kubernetes-project-386708)$ curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
100  1699  100  1699    0     0   5720      0 --:--:-- --:--:-- --:--:--  5720
OK
autoever_seojeonghyeon0630@cloudshell:~ (kubernetes-project-386708)$ sudo apt-get install apt-transport-https --yes
********************************************************************************
You are running apt-get inside of Cloud Shell. Note that your Cloud Shell  
machine is ephemeral and no system-wide change will persist beyond session end. 

To suppress this warning, create an empty ~/.cloudshell/no-apt-get-warning file.
The command will automatically proceed in 5 seconds or on any key. 

Visit https://cloud.google.com/shell/help for more information.                 
********************************************************************************
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
apt-transport-https is already the newest version (2.2.4).
The following package was automatically installed and is no longer required:
  libpcre2-posix2
Use 'sudo apt autoremove' to remove it.
0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
autoever_seojeonghyeon0630@cloudshell:~ (kubernetes-project-386708)$ echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
deb https://baltocdn.com/helm/stable/debian/ all main
autoever_seojeonghyeon0630@cloudshell:~ (kubernetes-project-386708)$ sudo apt-get update
********************************************************************************
You are running apt-get inside of Cloud Shell. Note that your Cloud Shell  
machine is ephemeral and no system-wide change will persist beyond session end. 

To suppress this warning, create an empty ~/.cloudshell/no-apt-get-warning file.
The command will automatically proceed in 5 seconds or on any key. 

Visit https://cloud.google.com/shell/help for more information.                 
********************************************************************************
Hit:1 http://deb.debian.org/debian bullseye InRelease
Get:2 http://deb.debian.org/debian bullseye-updates InRelease [44.1 kB]                                      
Get:3 https://download.docker.com/linux/debian bullseye InRelease [43.3 kB]                                  
Get:4 http://deb.debian.org/debian-security bullseye-security InRelease [48.4 kB]                            
Get:5 https://apt.releases.hashicorp.com bullseye InRelease [12.9 kB]                                        
Hit:6 https://repo.mysql.com/apt/debian bullseye InRelease                                                   
Hit:7 https://storage.googleapis.com/bazel-apt stable InRelease                                              
Get:8 https://download.docker.com/linux/debian bullseye/stable amd64 Packages [27.8 kB]                      
Get:9 http://deb.debian.org/debian-security bullseye-security/main Sources [318 kB]                          
Get:10 http://deb.debian.org/debian-security bullseye-security/main amd64 Packages [304 kB]                  
Hit:11 https://apt.llvm.org/bullseye llvm-toolchain-bullseye-13 InRelease                                    
Get:12 https://apt.releases.hashicorp.com bullseye/main amd64 Packages [104 kB]                              
Ign:13 https://cli.github.com/packages bullseye InRelease                                                    
Get:14 https://packages.cloud.google.com/apt apt-transport-artifact-registry-stable InRelease [5,094 B]   
Get:15 https://packages.cloud.google.com/apt gcsfuse-bullseye InRelease [5,008 B]                            
Get:16 https://packages.cloud.google.com/apt cloud-sdk-bullseye InRelease [6,403 B]                          
Hit:13 https://cli.github.com/packages bullseye InRelease                                                    
Get:17 https://baltocdn.com/helm/stable/debian all InRelease [7,652 B]                                       
Get:18 https://packages.microsoft.com/debian/11/prod bullseye InRelease [3,629 B]                            
Get:19 https://baltocdn.com/helm/stable/debian all/main amd64 Packages [3,907 B]                             
Get:20 https://us-apt.pkg.dev/projects/demosite-images demosite-apt InRelease [1,094 B]                      
Get:21 https://packages.microsoft.com/debian/11/prod bullseye/main armhf Packages [13.9 kB]                  
Get:22 https://packages.microsoft.com/debian/11/prod bullseye/main amd64 Packages [85.4 kB]                  
Get:23 https://apt.postgresql.org/pub/repos/apt bullseye-pgdg InRelease [117 kB]                             
Get:24 https://packages.microsoft.com/debian/11/prod bullseye/main arm64 Packages [15.0 kB]                  
Get:25 https://packages.sury.org/php bullseye InRelease [6,841 B]                                            
Get:26 https://packages.sury.org/php bullseye/main amd64 Packages [376 kB]                                   
Fetched 1,549 kB in 8s (202 kB/s)                                                                            
Reading package lists... Done
autoever_seojeonghyeon0630@cloudshell:~ (kubernetes-project-386708)$ sudo apt-get install helm
********************************************************************************
You are running apt-get inside of Cloud Shell. Note that your Cloud Shell  
machine is ephemeral and no system-wide change will persist beyond session end. 

To suppress this warning, create an empty ~/.cloudshell/no-apt-get-warning file.
The command will automatically proceed in 5 seconds or on any key. 

Visit https://cloud.google.com/shell/help for more information.                 
********************************************************************************
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following package was automatically installed and is no longer required:
  libpcre2-posix2
Use 'sudo apt autoremove' to remove it.
The following NEW packages will be installed:
  helm
0 upgraded, 1 newly installed, 0 to remove and 11 not upgraded.
Need to get 16.0 MB of archives.
After this operation, 50.6 MB of additional disk space will be used.
Get:1 https://baltocdn.com/helm/stable/debian all/main amd64 helm amd64 3.12.0-1 [16.0 MB]
Fetched 16.0 MB in 1s (18.5 MB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package helm.
(Reading database ... 142154 files and directories currently installed.)
Preparing to unpack .../helm_3.12.0-1_amd64.deb ...
Unpacking helm (3.12.0-1) ...
Setting up helm (3.12.0-1) ...
Processing triggers for man-db (2.9.4-2) ...
autoever_seojeonghyeon0630@cloudshell:~ (kubernetes-project-386708)$ helm version
version.BuildInfo{Version:"v3.9.3", GitCommit:"414ff28d4029ae8c8b05d62aa06c7fe3dee2bc58", GitTreeState:"clean", GoVersion:"go1.17.13"}

2) Add the Helm repo

autoever_seojeonghyeon0630@cloudshell:~ (kubernetes-project-386708)$ helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories

3) Install Kafka with Helm (3 brokers)

autoever_seojeonghyeon0630@cloudshell:~ (kubernetes-project-386708)$ helm install kafka-master --set replicaCount=3 bitnami/kafka -n messagequeue
NAME: kafka-master
LAST DEPLOYED: Sat May 27 01:55:40 2023
NAMESPACE: messagequeue
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 22.1.2
APP VERSION: 3.4.0

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    kafka-master.messagequeue.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    kafka-master-0.kafka-master-headless.messagequeue.svc.cluster.local:9092
    kafka-master-1.kafka-master-headless.messagequeue.svc.cluster.local:9092
    kafka-master-2.kafka-master-headless.messagequeue.svc.cluster.local:9092

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kafka-master-client --restart='Never' --image docker.io/bitnami/kafka:3.4.0-debian-11-r28 --namespace messagequeue --command -- sleep infinity
    kubectl exec --tty -i kafka-master-client --namespace messagequeue -- bash

    PRODUCER:
        kafka-console-producer.sh \
            --broker-list kafka-master-0.kafka-master-headless.messagequeue.svc.cluster.local:9092,kafka-master-1.kafka-master-headless.messagequeue.svc.cluster.local:9092,kafka-master-2.kafka-master-headless.messagequeue.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            --bootstrap-server kafka-master.messagequeue.svc.cluster.local:9092 \
            --topic test \
            --from-beginning
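
The per-broker DNS names in the NOTES above follow the StatefulSet naming pattern `<release>-<ordinal>.<release>-headless.<namespace>.svc.cluster.local`. As a small sketch, the `--broker-list` string can be generated for any replica count (release and namespace names taken from the install command above):

```shell
# Build a comma-separated broker list for a Bitnami Kafka StatefulSet.
release=kafka-master
namespace=messagequeue
replicas=3
brokers=""
for i in $(seq 0 $((replicas - 1))); do
  brokers="${brokers:+$brokers,}${release}-${i}.${release}-headless.${namespace}.svc.cluster.local:9092"
done
echo "$brokers"
```

This is convenient when `replicaCount` is changed later, since the producer command does not need to be edited by hand.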

Producer and Consumer test on the Kafka installed in Kubernetes


Producer and Consumer test from the Kafka client

1. Create a Kafka client Pod

autoever_seojeonghyeon0630@cloudshell:~ (kubernetes-project-386708)$ kubectl run kafka-master-client --restart='Never' --image docker.io/bitnami/kafka:2.7.0-debian-10-r109 --namespace messagequeue --command -- sleep infinity
pod/kafka-master-client created
autoever_seojeonghyeon0630@cloudshell:~ (kubernetes-project-386708)$ kubectl get pods -n messagequeue
NAME                  READY   STATUS    RESTARTS   AGE
kafka-master-0        1/1     Running   0          7m39s
kafka-master-1        1/1     Running   0          7m38s
kafka-master-2        1/1     Running   0          7m38s
kafka-master-client   1/1     Running   0          21s

2. Run the Producer from the Kafka client and send messages

autoever_seojeonghyeon0630@cloudshell:~ (kubernetes-project-386708)$ kubectl exec --tty -i kafka-master-client --namespace messagequeue -- bash
I have no name!@kafka-master-client:/$ kafka-console-producer.sh --broker-list kafka-master:9092 --topic test
>hello
>20230520 16:52
>

3. Run the Consumer from the Kafka client and receive messages

autoever_seojeonghyeon0630@cloudshell:~ (kubernetes-project-386708)$ kubectl exec --tty -i kafka-master-client --namespace messagequeue -- bash
I have no name!@kafka-master-client:/$ kafka-console-consumer.sh --bootstrap-server kafka-master:9092 --topic test --from-beginning
[2023-05-20 07:51:44,194] WARN [Consumer clientId=consumer-console-consumer-42208-1, groupId=console-consumer-42208] Error while fetching metadata with correlation id 2 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-20 07:51:44,299] WARN [Consumer clientId=consumer-console-consumer-42208-1, groupId=console-consumer-42208] Error while fetching metadata with correlation id 4 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
hello
20230520 16:52

2) Create a Kafka topic (topic name: kafkastudy-gkelog-transaction)

I have no name!@kafka-master-client:/$ kafka-topics.sh --bootstrap-server kafka-master:9092 --topic kafkastudy-gkelog-transaction --partitions 1 --replication-factor 3 --create                              
Created topic kafkastudy-gkelog-transaction.

I have no name!@kafka-master-client:/$ kafka-topics.sh --bootstrap-server kafka-master:9092 --topic kafkastudy-gkelog-transaction --describe
Topic: kafkastudy-gkelog-transaction    PartitionCount: 1       ReplicationFactor: 3    Configs: flush.ms=1000,segment.bytes=1073741824,flush.messages=10000,max.message.bytes=1000012,retention.bytes=1073741824
        Topic: kafkastudy-gkelog-transaction    Partition: 0    Leader: 2       Replicas: 2,0,1 Isr: 2,0,1
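
When scripting around the CLI, the `--describe` partition line can be parsed with standard text tools; a minimal sketch against the exact line shown above:

```shell
# Extract leader, replica set, and ISR from a kafka-topics.sh --describe line.
line='Topic: kafkastudy-gkelog-transaction    Partition: 0    Leader: 2       Replicas: 2,0,1 Isr: 2,0,1'
leader=$(echo "$line" | sed -n 's/.*Leader: \([0-9]*\).*/\1/p')
replicas=$(echo "$line" | sed -n 's/.*Replicas: \([0-9,]*\).*/\1/p')
isr=$(echo "$line" | sed -n 's/.*Isr: \([0-9,]*\).*/\1/p')
echo "leader=$leader replicas=$replicas isr=$isr"
```

A partition is fully in sync when `Isr` lists the same broker IDs as `Replicas`, as it does here.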

3) Environment for verifying data ingestion

Open a second shell with another kafka-master-client session and use a consumer to verify that messages are arriving correctly.

3. Creating a Kafka Cluster on a local PC with Docker Compose

Reference site

[Kafka] kafka cluster 실습 환경 구축

Install Docker Desktop

Download Docker Desktop | Docker

Create docker-compose.yml, then create and verify the containers

1) docker-compose.yml

version: '3.7'
services: 

  zk1:
    container_name: zookeeper1
    image: wurstmeister/zookeeper:latest
    restart: always
    hostname: zk1
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zk1:2888:3888;2181 server.2=zk2:2888:3888;2181 server.3=zk3:2888:3888;2181
    volumes:
      - "~/zk-cluster/zk1/data:/data"

  zk2:
    container_name: zookeeper2
    image: wurstmeister/zookeeper:latest
    restart: always
    hostname: zk2
    ports:
      - "2182:2181"
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zk1:2888:3888;2181 server.2=zk2:2888:3888;2181 server.3=zk3:2888:3888;2181
    volumes:
      - "~/zk-cluster/zk2/data:/data"

  zk3:
    container_name: zookeeper3
    image: wurstmeister/zookeeper:latest
    restart: always
    hostname: zk3
    ports:
      - "2183:2181"
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zk1:2888:3888;2181 server.2=zk2:2888:3888;2181 server.3=zk3:2888:3888;2181
    volumes:
      - "~/zk-cluster/zk3/data:/data"

  kafka1:
    container_name: kafka1
    image: wurstmeister/kafka:latest
    restart: on-failure
    depends_on:
      - zk1
      - zk2
      - zk3
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_HOST_NAME: host.docker.internal
      BOOTSTRAP_SERVERS: host.docker.internal:9092, host.docker.internal:9093, host.docker.internal:9094
      KAFKA_ZOOKEEPER_CONNECT: "zk1:2181,zk2:2181,zk3:2181"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2

  kafka2:
    container_name: kafka2
    image: wurstmeister/kafka:latest
    restart: on-failure
    depends_on:
      - zk1
      - zk2
      - zk3
    ports:
      - "9093:9092"
    volumes:
       - /var/run/docker.sock:/var/run/docker.sock
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ADVERTISED_HOST_NAME: host.docker.internal
      BOOTSTRAP_SERVERS: host.docker.internal:9092, host.docker.internal:9093, host.docker.internal:9094
      KAFKA_ZOOKEEPER_CONNECT: "zk1:2181,zk2:2181,zk3:2181"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2

  kafka3:
    container_name: kafka3
    image: wurstmeister/kafka:latest
    restart: on-failure
    depends_on:
      - zk1
      - zk2
      - zk3
    ports:
      - "9094:9092"
    volumes:
       - /var/run/docker.sock:/var/run/docker.sock
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ADVERTISED_HOST_NAME: host.docker.internal
      BOOTSTRAP_SERVERS: host.docker.internal:9092, host.docker.internal:9093, host.docker.internal:9094
      KAFKA_ZOOKEEPER_CONNECT: "zk1:2181,zk2:2181,zk3:2181"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2

  kafka-ui:
    image: provectuslabs/kafka-ui
    container_name: kafka-ui
    ports:
      - "10000:8080"
    restart: always
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=host.docker.internal:9092,host.docker.internal:9093,host.docker.internal:9094
      - KAFKA_CLUSTERS_0_ZOOKEEPER=zk1:2181,zk2:2181,zk3:2181
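
Note that the host-mapped ports (9092/9093/9094 for the brokers, 10000 for Kafka UI) are what clients on the local PC use, while containers reach each other on the internal ports. A quick bash-only reachability sketch for the three mapped broker ports (it only reports open/closed, so it is safe to run even before the stack is up):

```shell
# Probe the host-mapped Kafka ports; /dev/tcp is a bash built-in redirection.
status=""
for port in 9092 9093 9094; do
  if (exec 3<>"/dev/tcp/localhost/$port") 2>/dev/null; then
    state=open
  else
    state=closed
  fi
  echo "port $port: $state"
  status="$status ${port}=${state}"
done
```

All three ports should report `open` once `docker-compose up -d` has finished and the brokers are healthy.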

2) Run docker-compose and check the containers

zayden@Zaydenui-MacBookPro  ~  docker-compose -f docker-compose.yml up -d

zayden@Zaydenui-MacBookPro  ~  docker ps
CONTAINER ID   IMAGE                           COMMAND                  CREATED              STATUS              PORTS                                                NAMES
bfac2b2cb9ec   wurstmeister/kafka:latest       "start-kafka.sh"         6 weeks ago          Up About a minute   0.0.0.0:9092->9092/tcp                               kafka1
2385fa9da658   wurstmeister/kafka:latest       "start-kafka.sh"         6 weeks ago          Up About a minute   0.0.0.0:9093->9092/tcp                               kafka2
da2a8dce0b6e   wurstmeister/kafka:latest       "start-kafka.sh"         6 weeks ago          Up About a minute   0.0.0.0:9094->9092/tcp                               kafka3
caede7610ab4   wurstmeister/zookeeper:latest   "/bin/sh -c '/usr/sb…"   6 weeks ago          Up About a minute   22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp   zookeeper1
e0b868730b8c   wurstmeister/zookeeper:latest   "/bin/sh -c '/usr/sb…"   6 weeks ago          Up About a minute   22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2182->2181/tcp   zookeeper2
6adbd54d93d6   wurstmeister/zookeeper:latest   "/bin/sh -c '/usr/sb…"   6 weeks ago          Up About a minute   22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2183->2181/tcp   zookeeper3
fdaa5d360e4e   provectuslabs/kafka-ui          "/bin/sh -c 'java --…"   6 weeks ago          Up About a minute   0.0.0.0:10000->8080/tcp                              kafka-ui
스크린샷 2023-06-17 13 57 07

3) Verify in Docker Desktop

스크린샷 2023-06-17 13 58 14

4) Check the Kafka cluster information in Kafka UI

스크린샷 2023-06-17 13 58 53