The first thing to understand is that a topic partition is the unit of parallelism in Kafka. On both the producer and the broker side, writes to different partitions can be done fully in parallel. So expensive operations such as compression can utilize more hardware resources. On the consumer side, Kafka always gives a single partition’s data to one consumer thread. Thus, the degree of parallelism in the consumer (within a consumer group) is bounded by the number of partitions being consumed. Therefore, in general, the more partitions there are in a Kafka cluster, the higher the throughput one can achieve.
From the producer side: if 4 producers each send 10 messages per second to a Kafka topic, the topic has to absorb 40 messages per second. If the topic with a single partition can only take 10 messages per second, increase the partition count to 4 so the target throughput can be handled. But Kafka also has consumers, so the consumer side must be considered too. If 8 consumers can each pull 5 messages per second from the topic, then the topic's partition count should be set to 8 to match the consumer count, so that every consumer gets its own partition to read from. (If there are more consumers than partitions, the extra consumers just sit idle.) See the sketch below.
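As a rough sketch of that sizing rule (the throughput figures are the hypothetical numbers above, and the broker address and topic name "orders" are assumptions), the required partition count is the larger of what the producer side and the consumer side each need, and the topic can then be created with the Java AdminClient:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Properties;

public class TopicSizing {
    public static void main(String[] args) throws Exception {
        // Hypothetical numbers from the example above.
        int targetMsgPerSec = 40;          // 4 producers x 10 msg/s
        int perPartitionProduceRate = 10;  // what one partition can absorb
        int perConsumerConsumeRate = 5;    // what one consumer can drain

        // Partitions must satisfy both the producer and the consumer side.
        int forProducers = (int) Math.ceil((double) targetMsgPerSec / perPartitionProduceRate); // 4
        int forConsumers = (int) Math.ceil((double) targetMsgPerSec / perConsumerConsumeRate);  // 8
        int partitions = Math.max(forProducers, forConsumers);                                  // 8

        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // Replication factor 1 only to keep the sketch minimal.
            admin.createTopics(List.of(new NewTopic("orders", partitions, (short) 1))).all().get();
        }
    }
}
```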
Also, the partition count can only be increased afterwards; it cannot be decreased!
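A minimal sketch of growing a topic (again assuming the topic "orders" and a local broker); note there is no corresponding API to shrink it, so the only way down is to recreate the topic:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

import java.util.Map;
import java.util.Properties;

public class GrowPartitions {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // Grow the (assumed) topic "orders" from 8 to 12 partitions.
            // Kafka cannot shrink a topic; to reduce partitions you must recreate it.
            admin.createPartitions(Map.of("orders", NewPartitions.increaseTo(12))).all().get();
        }
    }
}
```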
Kafka supports intra-cluster replication, which provides higher availability and durability. A partition can have multiple replicas, each stored on a different broker. One of the replicas is designated as the leader and the rest of the replicas are followers. Internally, Kafka manages all those replicas automatically and makes sure that they are kept in sync. Both the producer and the consumer requests to a partition are served on the leader replica. When a broker fails, partitions with a leader on that broker become temporarily unavailable. Kafka will automatically move the leader of those unavailable partitions to some other replicas to continue serving the client requests. This process is done by one of the Kafka brokers designated as the controller. It involves reading and writing some metadata for each affected partition in ZooKeeper. Currently, operations to ZooKeeper are done serially in the controller.
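To see the leader/follower layout the paragraph above describes, the replica assignment of a topic can be inspected with the AdminClient; this is only a sketch, with the topic name and broker address assumed:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

import java.util.List;
import java.util.Properties;

public class ShowReplicas {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // allTopicNames() is available on recent clients; older clients expose all() instead.
            TopicDescription desc = admin.describeTopics(List.of("orders"))
                    .allTopicNames().get()
                    .get("orders");
            for (TopicPartitionInfo p : desc.partitions()) {
                // Each partition has one leader (serving producer/consumer requests);
                // the remaining replicas are followers, and isr lists the in-sync ones.
                System.out.printf("partition %d: leader=%s replicas=%s isr=%s%n",
                        p.partition(), p.leader(), p.replicas(), p.isr());
            }
        }
    }
}
```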
In the common case when a broker is shut down cleanly, the controller will proactively move the leaders off the shutting-down broker one at a time. The moving of a single leader takes only a few milliseconds. So, from the clients' perspective, there is only a small window of unavailability during a clean broker shutdown.
However, when a broker is shut down uncleanly (e.g., kill -9), the observed unavailability could be proportional to the number of partitions. Suppose that a broker has a total of 2000 partitions, each with 2 replicas. Roughly, this broker will be the leader for about 1000 partitions. When this broker fails uncleanly, all those 1000 partitions become unavailable at exactly the same time. Suppose that it takes 5 ms to elect a new leader for a single partition. It will take up to 5 seconds to elect the new leader for all 1000 partitions. So, for some partitions, their observed unavailability can be 5 seconds plus the time taken to detect the failure.
If one is unlucky, the failed broker may be the controller. In this case, the process of electing the new leaders won’t start until the controller fails over to a new broker. The controller failover happens automatically but requires the new controller to read some metadata for every partition from ZooKeeper during initialization. For example, if there are 10,000 partitions in the Kafka cluster and initializing the metadata from ZooKeeper takes 2 ms per partition, this can add 20 more seconds to the unavailability window.
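A back-of-envelope sketch of the two unavailability windows quoted above (the per-partition times are just the example figures from the blog post):

```java
public class UnavailabilityEstimate {
    public static void main(String[] args) {
        // Unclean shutdown: ~1000 leader partitions on the failed broker, ~5 ms per leader election.
        int leadersOnFailedBroker = 1000;
        double electionMs = 5;
        System.out.printf("leader elections: ~%.0f s%n",
                leadersOnFailedBroker * electionMs / 1000); // ~5 s (plus failure-detection time)

        // Controller failover: 10,000 partitions in the cluster, ~2 ms of ZooKeeper metadata reads each.
        int partitionsInCluster = 10_000;
        double zkReadMs = 2;
        System.out.printf("controller init: ~%.0f s extra%n",
                partitionsInCluster * zkReadMs / 1000); // ~20 s added to the window
    }
}
```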
https://www.confluent.io/blog/how-choose-number-topics-partitions-kafka-cluster/
https://12bme.tistory.com/528