Average Throughput [AutoMQ]: 372.36 MB/s
Average Throughput [Kafka]: 372.36 MB/s
Cost Estimate Rule: For AutoMQ, 800 MB of storage corresponds to about 25 PUTs and 10 GETs; that is, each GB corresponds to roughly 31.25 PUTs and 12.5 GETs. Assuming a peak throughput of 0.5 GB/s and an average throughput of 0.01 GB/s, with data retention of 7 days, the retained data volume over a 30-day billing period (calculated with the 7-day retention window) is: 7 × 24 × 3600 s × 0.01 GB/s = 6048 GB ≈ 5.9 TB ≈ 6 TB.
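The arithmetic behind the rule above can be sketched as follows (a plain Python illustration; all values come from this report):

```python
# Per-GB request rule derived from "800 MB ~ 25 PUTs and 10 GETs".
puts_per_gb = 25 / 0.8   # 31.25 PUTs per GB
gets_per_gb = 10 / 0.8   # 12.5 GETs per GB

# Retained data volume under the estimate's assumptions.
avg_throughput_gb_s = 0.01        # assumed average throughput, GB/s
retention_s = 7 * 24 * 3600       # 7-day retention window, in seconds

retained_gb = retention_s * avg_throughput_gb_s  # 6048 GB, i.e. ~5.9 TB ~ 6 TB
print(retained_gb, puts_per_gb, gets_per_gb)
```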
AutoMQ vs. Kafka Benchmark Result 🚀
Benchmark Info
Report Generated: 2024-06-01 07:31:18
Workload Configuration [AutoMQ]
name: 1-topic-1000-partitions-4kb-4p4c-500m
topics: 1
partitionsPerTopic: 1000
partitionsPerTopicList: null
randomTopicNames: true
keyDistributor: NO_KEY
messageSize: 4096
useRandomizedPayloads: false
randomBytesRatio: 0
randomizedPayloadPoolSize: 0
payloadFile: payload/payload-4Kb.data
subscriptionsPerTopic: 1
producersPerTopic: 4
producersPerTopicList: null
consumerPerSubscription: 4
producerRate: 128000
producerRateList: null
consumerBacklogSizeGB: 0
backlogDrainRatio: 1
testDurationMinutes: 1
warmupDurationMinutes: 0
logIntervalMillis: 100
Workload Configuration [Kafka]
name: 1-topic-1000-partitions-4kb-4p4c-500m
topics: 1
partitionsPerTopic: 1000
partitionsPerTopicList: null
randomTopicNames: true
keyDistributor: NO_KEY
messageSize: 4096
useRandomizedPayloads: false
randomBytesRatio: 0
randomizedPayloadPoolSize: 0
payloadFile: payload/payload-4Kb.data
subscriptionsPerTopic: 1
producersPerTopic: 4
producersPerTopicList: null
consumerPerSubscription: 4
producerRate: 128000
producerRateList: null
consumerBacklogSizeGB: 0
backlogDrainRatio: 1
testDurationMinutes: 1
warmupDurationMinutes: 0
logIntervalMillis: 100
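The workload name ends in "500m", which is consistent with the load these parameters imply; a quick check (plain Python, values taken from the configuration above):

```python
# Target load implied by the workload: producerRate messages/s × messageSize bytes.
producer_rate = 128_000   # messages per second
message_size = 4096       # bytes per message

target_bytes_per_s = producer_rate * message_size       # 524,288,000 B/s
target_mib_per_s = target_bytes_per_s / (1024 * 1024)   # exactly 500 MiB/s
print(target_mib_per_s)
```

Both clusters averaged 372.36 MB/s against this ~500 MB/s target.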
Producer Configuration [AutoMQ]
value.serializer: org.apache.kafka.common.serialization.ByteArraySerializer
acks: all
batch.size: 65536
bootstrap.servers: 10.0.0.82:9092
key.serializer: org.apache.kafka.common.serialization.StringSerializer
linger.ms: 1
Producer Configuration [Kafka]
value.serializer: org.apache.kafka.common.serialization.ByteArraySerializer
acks: all
batch.size: 65536
bootstrap.servers: 10.0.0.120:9092,10.0.1.103:9092
key.serializer: org.apache.kafka.common.serialization.StringSerializer
linger.ms: 1
Consumer Configuration [AutoMQ]
key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
value.deserializer: org.apache.kafka.common.serialization.ByteArrayDeserializer
enable.auto.commit: true
bootstrap.servers: 10.0.0.82:9092
auto.offset.reset: earliest
Consumer Configuration [Kafka]
key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
value.deserializer: org.apache.kafka.common.serialization.ByteArrayDeserializer
enable.auto.commit: true
bootstrap.servers: 10.0.0.120:9092,10.0.1.103:9092
auto.offset.reset: earliest
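With one subscription (consumer group) of four consumers reading 1,000 partitions, each consumer handles a substantial partition fan-out. Assuming an even assignment, the per-consumer load works out as:

```python
# Partition fan-out per consumer implied by the workload:
# 1000 partitions, 1 subscription, 4 consumers per subscription.
partitions = 1000
consumers_per_group = 4

partitions_per_consumer = partitions // consumers_per_group  # 250 each
print(partitions_per_consumer)
```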
Topic Configuration [AutoMQ]
min.insync.replicas: 2
retention.ms: 86400000
flush.messages: 1
Topic Configuration [Kafka]
min.insync.replicas: 2
retention.ms: 86400000
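The retention.ms value of 86,400,000 ms is a 24-hour window. As an arithmetic illustration (not measured in this run), the steady-state data volume such a window would hold at the measured average throughput is:

```python
# Steady-state retained data implied by the topic settings above.
retention_s = 86_400_000 / 1000   # 86,400 s = 24 hours
avg_mb_per_s = 372.36             # measured average throughput, MB/s

retained_tb = retention_s * avg_mb_per_s / 1_000_000  # MB -> TB (decimal)
print(round(retained_tb, 1))     # roughly 32.2 TB
```

Note that the topic configs here retain one day of data, while the cost-estimate rule earlier assumes seven days of retention.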
Replication Configuration
replicationFactor [AutoMQ]: 3
replicationFactor [Kafka]: 3