nats-io / nats-server

High-Performance server for NATS.io, the cloud and edge native messaging system.
https://nats.io

JetStream consumer rate affects message publishing rate #5637

Open AetheWu opened 2 weeks ago

AetheWu commented 2 weeks ago

Observed behavior

Benchmark the nats-server with the following command:

nats bench --js --pub 5 --size 1024 --msgs 1000000 --dedup --stream mqtt_publish-0 mqtt_publish --multisubject

Bench results with no consumers:

Pub stats: 278,726 msgs/sec ~ 272.19 MB/sec
 [1] 56,359 msgs/sec ~ 55.04 MB/sec (200000 msgs)
 [2] 56,358 msgs/sec ~ 55.04 MB/sec (200000 msgs)
 [3] 56,090 msgs/sec ~ 54.78 MB/sec (200000 msgs)
 [4] 55,928 msgs/sec ~ 54.62 MB/sec (200000 msgs)
 [5] 55,745 msgs/sec ~ 54.44 MB/sec (200000 msgs)
 min 55,745 | avg 56,096 | max 56,359 | stddev 240 msgs

Bench results with consumers:

Pub stats: 166,121 msgs/sec ~ 162.23 MB/sec
 [1] 34,181 msgs/sec ~ 33.38 MB/sec (200000 msgs)
 [2] 34,081 msgs/sec ~ 33.28 MB/sec (200000 msgs)
 [3] 33,448 msgs/sec ~ 32.66 MB/sec (200000 msgs)
 [4] 33,415 msgs/sec ~ 32.63 MB/sec (200000 msgs)
 [5] 33,224 msgs/sec ~ 32.45 MB/sec (200000 msgs)
 min 33,224 | avg 33,669 | max 34,181 | stddev 385 msgs
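
For comparison outside of nats bench, below is a minimal Go sketch of the publish side using nats.go's async JetStream publisher. It loosely mirrors one of the five publishers above (200,000 messages of 1,024 bytes, a unique subject per message); the server URL, the pending-ack window, and the existence of streams for all five partitions are assumptions, not verified settings.

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Assumed local server running the config shown below, APP account creds.
	nc, err := nats.Connect("nats://fogcloud:root@localhost:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	// Async publishes bounded by a pending-ack window (256 is an assumption).
	js, err := nc.JetStream(nats.PublishAsyncMaxPending(256))
	if err != nil {
		log.Fatal(err)
	}

	const n = 200_000 // one publisher's share of the 1,000,000-message run
	payload := make([]byte, 1024)
	start := time.Now()
	for i := 0; i < n; i++ {
		// Unique subject per message, like --multisubject; the account mapping
		// appends a partition token, so streams for all 5 partitions are assumed
		// to exist. (--dedup's Nats-Msg-Id header is omitted in this sketch.)
		if _, err := js.PublishAsync(fmt.Sprintf("mqtt_publish.%d", i), payload); err != nil {
			log.Fatal(err)
		}
	}
	select {
	case <-js.PublishAsyncComplete():
	case <-time.After(time.Minute):
		log.Fatal("timed out waiting for publish acks")
	}
	fmt.Printf("published %d msgs in %v (%.0f msgs/sec)\n",
		n, time.Since(start), float64(n)/time.Since(start).Seconds())
}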

Stream info:

Information for Stream mqtt_publish-0 created 2024-07-10 14:48:49

              Subjects: mqtt_publish.*.0
              Replicas: 1
               Storage: Memory

Options:

             Retention: WorkQueue
       Acknowledgments: true
        Discard Policy: Old
      Duplicate Window: 2m0s
            Direct Get: true
     Allows Msg Delete: true
          Allows Purge: true
        Allows Rollups: false

Limits:

      Maximum Messages: 100,000,000
   Maximum Per Subject: 100,000,000
         Maximum Bytes: 95 MiB
           Maximum Age: unlimited
  Maximum Message Size: unlimited
     Maximum Consumers: 50

State:

              Messages: 90,335
                 Bytes: 95 MiB
        First Sequence: 1,106,126 @ 2024-07-10 15:15:39 UTC
         Last Sequence: 1,196,460 @ 2024-07-10 15:15:41 UTC
      Active Consumers: 1
    Number of Subjects: 18,572

Consumer info:

Configuration:

                    Name: mqtt_publish-consumer
               Pull Mode: true
          Deliver Policy: All
              Ack Policy: Explicit
                Ack Wait: 30.00s
           Replay Policy: Instant
         Max Ack Pending: 5,000
       Max Waiting Pulls: 512
          Max Pull Batch: 500

State:

  Last Delivered Message: Consumer sequence: 0 Stream sequence: 0
    Acknowledgment Floor: Consumer sequence: 0 Stream sequence: 0
        Outstanding Acks: 0 out of maximum 5,000
    Redelivered Messages: 0
    Unprocessed Messages: 0
           Waiting Pulls: 1 of maximum 512
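
For reference, the consumer-side load during the second run can be approximated with a small nats.go pull subscriber that binds to the durable above and matches its settings (explicit acks, pulls of up to 500 messages). This is only a sketch under the same assumed connection URL, not the exact workload the benchmark ran.

package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Assumed local server, APP account credentials from the config below.
	nc, err := nats.Connect("nats://fogcloud:root@localhost:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Bind to the existing durable rather than creating a new consumer.
	sub, err := js.PullSubscribe("", "", nats.Bind("mqtt_publish-0", "mqtt_publish-consumer"))
	if err != nil {
		log.Fatal(err)
	}

	for {
		// Fetch up to 500 messages per pull (the Max Pull Batch above) and
		// ack each one explicitly, per the consumer's Ack Policy.
		msgs, err := sub.Fetch(500, nats.MaxWait(2*time.Second))
		if err == nats.ErrTimeout {
			continue // nothing pending yet; keep polling
		}
		if err != nil {
			log.Fatal(err)
		}
		for _, m := range msgs {
			if err := m.Ack(); err != nil {
				log.Fatal(err)
			}
		}
	}
}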

NATS server config:

jetstream {
    store_dir: /Users/lethe/.config/nats/data
    max_mem: 1G
    max_file: 100G
}

accounts: {
    SYS: {
        users: [
            {user: admin, password: public}
        ]
    }
    APP: {
        jetstream: {
            max_memory: 1G
            max_filestore: 10G
            max_streams: 100
            max_consumers: 100
        }
        users: [
            {user: "fogcloud", password: "root"}
        ],
        mappings: {
          "mqtt_publish.*": "mqtt_publish.{{wildcard(1)}}.{{partition(5,1)}}",
          "mqtt_subscribe.*": "mqtt_subscribe.{{wildcard(1)}}.{{partition(5,1)}}",
          "event.server.*.*": "event.server.{{wildcard(1)}}.{{wildcard(2)}}.{{partition(5,2)}}",
          "mqtt_status.*": "mqtt_status.{{wildcard(1)}}.{{partition(5,1)}}"
        }
    }
}
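
A note on the mappings block: each publish to mqtt_publish.<id> is rewritten to mqtt_publish.<id>.<p>, where <p> is a deterministic partition number (0-4) derived from hashing wildcard token 1, so the stream mqtt_publish-0 only captures partition 0. A quick core-NATS probe (connection URL assumed) can show the rewritten subject:

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Assumed local server running the config above, APP account creds.
	nc, err := nats.Connect("nats://fogcloud:root@localhost:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Subscribe to the mapped form, then publish to the unmapped subject;
	// the account mapping appends the partition token before delivery.
	sub, err := nc.SubscribeSync("mqtt_publish.device42.*")
	if err != nil {
		log.Fatal(err)
	}
	if err := nc.Publish("mqtt_publish.device42", []byte("probe")); err != nil {
		log.Fatal(err)
	}
	msg, err := sub.NextMsg(2 * time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("mapped subject:", msg.Subject) // e.g. mqtt_publish.device42.3
}

Note that the probe message will also be stored by whichever partition stream matches the rewritten subject.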

Expected behavior

The consumer rate should not affect the message publishing rate.

Server and client version

nats-server v2.10.17, nats.go client v1.36.0

Host environment

OS: macOS 14

Steps to reproduce

No response

AetheWu commented 2 weeks ago

Is this a bug or a design issue?

AetheWu commented 2 weeks ago

The same here: producers also affect consumer rates.

yuzhou-nj commented 1 week ago

> The same here: producers also affect consumer rates.

Take a look at https://github.com/nats-io/nats-server/issues/5659. Is it the same problem you're describing?