Closed: anandxp closed this issue 5 years ago
I increased the number of pools in the subscriber to 10, but I am still getting similar results.
@anandxp thanks... I did a quick run with your scenario (on my laptop only). It worked.
Can you restart the MZBench server and retry?
To clear up the mqtt.publisher.qos1.puback.in.total metric: all metrics are named from the perspective of the load-testing framework. So this means that your client received ('in') this number of QoS 1 Publish Acks (PUBACKs).
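As an illustration of where that counter comes from: every QoS 1 publish the load generator sends should be acknowledged by the broker with a PUBACK, which the worker counts as 'in'. A minimal publisher pool along these lines would drive both mqtt.message.published.total and mqtt.publisher.qos1.puback.in.total. This is only a sketch using the worker statements from the vmq_mzbench README (connect, publish_to_self, random_binary); the host, topic prefix, payload size, and rates are assumed placeholder values.

```
# Sketch of a QoS 1 publisher pool; host, topic prefix, payload, and rates are placeholders.
pool(size = 1, worker_type = mqtt_worker):
    connect([t(host, "localhost"), t(port, 1883),
             t(client, fixed_client_id("pub", worker_id())),
             t(clean_session, true),
             t(keepalive_interval, 60),
             t(proto_version, 4)])
    wait(1 sec)
    loop(time = 60 sec, rate = 10 rps):
        # QoS 1 publish: each message sent should be answered by a PUBACK,
        # which shows up under mqtt.publisher.qos1.puback.in.total.
        publish_to_self("load/clients/", random_binary(150), 1)
    disconnect()
```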
@anandxp is this issue resolved for you? Have you been able to run load tests with MZBench?
Hi @ioolkos!! I missed updating here. I am able to publish all messages, but the subscriber is not receiving everything.
Is there any network speed prerequisite for running this, in order to verify that there is no data loss and that the subscriber receives everything that is published?
@ioolkos Closing this issue as
Hi @ioolkos, please consider this a clarification rather than an issue. I really wonder whether this is a bug in our VerneMQ broker or whether I need to make changes in my scenario, which is given below.
mqtt.message.consumed.total is 2278, whereas I published 30000 messages according to mqtt.message.published.total.
```
make_install(git = "https://github.com/erlio/vmq_mzbench.git", branch = "master")

defaults("pool_size" = 100)

pool(size = 1, worker_type = mqtt_worker, worker_start = linear(100 rps)):

pool(size = numvar("pool_size"), worker_type = mqtt_worker, worker_start = linear(100 rps)):
```
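The pool bodies appear to have been cut off in the paste above. For reference, a complete publish/subscribe scenario for vmq_mzbench usually has the shape below. This is only a sketch assuming the worker statements shown in the vmq_mzbench README (connect, subscribe, publish_to_self); the host, topic prefix, payload size, rates, and durations are placeholders, not the actual values from this test.

```
#!benchDL
make_install(git = "https://github.com/erlio/vmq_mzbench.git", branch = "master")

defaults("pool_size" = 100)

# Subscriber pool: connects and subscribes with QoS 1 before the publishers start.
pool(size = 1, worker_type = mqtt_worker, worker_start = linear(100 rps)):
    connect([t(host, "localhost"), t(port, 1883),
             t(client, fixed_client_id("sub", worker_id())),
             t(clean_session, true),
             t(keepalive_interval, 60),
             t(proto_version, 4)])
    wait(1 sec)
    subscribe(t("load/clients/#", 1))

# Publisher pool: each worker publishes QoS 1 messages for a fixed duration.
pool(size = numvar("pool_size"), worker_type = mqtt_worker, worker_start = linear(100 rps)):
    connect([t(host, "localhost"), t(port, 1883),
             t(client, fixed_client_id("pub", worker_id())),
             t(clean_session, true),
             t(keepalive_interval, 60),
             t(proto_version, 4)])
    wait(2 sec)
    loop(time = 300 sec, rate = 1 rps):
        publish_to_self("load/clients/", random_binary(150), 1)
    disconnect()
```

Note that with a wildcard subscription like this, the single subscriber connection has to receive every message produced by all publisher workers, which is worth keeping in mind when comparing mqtt.message.consumed.total against mqtt.message.published.total.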
Also, I am confused about the mqtt.publisher.qos1.puback.in.total metric, which is 29837. I did not find details about this value at https://satori-com.github.io/mzbench/dashboard/ either.