apache / jmeter

Apache JMeter is an open-source load testing tool for analyzing and measuring the performance of a variety of services
https://jmeter.apache.org/
Apache License 2.0

Constant Throughput timer with shared algorithm generates wrong throughput when target throughput is large #6278

Open onionzz opened 6 months ago

onionzz commented 6 months ago

Expected behavior

No response

Actual behavior

No response

Steps to reproduce the problem

For example:

1. When I set the target throughput to 30,000 per minute (500 TPS), everything works fine. But when the target throughput is set to 40,000 (333 TPS), the result throughput is still 500 TPS.
2. When the target throughput is set between 60,000 (1,000 TPS) and 150,000 (2,500 TPS), the result throughput is always 1,000 TPS.
3. When the target throughput is set beyond 150,000 (2,500 TPS), the result throughput can't be controlled and will be a high value, just as if the Constant Throughput Timer were not enabled.

I think the cause may be in ConstantThroughputTimer.java:

```java
private static final double MILLISEC_PER_MIN = 60000.0;

double msPerRequest = MILLISEC_PER_MIN / getThroughput();

Math.round(msPerRequest)
```

My guess is that when the target throughput is set to a large value, Math.round may produce the same fixed delay for several different targets, so the resulting throughput stays at one fixed value even though the target throughput changes.
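Here is a quick standalone check of that arithmetic (not the JMeter code itself, just the same formula; the class name is made up). It reproduces the plateaus described above: several different targets collapse onto the same rounded delay, and once the target is high enough the rounded delay becomes 0, so the timer no longer throttles at all.

```java
// Standalone check of the rounding arithmetic quoted above (not JMeter code).
// For several target throughputs per minute, compute the per-request delay the
// timer would wait and the effective rate that rounded delay actually produces.
public class RoundingCheck {
    private static final double MILLISEC_PER_MIN = 60000.0;

    public static void main(String[] args) {
        double[] targetsPerMin = {30_000, 40_000, 60_000, 100_000, 150_000, 200_000};
        for (double target : targetsPerMin) {
            double msPerRequest = MILLISEC_PER_MIN / target;  // ideal delay per request
            long rounded = Math.round(msPerRequest);          // delay the timer actually uses
            String effective = rounded == 0
                    ? "unlimited (no delay at all)"
                    : (1000 / rounded) + " req/s";
            System.out.printf("target %.0f/min -> ideal %.2f ms, rounded %d ms, effective %s%n",
                    target, msPerRequest, rounded, effective);
        }
    }
}
```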

JMeter Version

5.6.3

Java Version

No response

OS Version

No response

FSchumacher commented 6 months ago

Does this happen with all shared modes? I would think it is most problematic when calculateSharedDelay(ThroughputInfo, long) is used; the delay would be rounded too early there. That would be the modes with (shared) in their name.

FSchumacher commented 6 months ago

And another question: how many threads did your thread group have?

FSchumacher commented 6 months ago

After looking a bit deeper here, I think the resolution of milliseconds for the calculated delay is not enough when we are aiming for a high throughput rate with a low thread count. For example, say we set 30,000 or 40,000 requests per minute as the target with one thread and use active threads as the mode. Then the calculation for the two targets would be:

30,000 => 60,000 / 30,000 = 2 => rounded to 2
40,000 => 60,000 / 40,000 = 1.5 => rounded to 2

It doesn't change when we calculate the same with microseconds instead and still round at the end, as it would be:

30,000 => 60,000,000 / 30,000 = 2,000 => round to milliseconds => 2
40,000 => 60,000,000 / 40,000 = 1,500 => round to milliseconds => 2

Apart from this, it is probably still a good idea to change the resolution.
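To illustrate what I mean by rounding too early, here is a minimal sketch (not the actual timer code; the class name is made up) comparing the two places where the rounding could happen. Rounding each per-request delay before accumulating loses the fractional 0.5 ms on every request; keeping the ideal schedule in fractional milliseconds and rounding only when computing the actual wait keeps the long-run rate on target.

```java
// Minimal sketch (not JMeter code): where we round decides whether the 0.5 ms
// fraction of a 1.5 ms ideal delay is kept or lost.
// Target: 40,000 requests per minute => ideal delay of 1.5 ms per request.
public class RoundingPointDemo {
    public static void main(String[] args) {
        double idealDelayMs = 60_000.0 / 40_000;  // 1.5 ms
        int requests = 10_000;

        // Variant A: round each per-request delay first, then accumulate ("too early").
        long earlyTotalMs = 0;
        for (int i = 0; i < requests; i++) {
            earlyTotalMs += Math.round(idealDelayMs);  // always 2 ms, the 0.5 ms is lost
        }

        // Variant B: accumulate the ideal fractional schedule, round only at the end.
        double idealNextMs = 0;
        long lateTotalMs = 0;
        for (int i = 0; i < requests; i++) {
            idealNextMs += idealDelayMs;
            lateTotalMs = Math.round(idealNextMs);     // effective waits alternate 1 ms / 2 ms
        }

        System.out.printf("round early: %d ms total => %.0f req/s%n",
                earlyTotalMs, requests * 1000.0 / earlyTotalMs);
        System.out.printf("round late : %d ms total => %.0f req/s%n",
                lateTotalMs, requests * 1000.0 / lateTotalMs);
    }
}
```

With the early rounding the 40,000/min target degrades to 500 req/s, while with the late rounding it stays at roughly 667 req/s.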

onionzz commented 6 months ago

> Does this happen with all shared modes? I would think it is most problematic when calculateSharedDelay(ThroughputInfo, long) is used; the delay would be rounded too early there. That would be the modes with (shared) in their name.

It happens with all shared modes. And I think you are right, the root cause is that the delay is rounded too early. Calculating with microseconds can't solve the problem thoroughly.

onionzz commented 6 months ago

> After looking a bit deeper here, I think the resolution of milliseconds for the calculated delay is not enough when we are aiming for a high throughput rate with a low thread count. For example, say we set 30,000 or 40,000 requests per minute as the target with one thread and use active threads as the mode. Then the calculation for the two targets would be:
>
> 30,000 => 60,000 / 30,000 = 2 => rounded to 2
> 40,000 => 60,000 / 40,000 = 1.5 => rounded to 2
>
> It doesn't change when we calculate the same with microseconds instead and still round at the end, as it would be:
>
> 30,000 => 60,000,000 / 30,000 = 2,000 => round to milliseconds => 2
> 40,000 => 60,000,000 / 40,000 = 1,500 => round to milliseconds => 2
>
> Apart from this, it is probably still a good idea to change the resolution.

Using microseconds may make a difference when computing Math.max(now, nextRequestTime), and that in turn affects the delay. But calculating with microseconds will still have the problem when the throughput rate is high enough.
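A quick illustration of that last point, with made-up target values (not JMeter code): whatever fixed unit the delay is rounded to, there is a target rate at which the rounded per-request delay reaches 0, so a finer resolution only pushes the problem to higher rates instead of removing it.

```java
// Illustration (hypothetical targets, not JMeter code): a finer unit only moves
// the point where the rounded per-request delay collapses to 0.
public class ResolutionLimit {
    public static void main(String[] args) {
        long[] unitsPerMinute = {60_000L, 60_000_000L};   // milliseconds, microseconds
        String[] unitName = {"ms", "us"};
        double[] targetsPerMin = {150_000, 40_000_000, 200_000_000};

        for (int u = 0; u < unitsPerMinute.length; u++) {
            for (double target : targetsPerMin) {
                long delay = Math.round(unitsPerMinute[u] / target);  // rounded per-request delay
                System.out.printf("target %.0f/min, resolution %s -> rounded delay %d %s%n",
                        target, unitName[u], delay, unitName[u]);
            }
        }
    }
}
```

With milliseconds the 150,000/min case already rounds to 0; with microseconds it becomes 400 us, but a sufficiently high target rounds to 0 again.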