Closed asfimport closed 20 years ago
peter lin (migrated from Bugzilla): This isn't a bug in JMeter; it is a limitation of the JVM implementation. Whatever millisecond resolution the VM returns is the granularity JMeter uses. If you're willing to write a high-performance timer, we will gladly use it. This is true of many languages, and it often requires writing a timer in C++ to access system-level timers. I know C# has the same problem: the default timer in C# has even coarser granularity than Java's, which I discovered while stress testing C# applications. There might be an existing timer released under a BSD license, but I'm not aware of one.
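The granularity limitation described above can be observed directly. The following is a minimal sketch (class and method names are illustrative, not part of JMeter) that spins until `System.currentTimeMillis()` reports a new value, revealing the size of one observable clock tick on the current platform:

```java
// Sketch: measure the effective granularity of System.currentTimeMillis()
// by spinning until the reported value changes. Illustrative only; not
// part of JMeter itself.
public class ClockGranularity {

    /** Returns the size, in milliseconds, of one observable clock tick. */
    public static long measureGranularityMillis() {
        long start = System.currentTimeMillis();
        long next = start;
        while (next == start) {          // busy-wait until the clock ticks
            next = System.currentTimeMillis();
        }
        return next - start;
    }

    public static void main(String[] args) {
        System.out.println("Observed tick: "
                + measureGranularityMillis() + " ms");
    }
}
```

On platforms where the clock advances in 10 ms steps (as reported for WinXP below), this typically prints a tick of around 10 ms; on platforms with a 1 ms clock it prints 1 ms.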
Sebb (migrated from Bugzilla): It is still possible to use JMeter to stress test an application without knowing the exact times for each individual request.
For example, by increasing the load to see when the throughput drops, or when the application under test crashes, or whatever criterion matters to you.
peter lin (migrated from Bugzilla): I would second Sebastian's statement. Most of the time, transporting the pages is what decreases the server's throughput. Therefore, unless your server happens to sit at a Level3 facility and all your users are on T3 connections or faster, the thread handling the request is going to have to wait until the response is completely transmitted. At that point, the granularity of the server's response is going to be less than half a second.
peter lin (migrated from Bugzilla): Changing the status to INVALID, since this is the default behavior of Java and the JVM. It is not a JMeter-specific bug.
Bea Petrovicova (Bug 28541): JMeter can only measure times in multiples of 1 ms on Linux and 10 ms on WinXP ...
JMeter measures the time between the beginning and the end of a call.
Because during stress testing the load may reach several thousand queries per second, this resolution is insufficient.
This renders JMeter useless for stress testing ...
Solution: implement an alternative timer with better resolution (1/100 of a second or similar)
OS: other