Closed ahlongas07 closed 1 year ago
Hello @ahlongas07,
Thanks for taking the time to report this behaviour/inquiry and doing some testing with alternatives to have a comparison point (more issues like this 🎉 ).
I tested our implementation against some of our platforms and couldn't replicate the issue (multiple users, steps, embedded resources, and so on), but maybe it has to do with the configuration (maybe it is not Jetty).
Would you like to give us some information about the script so we can try it out? If the data is sensitive (and you don't want to share it with the community), you can also reach us by email (ricardo.poleo@blazemeter.com).
Let us know what you think,
Regards
Hi, I've observed a similar issue to the one originally posted. Depending on the machine, the JVM kind of got stuck at ~150-300 rps (400-500 open connections; 1000-1500 threads).
Some observations:
Here is how the report for 1.5K threads looks:
The main response-time degradation is observed on requests where a new iteration started (i.e. a new connection was opened).
It's not the server's throughput limit, as I was able to reach much higher rps using several load generators. However, there may be some throttling mechanism limiting traffic from a single client; I can't tell if this is true. At least it is definitely not IP based, as I was able to get much higher rps and open-connection counts with K6 from the same load generator.
Also, here is how the profiler looks when I start getting really high response times (up to several seconds and more):
Only ~13% of CPU is spent on 'real' work; the vast majority is related to some native calls. (Overall CPU at that level is still no more than 20-25%.)
The active thread count at this level is ~4-6K.
-Xmx: 18 GB out of 23 GB total.
-Xss: 512 KB.
I also tried reducing -Xmx to 8 GB and removing the -Xss limit. This hasn't changed the situation much.
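For reference, these heap and stack settings can be passed to JMeter's startup script via the JVM_ARGS environment variable. A sketch of the invocation (the plan file name is illustrative; the values are the ones from my runs above):

```shell
# Sketch: JVM tuning used in the runs above (adjust to your hardware).
# -Xmx caps the heap; -Xss sets the per-thread stack size, which matters
# when thousands of threads are alive at once (thread stacks are off-heap).
JVM_ARGS="-Xmx18g -Xss512k" jmeter -n -t test-plan.jmx -l results.jtl
```

Note that with -Xss512k and ~15K threads, thread stacks alone can reserve several gigabytes outside the heap, which is why shrinking -Xmx alone doesn't change much.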
Note: K6 uses a single connection per VU and keeps using it even on a new thread iteration, while JMeter opens a new connection per thread or loop iteration. (The only option I found for JMeter to reuse a connection is to use the default Thread Group and check 'Same user on each iteration'. But I can't run the test with such a configuration to see if this solves the throughput issue by itself.)
Depending on which TG is used, the problem may or may not occur:
With the default TG and 'Same user on each iteration' selected, only one connection is opened per VU (main thread).
Otherwise (at least for the 'Ultimate Thread Group'), a new connection and a bunch of httpClient@... threads (10, if I'm not mistaken) are opened on each new iteration (within the TG, a Loop Controller, a While Controller, etc.).
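The difference between the two reuse modes can be sketched with the JDK's own HTTP/2-capable client. This is stdlib `java.net.http`, not the plugin's Jetty client, so it only illustrates the connection-reuse idea, not the plugin's actual wiring:

```java
import java.net.http.HttpClient;

public class ConnectionReuseSketch {
    public static void main(String[] args) {
        // "Same user on each iteration" behaviour: build the client once,
        // outside the loop. Its pooled HTTP/2 connection can be multiplexed
        // across all iterations, with no per-iteration connection setup.
        HttpClient shared = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();
        for (int i = 0; i < 3; i++) {
            // shared.send(request, bodyHandler) would go here; the client
            // keeps its pooled connection alive between iterations.
        }
        System.out.println("shared client version: " + shared.version());

        // Per-iteration behaviour (what non-default thread groups effectively
        // do): every loop builds a new client, i.e. new connections plus new
        // worker threads, and pays the TLS/HTTP2 handshake again.
        int created = 0;
        for (int i = 0; i < 3; i++) {
            HttpClient fresh = HttpClient.newBuilder()
                    .version(HttpClient.Version.HTTP_2)
                    .build();
            created++;
        }
        System.out.println("fresh clients created: " + created);
    }
}
```

The second pattern is what drives the per-iteration connection and thread counts described above.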
This additional complexity results in much more thread/connection creation. In our case, Thread.start() took 60-70% of JMeter's processSampler. That results in really high response times and thus lower throughput, and all of this with quite low CPU utilization.
Disabling the above-mentioned closeConnections() improves overall throughput a lot.
One more open question is the number of httpClient@... threads opened per VU thread. According to the profiler, those are mostly idle. I suppose this is some Jetty feature, but I don't see a reason to have 10 open threads per user. In my case, 1.5K VUs results in 15K active threads on the JVM, which is quite a high number, at least from the memory-usage perspective. So it would be nice to reduce that too, if possible.
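The Thread.start() overhead is easy to reproduce with plain JDK code. A stdlib sketch of the pattern (not the plugin's actual code) comparing a brand-new thread per sample against a reused pool:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadChurnSketch {
    static final int SAMPLES = 1_000;

    public static void main(String[] args) throws InterruptedException {
        // Pattern 1: a brand-new thread per sample, analogous to the
        // per-iteration thread/connection creation seen in the profiler.
        // Every Thread.start() pays for stack allocation and OS bookkeeping.
        CountDownLatch perThread = new CountDownLatch(SAMPLES);
        long t0 = System.nanoTime();
        for (int i = 0; i < SAMPLES; i++) {
            new Thread(perThread::countDown).start();
        }
        perThread.await();
        long churnMs = (System.nanoTime() - t0) / 1_000_000;

        // Pattern 2: a small reused pool; threads start once, samples are
        // just queued tasks, so Thread.start() leaves the hot path.
        ExecutorService pool = Executors.newFixedThreadPool(8);
        CountDownLatch pooled = new CountDownLatch(SAMPLES);
        long t1 = System.nanoTime();
        for (int i = 0; i < SAMPLES; i++) {
            pool.submit(pooled::countDown);
        }
        pooled.await();
        long poolMs = (System.nanoTime() - t1) / 1_000_000;
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);

        System.out.println("thread-per-sample completed: " + (perThread.getCount() == 0));
        System.out.println("pooled completed: " + (pooled.getCount() == 0));
        // Exact timings vary by machine, so they are printed, not asserted.
        System.out.println("timings (ms): churn=" + churnMs + " pool=" + poolMs);
    }
}
```

On typical hardware the per-thread run is markedly slower, which matches Thread.start() dominating processSampler in the profile above.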
Hi @ahlongas07 and @syampol
I made a pre-release with some changes in the plugin. https://github.com/Blazemeter/jmeter-http2-plugin/releases/tag/v2.0.2
This pre-release solves some problems with connection, thread, and memory handling. This version should work much better than the previous one.
Before the final release, I need your feedback. Your analysis and feedback are very useful to us. Thanks.
Hi, thanks for reaching out to me. Unfortunately, the project has finished and I don't have a way to test your change.
Regards,
Alejandro L
Thanks @ahlongas07 and @syampol for all the provided information.
The final release 2.0.2 is here: https://github.com/Blazemeter/jmeter-http2-plugin/releases/tag/v2.0.2 Within a few hours it will be available in the Plugin Manager.
I understand that @ahlongas07 will not be able to test the new version. We leave the release documented here, awaiting a response from @syampol.
Regards
Hi @3dgiordano. Sorry, but I'm in about the same situation: I don't have a chance to play with the new version, as I've switched to another project. Also, it's hard to plan any activity when you are under chaotic power outages... I will try to check it once I have time.
Hi! We're playing with pre-release now. Rough comparison shows improvements.
Thanks @frale98
The final release is already public in the Plugin Manager. That version has some extra tweaks that the pre-release didn't have.
Thank you very much for sharing that you noticed improvements compared to the previous version. Any findings you can share with us will be welcome.
Very good news! We'll switch to the latest official release right away and keep you posted on any news (good or bad)!
Hi @frale98, any news with the new version?
Hello everyone,
We see no recent activity in this issue, so we are assuming all is good. I'll be closing the issue, but if you need more assistance regarding this behavior, please re-open it.
Once again, thanks for taking the time.
Hi all, and apologies for the late reply. The plugin was extensively tested in our environment with no issues. We used 100 threads at most, reaching a maximum throughput of 3000 TPS for a single JMeter instance. Our setup mimics a telco core network, so there is no need for thousands of threads, since nodes establish the minimum number of HTTP/2 connections needed to reach the requested load, and HTTP/2 (+ TLS) was introduced mainly to reduce connection overhead.
Hello, I'm currently testing an API using this plugin. Our goal is to reach 5000 VUs, but when the injector reaches 300 VUs it starts to face problems due to concurrency. Reviewing jmeter.log, I saw these errors:
QueuedThreadPool: QueuedThreadPool[HttpClient@ccf2232]@5c7b55fd{STOPPING,8<=0<=200,i=7,r=-1,q=0}[NO_TRY] Couldn't stop Thread[HttpClient@ccf2232-152214,5,main]
o.e.j.i.ManagedSelector: Could not create EndPoint java.nio.channels.SocketChannel[closed]: org.eclipse.jetty.io.RuntimeIOException: javax.net.ssl.SSLHandshakeException
To rule out an injector problem, I repeated the test using the native JMeter HTTP 1.1 sampler and followed the execution with VisualVM; the injector works properly and reaches up to 15000 req/sec.
To rule out a problem in the app, I repeated the tests using K6, which has native HTTP/2 support, and the behavior was the same as with JMeter and the native HTTP 1.1 sampler.
My understanding of the plugin is that Jetty works as a proxy and performs the requests, so I think it gets flooded trying to process the requests coming from the threads.
My environment is JMeter 5.4.3 and Java 17 running on a c5.12xlarge.
Any advice about Jetty or the ALPN library? Is there a way to tweak Jetty?
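I can't speak to the plugin's internal Jetty wiring, but a general pattern to keep a shared client from being flooded is to bound the number of in-flight samples, e.g. with a semaphore. A stdlib sketch of the idea (the VU count, limit, and the placeholder for the HTTP call are illustrative, not plugin APIs):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedInflightSketch {
    public static void main(String[] args) throws InterruptedException {
        final int LIMIT = 50;
        // At most LIMIT samples are in flight at once, regardless of how many
        // virtual-user threads are submitting; the rest wait at the semaphore
        // instead of piling work onto the shared HTTP client.
        Semaphore inflight = new Semaphore(LIMIT);
        AtomicInteger current = new AtomicInteger();
        AtomicInteger maxObserved = new AtomicInteger();
        ExecutorService vus = Executors.newFixedThreadPool(300); // 300 "VUs"

        for (int i = 0; i < 3_000; i++) {
            vus.submit(() -> {
                try {
                    inflight.acquire(); // wait here instead of flooding the client
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                try {
                    int now = current.incrementAndGet();
                    maxObserved.accumulateAndGet(now, Math::max);
                    // the actual HTTP/2 request would run here
                    current.decrementAndGet();
                } finally {
                    inflight.release();
                }
            });
        }
        vus.shutdown();
        vus.awaitTermination(30, TimeUnit.SECONDS);
        System.out.println("max in-flight observed <= " + LIMIT + ": "
                + (maxObserved.get() <= LIMIT));
    }
}
```

Under this kind of backpressure the client's thread pool can no longer be overwhelmed by thousands of simultaneous submissions, which may also avoid the QueuedThreadPool STOPPING errors above; whether the plugin exposes such a knob is a question for its maintainers.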