na-- opened 4 years ago
I've increased the priority of this because the lack of it is causing other issues with k6. Trying to `k6 archive` a user's 22k LoC script generated by the new HAR converter (so, one with plenty of needless `response = http.verb(whatever)` repetition and no `http.batch()`) didn't finish in the 11 minutes I waited for it... :astonished:

After slightly editing that same script so it's suitable for `--compatibility-mode=base`, `k6 archive` finished in under a second! So the previous long wait was likely due to the Babel transpilation. I didn't remember us having such issues with previous huge scripts, so I spent some time writing a bunch of regexes to transform the script into one that used `http.batch()`, and then `k6 archive` finished in 39 seconds.

So `http.batch()` support is important not only because it will make for more realistic load tests and allow us to slowly retire the built-in converter, but also because the current output causes k6 (or, rather, Babel) to choke on it.
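Roughly, the kind of transformation those regexes performed looks like the sketch below (the URLs are just placeholders, not taken from the actual user script):

```javascript
import http from 'k6/http';

export default function () {
  // What the HAR converter currently emits: one sequential statement per entry.
  // response = http.get('https://example.com/');
  // response = http.get('https://example.com/style.css');
  // response = http.get('https://example.com/app.js');

  // The batched equivalent: the same requests issued concurrently in a single
  // http.batch() call, which is roughly what the regexes rewrote the script into.
  const responses = http.batch([
    ['GET', 'https://example.com/'],
    ['GET', 'https://example.com/style.css'],
    ['GET', 'https://example.com/app.js'],
  ]);
}
```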
The `k6 convert` command was completely removed in k6 v0.48.0, released in December 2023. So it would be good to prioritize this issue further in order to support script generation that mimics the application more accurately, using `http.batch()` to trigger multiple requests in parallel, as modern browsers do over HTTP/2.
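For illustration, a sketch of what more browser-like generated output could look like: the main document is fetched first, and its sub-resources are then requested concurrently via `http.batch()` (the URLs below are placeholders, not real converter output):

```javascript
import http from 'k6/http';
import { check } from 'k6';

export default function () {
  // Fetch the main document first, then request its sub-resources concurrently,
  // roughly the way a browser loads a page over HTTP/2.
  // The URLs are placeholders, not output of any real conversion.
  const page = http.get('https://test.k6.io/');
  check(page, { 'page loaded': (r) => r.status === 200 });

  http.batch([
    ['GET', 'https://test.k6.io/static/css/site.css'],
    ['GET', 'https://test.k6.io/static/js/app.js'],
    ['GET', 'https://test.k6.io/favicon.ico'],
  ]);
}
```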
Moreover, a script generated by the current version of `har-to-k6` is expected to report high response times because of the missing `http.batch()` support. I believe it would be good to surface this to users as a warning when they use `har-to-k6`, until this issue is resolved.
I'm falling back to the Go version, as I need batching ;-(
We cannot deprecate the built-in `k6 convert` support until this tool outputs concurrent requests with `http.batch()`.