GrayedFox opened this issue 1 year ago (Open)
Hi, thanks for opening this issue.
Technically this could be possible, but opening the door for metrics to be created dynamically within the VU scope could lead to an explosion of memory usage. I believe that's the reason it's currently forbidden. Maybe someone else from @grafana/k6-core can chime in if there's another reason.
> Given a `SharedArray` readonly struct, I'd like to be able to differentiate tests using data from said struct on a per-iteration basis instead of just on a per-VU basis.
Is there a reason you couldn't differentiate metric data using tags? Tags can also have an impact on memory, which is why internally we use the concept of metric metadata for high-cardinality values (that API is not exposed to JS). But if you only use relatively few distinct tag values (on the order of hundreds), this should work for you.
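For concreteness, here's a minimal sketch of the tag-based approach; the metric name and data values are made up:

```javascript
import { Trend } from 'k6/metrics';
import { SharedArray } from 'k6/data';

// One Trend declared in the init context; samples are differentiated by tag.
const itemDuration = new Trend('item_duration');

const data = new SharedArray('items', () => [
  { name: 'alpha' },
  { name: 'beta' },
  { name: 'gamma' },
]);

export default function () {
  const item = data[__ITER % data.length];
  // Tag the sample instead of creating a new metric per item; keep the set
  // of distinct tag values small (hundreds, not millions).
  itemDuration.add(Math.random() * 100, { item: item.name });
}
```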
A big reason why metrics can only be created in the init context is so that k6 can verify that you only have thresholds on metrics that are actually defined.
I do agree with @imiric that tags (or metadata) are the more viable solution for this at the moment. They will also play a lot better with other tools/outputs.
There is also now a metadata JS API that will hopefully be released in v0.45.0 (~2 months from now).
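For reference, a rough sketch of what using that metadata API could look like, assuming the `exec.vu.metrics.metadata` shape exposed via `k6/execution`; check the docs for the k6 version you're running:

```javascript
import http from 'k6/http';
import exec from 'k6/execution';

export default function () {
  // Attach a high-cardinality value as metadata rather than a tag, so it
  // does not multiply the number of time series.
  exec.vu.metrics.metadata['iteration_id'] = String(exec.scenario.iterationInTest);
  http.get('https://test.k6.io');
}
```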
Feature Description
Right now, trying to initialize a Trend (or any other custom metric) from within your exported test function will error.
This prevents test writers from adding dynamic trend names based on shared test data.
Given a `SharedArray` readonly struct, I'd like to be able to differentiate tests using data from said struct on a per-iteration basis instead of just on a per-VU basis.
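To make the current behaviour concrete, a minimal repro; the data and metric names are placeholders:

```javascript
import { Trend } from 'k6/metrics';
import { SharedArray } from 'k6/data';

const data = new SharedArray('items', () => [{ name: 'alpha' }, { name: 'beta' }]);

export default function () {
  const item = data[__ITER % data.length];
  // This throws in current k6 versions, with an error roughly like:
  // "metrics must be declared in the init context"
  const trend = new Trend(`${item.name}_duration`);
  trend.add(Math.random() * 100);
}
```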
Suggested Solution (optional)
Allowing trends to be created during script execution would be all that's needed; making each Trend mutable is beyond the scope of this request and likely goes against the core k6 principle of script reusability.
Basically, instead of metrics being creatable only in the init context (and readable from both the init and script-execution contexts), I think metrics should be creatable throughout the script's life cycle, since all of this happens before reports are finalised anyway (and also before the post-script handleSummary hook).
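A sketch of what this could look like in practice: metrics created lazily during execution, at most once per name, and never mutated afterwards. This is purely illustrative and not valid in current k6:

```javascript
import { Trend } from 'k6/metrics';

const trends = {}; // per-VU cache of lazily created Trends

function getTrend(name) {
  if (!trends[name]) {
    trends[name] = new Trend(name); // would no longer throw under this proposal
  }
  return trends[name];
}

export default function () {
  getTrend(`item_${__ITER % 3}_duration`).add(Math.random() * 100);
}
```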
Already existing or connected issues / PRs (optional)
No response