devilankur18 opened 3 years ago
Hi,
You can share the information with a PV (PersistentVolume) that uses, for example, an Azure file share.
We wanted the same feature, particularly splitting the user IDs across distributed load generators. As far as I know, k6-operator doesn't support that feature right now. We devised a different approach and it works well for us.
Instead of feeding the k6-operator a file with a bunch of username+password combos, we leverage k6's awesome `__VU` variable to keep it simple. From the k6 docs, `__VU` is:

> "Current VU number. The value is assigned incrementally for each new VU instance. One-based."
So, every virtual user in your load test has a unique ID.
Here's our flow:

1. Create test users in your app with sequential email addresses: user1@mail.com, user2@mail.com, ... user9999@mail.com, user10000@mail.com.
2. Give each user a password derived from a common base: `user1` will have the password `S3kr3tpassword1`, `user2` will have `S3kr3tpassword2`, etc. This will make sense in a minute.
3. Create a Kubernetes secret holding the password base in the `k6-operator-system` namespace (or whatever namespace you applied when deploying k6-operator to your cluster):

   ```
   kubectl -n k6-operator-system create secret generic tutorial-password --from-literal password-base=S3kr3tpassword
   ```

4. Reference the secret in the `.spec.runner.env[]` pod configuration in `your-k6-test.yaml` like so:

   ```yaml
   spec:
     runner:
       env:
         - name: USER_PASSWORD_BASE
           valueFrom:
             secretKeyRef:
               name: "tutorial-password"
               key: password-base
   ```

5. In your k6 test script, use `USER_PASSWORD_BASE` and the k6 `__VU` variable to log in to your app:

   ```javascript
   import http from 'k6/http';
   import { sleep } from 'k6';

   export default function () {
     const vu = `${__VU}`;
     const email = `user${vu}@mail.com`;
     const password = `${__ENV.USER_PASSWORD_BASE}${vu}`;
     console.log(`VU: ${vu} - email: ${email} - password: ${password}`);
     const payload = JSON.stringify({ email: email, password: password });
     const params = { headers: { 'Content-Type': 'application/json' } };
     http.post('http://test.k6.io/login', payload, params);
     // .. continue the user flow
     sleep(1);
   }
   ```
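Since the credential derivation is just string templating, it can be sanity-checked outside of k6. Here's a quick plain-Node sketch of the same pattern (`credentialsFor` is a name I made up for illustration; in the real script the base comes from the `USER_PASSWORD_BASE` secret and the number from `__VU`):

```javascript
// Plain-Node model of the credential derivation used in the k6 script,
// so the email/password pattern can be checked without running a load test.
function credentialsFor(vu, passwordBase) {
  return {
    email: `user${vu}@mail.com`,        // matches the pre-created test users
    password: `${passwordBase}${vu}`,   // base + VU number, as in step 2
  };
}

console.log(credentialsFor(42, 'S3kr3tpassword'));
// { email: 'user42@mail.com', password: 'S3kr3tpassword42' }
```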
Hope this helps!
@mycargus The only thing I'll say is that you need the secret to be created in the same namespace as the job being run, not specifically the namespace where the k6 operator is installed.
You can run jobs in lots of namespaces besides the k6-operator one.
@mycargus thanks for sharing your workaround!
This issue might be partially solved by the ongoing work in https://github.com/grafana/k6/issues/1539, which adds a data segmentation API to k6. But the creation and management of secrets is specific to the k6-operator use case, so we need to evaluate whether additional support for data segmentation is needed in k6-operator itself.
IOW, I'd consider this issue blocked until at least https://github.com/grafana/k6/pull/2273 is merged.
What else can we do for this? Can we add some identifier to identify the worker node ID?
Hi @testn, thanks for monitoring and commenting! This issue is still valid, AFAIK, and it looks like Data Segmentation in k6 remains valid too. However, as mentioned in this comment, it might make sense to try the execution API of k6 and see what its pros and cons are here.
> Can we add some identifier to identify the worker node ID?
I'm not sure what you meant here. The operator already has runner ID, attached as tags to the metrics.
Runner ID is not enough; we need to know which node is supposed to handle the 1st, 2nd, 3rd, etc. part of the data.
For example, a script needs to run with 100k user IDs, so how to
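To make the question concrete: splitting, say, 100k user IDs across runners is index arithmetic once each runner knows its own index and the total runner count. A hedged plain-Node sketch (names invented; as far as I know the operator doesn't inject these values out of the box, so you'd pass the index and count yourself, e.g. via env vars):

```javascript
// Divide N data items as evenly as possible across R runners.
// Runner i (zero-based) handles the half-open index range [start, end).
function chunkRange(totalItems, runnerCount, runnerIndex) {
  const base = Math.floor(totalItems / runnerCount);
  const extra = totalItems % runnerCount;
  // The first `extra` runners each take one additional item.
  const start = runnerIndex * base + Math.min(runnerIndex, extra);
  const end = start + base + (runnerIndex < extra ? 1 : 0);
  return [start, end];
}

// Example: 10 users across 3 runners -> [0,4), [4,7), [7,10)
console.log(chunkRange(10, 3, 0)); // [ 0, 4 ]
console.log(chunkRange(10, 3, 1)); // [ 4, 7 ]
console.log(chunkRange(10, 3, 2)); // [ 7, 10 ]
```

Each runner would then slice its portion out of the shared user list (e.g. one loaded via k6's `SharedArray`) and hand records to its local VUs.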