Closed by zzhao2010 1 year ago
Have you tried the latest version of the dashboard?
It is better with the update, but there is still an issue:
With the new dashboard:
but when I decrease the time range, I get a different total requests count.
The total requests count changes depending on the time range, but it never equals the actual HTTP requests count:
The p95 response time also looks different.
@lagunkov Is there a test you can share so I can replicate the incident? I'm in the k6 Slack, like @Wen.
@zzhao2010 Do you still have the same problem?
Sorry, I can't share the test that produced the screenshot above because it contains private info.
I tried to make a reduced test case with the example from https://test.k6.io/ using the following options:
export let options = {
  scenarios: {
    sample: {
      executor: 'ramping-vus',
      startVUs: 1,
      stages: [
        { target: 20, duration: "1m" },
        { target: 20, duration: "3m" },
        { target: 0, duration: "1m" }
      ],
    }
  },
  tags: {
    testid: 'test grafana 0.1'
  }
};
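The iteration body isn't included above; a minimal sketch that would complete the reduced test case, assuming it simply requests the test.k6.io homepage once per iteration, could look like this:

import http from 'k6/http';
import { sleep } from 'k6';

// Hypothetical default function to pair with the options above:
// one GET against the test.k6.io homepage per iteration.
export default function () {
  http.get('https://test.k6.io/');
  sleep(1);
}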
It shows the same total requests and p95 response time when the time range is "Last 3 hours":
And when I choose a shorter time range, it changes to:
Hope this will help.
🆗, let me check.
@jwcastillo Looks like the issue was fixed with the latest version. @lagunkov By the way, the issue you described above happens on my end as well. It looks like the values get messed up when the timeframe is changed. I always use the link in the test list dashboard to reach the test result dashboard; that way the reported data is accurate.
Hi, I have this issue as well. I found that if the test duration is short, the Request Made metric is correct, but when the test duration gets longer, the Request Made metric is less than the exact number of requests made. I did a comparison with k6 Cloud. Here you can see that the Request Made, Peak RPS, and P95 Response Time also have different values.
Hi @zzhao2010, may I know how you solved your issue? I am also using v0.2.0 but still have the same problem.
@soolch Are you sure you're using the latest version? Did you pull the latest commit from the main branch, or from the latest tag? If so, can you post an anonymized script that allows us to reproduce your issue, please? There is an example a few comments above in this thread using test.k6.io.
Hi @codebien, I have tried it once again with the latest k6 binary, following the k6 documentation, which has been updated to list this as the official dashboard. But the issue still happens. I tried using the following options:
export const options = {
scenarios: {
'scenario-vehicle-content': {
executor: 'ramping-arrival-rate',
startRate: 50,
timeUnit: '1m',
preAllocatedVUs: 2,
maxVUs: 50,
stages: [
{ target: 50, duration: '1m' },
{ target: 100, duration: '1m' },
{ target: 100, duration: '1m' },
{ target: 200, duration: '1m' },
{ target: 200, duration: '1m' },
{ target: 300, duration: '1m' },
{ target: 300, duration: '1m' },
{ target: 400, duration: '1m' },
{ target: 400, duration: '1m' },
],
},
},
};
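As a rough sanity check (my own back-of-the-envelope estimate, not a figure from this thread): with a linear ramp between the stage targets and one HTTP request per iteration, the configuration above should produce roughly 1875 requests over its 9 minutes, ignoring iterations dropped because of the maxVUs cap:

// Rough expected iteration count for the ramping-arrival-rate options above.
// Assumes a linear ramp between stage targets and one request per iteration;
// dropped iterations (maxVUs cap) would lower the real number.
const startRate = 50; // per 1m timeUnit
const targets = [50, 100, 100, 200, 200, 300, 300, 400, 400]; // one 1m stage each
let prev = startRate;
let total = 0;
for (const target of targets) {
  total += (prev + target) / 2; // average rate over a 1-minute stage
  prev = target;
}
console.log(total); // ~1875 iterations expected

A number like this is useful for checking which of the dashboard values, if any, matches the actual request count.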
But if I reduce my total test duration to 5m, then the result shows correctly. For the following stage configuration, the result is correct:
stages: [
{ target: 50, duration: '1m' },
{ target: 100, duration: '1m' },
{ target: 100, duration: '1m' },
{ target: 200, duration: '1m' },
{ target: 200, duration: '1m' }
],
@jwcastillo can you look into it, please?
Yes, I'll take this.
Hi @jwcastillo, may I know whether you were able to reproduce the same result on your side?
Hi @jwcastillo, could it be because of this: https://k6.io/docs/results-output/real-time/prometheus-remote-write/#stale-trend-metrics
Hi @soolch, do you use the dashboard with the Stale marker option enabled?
Hi @codebien, I didn't. I just read about this stale option, which also mentions 5 minutes. And when I try to search for the results in Prometheus, those older than 5 minutes have disappeared, which causes the Grafana result to be incorrect.
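For reference, on the k6 side the stale markers discussed here can be turned on with an environment variable when using the Prometheus remote write output, per the documentation linked above (the script name and server URL below are placeholders):

# Hypothetical invocation; script.js and the server URL are placeholders.
K6_PROMETHEUS_RW_SERVER_URL=http://localhost:9090/api/v1/write \
K6_PROMETHEUS_RW_STALE_MARKERS=true \
k6 run -o experimental-prometheus-rw script.js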
Firstly, thanks for sharing these great dashboards for visualization. They look awesome. On the other hand, I saw strange behavior while testing the dashboards with my test cases, and I have a question about data accuracy, as the data reported on the dashboards doesn't align with the test results on the command line.
Let's take the first metric, "Request Made", on the "Test Result" dashboard as an example. There were two values reported, which is quite confusing, and neither of them reflected the accurate number of requests generated over the test case. And if you take a look at the P95 Response Time metric on the dashboard, it was 3x faster than the p95 response time reported in the test summary on the command line.