Closed · spetzreborn closed this 9 months ago
Thanks spetzreborn. I'm not sure the improved granularity is all that meaningful given the considerable variability in the results, both from random run-to-run variation and from the choice of test data. But I agree it's nice to have what looks like a more exact answer.
While tuning the coderunner server I needed more granularity from the --perf option in testsubmit.py to compare settings.
A performance test now takes longer to run, but produces more comparable and exact output. Example of the new output on my machine:
# python3 testsubmit.py java --perf
Measuring performance in java
1 parallel submits: OK. 0 jobs/sec
2 parallel submits: OK. 1 jobs/sec
4 parallel submits: OK. 1 jobs/sec
8 parallel submits: OK. 3 jobs/sec
16 parallel submits: OK. 4 jobs/sec
32 parallel submits: OK. 4 jobs/sec
64 parallel submits: OK. 5 jobs/sec
128 parallel submits: FAIL.
96 parallel submits: FAIL.
80 parallel submits: FAIL.
72 parallel submits: FAIL.
68 parallel submits: FAIL.
66 parallel submits: OK. 4 jobs/sec
67 parallel submits: FAIL.
Maximum burst handled with no errors = 66 jobs
Checking maximum sustained throughput over 30 sec window
Testing with rate of 1 jobs/sec: OK
Testing with rate of 2 jobs/sec: OK
Testing with rate of 3 jobs/sec: OK
Testing with rate of 4 jobs/sec: OK
Testing with rate of 5 jobs/sec: OK
Testing with rate of 6 jobs/sec: OK
Testing with rate of 7 jobs/sec: Failed
Sustained throughput rate: 6 jobs/sec
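For anyone curious how the burst figure is arrived at, here is a minimal sketch of the search the output above suggests: double the burst size until a failure, then bisect between the last passing and the first failing size. The `submit_burst(n)` helper is a hypothetical stand-in for however testsubmit.py actually fires n parallel submissions; assume it returns the elapsed seconds on success (greater than zero) or None on any error.

```python
def find_max_burst(submit_burst, start=1, limit=1024):
    """Double the burst size until a failure, then bisect between the
    last size that succeeded and the first that failed."""
    n, last_ok = start, 0
    while n <= limit:
        elapsed = submit_burst(n)          # hypothetical: seconds or None
        if elapsed is None:
            print(f"{n} parallel submits: FAIL.")
            break
        print(f"{n} parallel submits: OK. {round(n / elapsed)} jobs/sec")
        last_ok, n = n, n * 2
    else:
        return last_ok                     # never failed within the limit

    lo, hi = last_ok, n                    # lo succeeded, hi failed
    while hi - lo > 1:
        mid = (lo + hi) // 2
        elapsed = submit_burst(mid)
        if elapsed is None:
            print(f"{mid} parallel submits: FAIL.")
            hi = mid
        else:
            print(f"{mid} parallel submits: OK. {round(mid / elapsed)} jobs/sec")
            lo = mid
    print(f"Maximum burst handled with no errors = {lo} jobs")
    return lo
```

The bisection needs only O(log n) extra probes, which matches the 96 → 80 → 72 → 68 → 66 → 67 sequence in the output above.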
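The sustained-throughput check ramps the submission rate by one job per second until the server can no longer keep up over the whole window. Below is a sketch of that structure, again with a hypothetical `submit_one()` helper (returning True on success) standing in for the real submission call; the actual testsubmit.py may pace and verify jobs differently.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def max_sustained_rate(submit_one, window=30, max_rate=100):
    """Increase the rate 1 job/sec at a time; a rate passes only if every
    job submitted during the window succeeds."""
    print(f"Checking maximum sustained throughput over {window} sec window")
    best = 0
    for rate in range(1, max_rate + 1):
        print(f"Testing with rate of {rate} jobs/sec: ", end="", flush=True)
        with ThreadPoolExecutor(max_workers=rate * window) as pool:
            futures = []
            start = time.time()
            for i in range(rate * window):
                # Pace submissions so `rate` jobs start each second.
                time.sleep(max(0.0, start + i / rate - time.time()))
                futures.append(pool.submit(submit_one))
            ok = all(f.result() for f in futures)
        print("OK" if ok else "Failed")
        if not ok:
            break
        best = rate
    print(f"Sustained throughput rate: {best} jobs/sec")
    return best
```

Stepping the rate by 1 job/sec is what gives the finer granularity this patch is after, and the 30-second window helps smooth out the run-to-run variability mentioned above.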