riccardomengoli closed this issue 2 years ago
@Ale142 could you check those negative values in the result.csv file?
@riccardomengoli can you try to run the nodejs script with the number of concurrent clients ranging from 1 to 32? I think the negative values come from poor, inaccurate measurements, probably due to the limited power of my PC. Also, 6 measurements may be too few.
I also posted some results at https://github.com/luco5826/group03-jwticket/pull/5#issuecomment-1079721950, if needed for further plotting. On this topic, it would probably be better to collect more data and compute an average before plotting, no?
I ran the nodejs script again (as before with concurrency 1, 2, 4, 8, 16, 32): now the model and the result.csv seem to be OK. My previous test probably produced wrong results from the JS script for no apparent reason.
I updated the data by running `for (($i = 1); $i -le 256; $i *= 2) { node .\index.js -c $i }`; the model is now OK.
The benchmark is complete, however you may want to do some refactoring of the class.
Is the graph consistent with the Universal Scalability Law? It's probably fair to assume that throughput tends to zero as concurrency grows, but visually the curve also looks a bit flat, so I was wondering.
Are you aware of a way to influence β (the coherency coefficient) in a testing environment?
With the latest commit, the token used in loadtest is now randomized to replicate a "real" environment. The requests that fail do so because the tickets are duplicates.
The plots are consistent with the expected results: the stateless server's curve is pretty much flat since there is no synchronization between requests, while for the stateful server having more concurrent clients limits throughput much more.
Created a method to plot the benchmark results using the lets-plot library.
The goal was to get something like this:
However, reading the data produced by usl4j in the results.csv file, there are some problems after 50 concurrent users: at the moment there is a spike at 49 followed by negative values.