eugenesimakin opened 9 months ago
Tested on 50, 75, 100, 500, and 1000 unique pseudo-users (users were unique for each test run). The container and the tests were launched on the same local machine. Test settings and results are on the screenshots, in ascending order:
So the tests are not clean, because JMeter was stealing CPU and I/O resources from the app itself.
Anyway, your next task is to replace H2 with some good old database (like MySQL or PostgreSQL) and run the tests again. This time it's better to use different machines for running the tests and the app. The "tests" machine should have more hardware resources.
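A minimal sketch of the datasource switch, assuming the app is a Spring Boot service (the stack isn't stated in this thread) and using placeholder connection values:

```
# application.properties: swap the H2 datasource for PostgreSQL
# (host, database name, and credentials below are placeholders)
spring.datasource.url=jdbc:postgresql://localhost:5432/appdb
spring.datasource.username=app
spring.datasource.password=secret
spring.datasource.driver-class-name=org.postgresql.Driver
# validate the existing schema instead of letting Hibernate recreate it
spring.jpa.hibernate.ddl-auto=validate
```

The `org.postgresql` JDBC driver would also need to be added as a dependency in place of (or alongside) `com.h2database:h2`.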
After this task, I think we will need to move to the cloud, in order to make use of a load balancer and a managed DB service.
The test has been repeated on a local group consisting of two machines.

H2 DB (results on screenshots): 100 requests from 100 unique users; 500 requests from 500 unique users; 1000 requests from 1000 unique users.
PostgreSQL results to come...
PostgreSQL DB (results on screenshots): 100 requests from 100 unique users; 500 requests from 500 unique users; 1000 requests from 1000 unique users.
Objectively: Postgres seems to be much, much faster with small numbers of requests. But later, as the number of requests gets closer to 900, freezes occur. Perhaps the reason is that the server machine's hardware is pretty old (HDD, slow CPU).
.... hmmm.... maybe the test machine is not good enough either, even with its SSD....
It looks like the JMeter GUI only makes me think that freezes occur, because the server console runs in the blink of an eye.... at the beginning. But then the freezes really do occur.
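One way to rule the GUI in or out is to run the plan in JMeter's non-GUI mode (the recommended mode for real load tests) and inspect the results file afterwards; `testplan.jmx` below is a placeholder name:

```
# -n = non-GUI mode, -t = test plan, -l = results file
jmeter -n -t testplan.jmx -l results.jtl
```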
The Postgres-based container was reassembled. Trying to get errors by testing with 10000 user requests. Total disaster))) Everything crashed at around 3000 or 4000 requests. Maybe 2500....
At this point it became really slow: 5-7 seconds of freeze, then 10-20 requests go through, then it repeats.
Well, that's it. The test froze at this point. Almost 5 minutes have passed and nothing has happened.
You did a good job.
Decide whether you want to invest more time in this or not. The next tasks might be quite boring and, possibly, expensive.
That's a lot of tables, but I want to ask a different question. Take a look at the accepted answer to this question: https://stackoverflow.com/questions/184814/is-there-some-industry-standard-for-unacceptable-webapp-response-time
If we take a 500 ms response time (on average) as our goal, or requirement, for the public details page only, then how many users can the app handle?
It doesn't really matter whether it's 500 requests or 1000 when the response time is too long for a real user to wait out.
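One way to check a run against the 500 ms target is to average the `elapsed` column straight from JMeter's CSV results file (`.jtl`); a rough Java sketch, assuming the default CSV header and no quoted commas inside the rows:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Prints the average response time (ms) from a JMeter CSV results file.
// Naive parsing: assumes the default header row and no commas inside fields.
public class AvgResponseTime {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Path.of(args[0]));
        int elapsed = List.of(lines.get(0).split(",")).indexOf("elapsed");
        double avg = lines.stream().skip(1)
                .mapToLong(line -> Long.parseLong(line.split(",")[elapsed]))
                .average().orElse(0);
        System.out.printf("average response time: %.1f ms%n", avg);
    }
}
```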
When you answer the question, you can think about (and prove, using some metrics and logs) where the bottlenecks are. Most likely it's the database. How can you improve the performance of the database? Which parts of the application can you cache? Rent a large server in a cloud (expensive)? Use a managed database (more expensive)? You can just explore this, or actually implement it. It's up to you.
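For the database side specifically, PostgreSQL can produce the evidence itself: with slow-statement logging enabled, every query over the response-time budget lands in the server log, ready to be examined with `EXPLAIN ANALYZE`. One line in `postgresql.conf` is enough (500 here matches the budget):

```
# log every statement that takes longer than 500 ms
log_min_duration_statement = 500
```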
Don't want to pay a cloud provider? Just explore and implement caching mechanisms. Figure out the maximum number of users your laptop can handle under the 500 ms average response time requirement.
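As one concrete caching option (an assumption; nothing in the thread says which library the app uses), the rendered public details page could be cached in-process with Caffeine, so repeated hits skip the database entirely; `loadFromDatabase` is a placeholder for the existing query:

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.time.Duration;

// Caches the rendered public details page by id. Entries expire after one
// minute, which bounds how stale a cached page can get.
public class DetailsPageCache {
    private final Cache<Long, String> cache = Caffeine.newBuilder()
            .maximumSize(10_000)
            .expireAfterWrite(Duration.ofMinutes(1))
            .build();

    public String detailsPage(long id) {
        // compute-if-absent: only the first miss pays the database cost
        return cache.get(id, this::loadFromDatabase);
    }

    private String loadFromDatabase(long id) {
        return "..."; // placeholder for the real query + rendering
    }
}
```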
Make one test case.
The test case should:

- save all usernames of the created users in some storage; later they will be used to generate heavy load on the public pages (see the sketch below).
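One simple storage option is a plain CSV file, which a later JMeter test plan can read back with its CSV Data Set Config element; a sketch of the writing side (the file name and class are hypothetical):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

// Appends the usernames generated by the "create users" test case to a CSV
// file, one per line, so JMeter's CSV Data Set Config can feed them into the
// load test against the public pages.
public class UsernameStore {
    private static final Path FILE = Path.of("usernames.csv");

    public static void save(List<String> usernames) throws IOException {
        Files.write(FILE, usernames,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```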
Reference: https://jmeter.apache.org/