Open schovi opened 7 years ago
Ok, let's think about this. Is there any elegant way to measure them?
@tbrand A few days ago I saw someone use Docker to measure the performance of something. Maybe that could be the way.
BTW: using Docker could make this easier to run and integrate for more people. Have you ever considered using Docker here?
That's a good approach :+1: If you have a specific reference, please let me know.
I'm thinking of using Docker; give me some time to do that. :smile:
@tbrand I tried to put something together quickly; it runs Docker for Crystal and Rails. https://github.com/tbrand/which_is_the_fastest/pull/33
From a consistency standpoint, you could grab a snapshot of system load and ram usage at the 3/4 mark of the test. That would give you a fairly consistent "under stress" measurement that could be shown as a percentage of the whole.
Or snapshots at certain intervals as well.
Sounds pretty good:+1:
This might be good? https://docs.docker.com/engine/reference/commandline/stats/#examples
Ooo, wonder when that was added, looks like it may work well!
Close?!
Using Docker could have several advantages, and one of them could be getting consumption metrics. #123
I use standard time to measure most things:
alias time='/usr/bin/time -f "\nCPU: %U s\tReal: %e s\tRAM: %M KB"'
In what way is that helpful?
In measuring the max RAM usage '__')
ah, I think this measure will come from the cloud provider we use ... but thanks for the tip :heart:
It would be great if we also had RAM metrics like avg, min, and max.
You could take a snapshot of docker stats for the container every second, I guess. I don't know if there is a simpler solution for this.
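A minimal sketch of that per-second polling idea in Python. The container name `app` is a placeholder, and collecting samples via `docker stats --no-stream --format '{{.MemUsage}}'` is an assumption about how you'd wire it up; the parsing/summarizing helpers work the same regardless of how the samples are gathered:

```python
import re

def parse_mem_mib(mem_usage: str) -> float:
    """Parse the usage half of a docker stats MemUsage column,
    e.g. '21.5MiB / 1.944GiB' -> 21.5 (in MiB)."""
    value = mem_usage.split("/")[0].strip()
    m = re.match(r"([\d.]+)\s*([KMG]i?B)", value)
    if not m:
        raise ValueError(f"unparseable memory value: {value!r}")
    num, unit = float(m.group(1)), m.group(2)
    factor = {"KiB": 1 / 1024, "KB": 1 / 1024,
              "MiB": 1.0, "MB": 1.0,
              "GiB": 1024.0, "GB": 1024.0}[unit]
    return num * factor

def summarize(samples):
    """Reduce per-second samples to the avg/min/max mentioned above."""
    return {"avg": sum(samples) / len(samples),
            "min": min(samples),
            "max": max(samples)}

# One sample per second could come from something like
# (container name 'app' is hypothetical):
#   docker stats app --no-stream --format '{{.MemUsage}}'
# Demo on canned output instead of a live container:
raw = ["21.5MiB / 1.944GiB", "30MiB / 1.944GiB", "1.2GiB / 1.944GiB"]
print(summarize([parse_mem_mib(r) for r in raw]))
```

Note that `--no-stream` makes each invocation return a single snapshot instead of a live-updating stream, which is what you want for interval sampling.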
One library can have awesome performance at the cost of ten times the CPU consumption of another that performs only about 2 times worse. I think it is an important metric.
Metrics could be normalized to something like "10 requests per 1% CPU or per 1 MB RAM" (just an idea :)
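That normalization idea could be sketched like this; the function name and the example numbers are entirely hypothetical, just to show how a faster framework can still lose the resource-normalized comparison:

```python
def normalized_scores(rps: float, avg_cpu_pct: float, avg_ram_mb: float):
    """Normalize throughput by resource cost: requests/sec per 1% CPU
    and per 1 MB of RAM (the '10 requests per 1% CPU' idea above)."""
    return {"req_per_cpu_pct": rps / avg_cpu_pct,
            "req_per_mb_ram": rps / avg_ram_mb}

# Hypothetical numbers: the first framework serves twice the raw
# throughput but burns far more CPU, so it scores lower per 1% CPU.
fast_hungry = normalized_scores(rps=100_000, avg_cpu_pct=95, avg_ram_mb=500)
slow_lean = normalized_scores(rps=50_000, avg_cpu_pct=20, avg_ram_mb=80)
print(fast_hungry)
print(slow_lean)
```

The raw and normalized numbers could then be shown side by side in the results table, so readers can weigh throughput against resource cost themselves.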