ondrejhlavacek opened this issue 7 years ago
After switching the storage driver, everything has to be rebuilt from scratch (which is a great test in itself).
Everything is now up and running on devicemapper.
# time docker ps -a
real 0m8.412s
user 0m0.036s
sys 0m0.052s
Then it apparently gets cached and runs faster.
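To put numbers on the caching effect, a loop like this can sample the `docker ps -a` latency over time (a minimal sketch; the interval and units are arbitrary):

```sh
# Sample `docker ps -a` latency every 10 seconds to see whether the
# first call is slow and subsequent (cached) calls are faster.
while true; do
  start=$(date +%s%N)
  sudo docker ps -a > /dev/null
  end=$(date +%s%N)
  echo "$(date -Is) $(( (end - start) / 1000000 )) ms"
  sleep 10
done
```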
Running it on overlay2 now.
Hmm, I let it stew for a while and `docker ps` got slow too.
I'll try running the orchestration on top of that load, and then play around with the stress tests some more. I have a feeling I really may have overloaded the instance, so nothing runs; I'll take it a bit slower this time :-)
Integration orchestration, instance under full load, overlay2: https://connection.keboola.com/admin/projects/395/orchestrations/292003114/jobs/402917584
And alongside that I'll try running this in a loop:
time sudo docker run --rm --volume=/tmp:/data alpine sh -c "chown 501 /data -R && chmod -R u+wrX /data"
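A minimal sketch of that loop (the iteration count is an arbitrary choice):

```sh
# Run the chown/chmod benchmark repeatedly and keep only the wall-clock
# time, so latency spikes under load show up run by run.
for i in $(seq 1 50); do
  ( time sudo docker run --rm --volume=/tmp:/data alpine \
      sh -c "chown 501 /data -R && chmod -R u+wrX /data" ) 2>&1 | grep ^real
done
```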
overlay2, now under full load plus the integration orchestration:
$ time sudo docker run --rm --volume=/tmp:/data alpine sh -c "chown 501 /data -R && chmod -R u+wrX /data"
real 3m10.875s
Interesting observation: once the IO loads finished, everything sped right back up.
I'll verify the same on devicemapper and then try experimenting with different EBS volumes.
CPU is maxed out, but Docker's responses are snappy.
Well, when it was stalling on us, there was no CPU load at all, but there were spikes in network use.
Hmm, good point. I'll isolate the network and IO stress tests and try them separately; a rough sketch is below.
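A rough sketch of how the two loads can be generated in isolation (file sizes, paths, and the S3 bucket name are placeholders):

```sh
# Pure IO write load: write directly to the volume, bypassing the page
# cache with oflag=direct so the block device itself is stressed.
dd if=/dev/zero of=/tmp/io-stress.bin bs=1M count=4096 oflag=direct

# Network (tx) + read load: push an existing file to S3; this reads
# from disk and saturates outbound network, but creates no new writes.
aws s3 cp /tmp/io-stress.bin s3://example-stress-bucket/io-stress.bin
```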
time sudo docker run --rm --volume=/tmp:/data alpine sh -c "chown 501 /data -R && chmod -R u+wrX /data"
overlay2 baseline
overlay2, IO load (write, 4 containers, no IO throttling)
overlay2, IO load (read/write, 4 containers, no throttling, CPU fully loaded)
overlay2, network + read load (tx, 4 containers, no throttling, transfer to AWS)
overlay2, CPU load (4 containers running R)
devicemapper baseline
devicemapper, IO load (write, 4 containers, no IO throttling)
devicemapper, IO load (read/write, 4 containers, no throttling, CPU fully loaded)
devicemapper, network + read load (tx, 4 containers, no throttling, transfer to AWS)
devicemapper, CPU load (4 containers running R)
vfs baseline
time sudo docker build . -t test --no-cache
(https://github.com/keboola/docker-custom-php/blob/master/Dockerfile)
overlay2 baseline
overlay2, IO load (write, 4 containers, no IO throttling)
overlay2, IO load (read/write, 4 containers, no throttling, CPU fully loaded)
overlay2, network + read load (tx, 4 containers, no throttling, transfer to AWS)
overlay2, CPU load (4 containers running R)
devicemapper baseline
devicemapper, IO load (write, 4 containers, no IO throttling)
devicemapper, IO load (read/write, 4 containers, no throttling, CPU fully loaded)
devicemapper, network + read load (tx, 4 containers, no throttling, transfer to AWS)
devicemapper, CPU load (4 containers running R)
vfs baseline
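Since a single uncached build can be skewed by a one-off glitch, the build benchmark above is worth repeating; a sketch (three runs is an arbitrary choice):

```sh
# Repeat the uncached build a few times so one bad run doesn't skew
# the comparison between storage drivers.
for i in 1 2 3; do
  ( time sudo docker build . -t test --no-cache ) 2>&1 | grep ^real
done
```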
devicemapper
$ time sudo docker run --volume /home/deploy/stresstest-01-a:/data --volume /tmp/stresstest-01-a:/tmp --memory 8192m --memory-swap 8192m --net bridge --cpus 2 --env KBC_DATADIR=/data/ --name stresstest-01-a stresstest-01-a
4000000 rows with 1 columns by 1000 bytes (3815 MB) generated in 185.63606309891 seconds
4000000 rows with 1 columns by 1000 bytes (3815 MB) split into 10 files using CsvFile in 530.50450515747 seconds
3815 MB file uploaded to S3 using 'upload' method in 223.22940206528 seconds
3815 MB file uploaded to S3 using 'putObject' method in 399.95189595222 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'uploadAsync' method in 110.88334989548 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'putObjectAsync' method in 572.76816511154 seconds
$ time sudo docker run --volume /home/deploy/stresstest-01-b:/data --volume /tmp/stresstest-01-b:/tmp --memory 8192m --memory-swap 8192m --net bridge --cpus 2 --env KBC_DATADIR=/data/ --name stresstest-01-b stresstest-01-b
4000000 rows with 1 columns by 1000 bytes (3815 MB) generated in 194.08733892441 seconds
4000000 rows with 1 columns by 1000 bytes (3815 MB) split into 10 files using CsvFile in 563.34004592896 seconds
3815 MB file uploaded to S3 using 'upload' method in 255.42728185654 seconds
3815 MB file uploaded to S3 using 'putObject' method in 500.54386901855 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'uploadAsync' method in 374.98837304115 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'putObjectAsync' method in 263.79122304916 seconds
$ time sudo docker run --volume /home/deploy/stresstest-01-c:/data --volume /tmp/stresstest-01-c:/tmp --memory 8192m --memory-swap 8192m --net bridge --cpus 2 --env KBC_DATADIR=/data/ --name stresstest-01-c stresstest-01-c
4000000 rows with 1 columns by 1000 bytes (3815 MB) generated in 214.85957789421 seconds
4000000 rows with 1 columns by 1000 bytes (3815 MB) split into 10 files using CsvFile in 518.7354388237 seconds
3815 MB file uploaded to S3 using 'upload' method in 264.15709090233 seconds
3815 MB file uploaded to S3 using 'putObject' method in 508.429489851 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'uploadAsync' method in 382.1209089756 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'putObjectAsync' method in 442.99249696732 seconds
$ time sudo docker run --volume /home/deploy/stresstest-01-d:/data --volume /tmp/stresstest-01-d:/tmp --memory 8192m --memory-swap 8192m --net bridge --cpus 2 --env KBC_DATADIR=/data/ --name stresstest-01-d stresstest-01-d
4000000 rows with 1 columns by 1000 bytes (3815 MB) generated in 209.45598912239 seconds
4000000 rows with 1 columns by 1000 bytes (3815 MB) split into 10 files using CsvFile in 513.48875117302 seconds
3815 MB file uploaded to S3 using 'upload' method in 274.20138287544 seconds
3815 MB file uploaded to S3 using 'putObject' method in 544.93667316437 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'uploadAsync' method in 385.1067841053 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'putObjectAsync' method in 434.19953203201 seconds
overlay2
$ time sudo docker run --volume /home/deploy/stresstest-01-a:/data --volume /tmp/stresstest-01-a:/tmp --memory 8192m --memory-swap 8192m --net bridge --cpus 2 --env KBC_DATADIR=/data/ --name stresstest-01-a stresstest-01-a
4000000 rows with 1 columns by 1000 bytes (3815 MB) generated in 186.00001692772 seconds
4000000 rows with 1 columns by 1000 bytes (3815 MB) split into 10 files using CsvFile in 515.93590307236 seconds
3815 MB file uploaded to S3 using 'upload' method in 223.27499222755 seconds
3815 MB file uploaded to S3 using 'putObject' method in 399.70661902428 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'uploadAsync' method in 114.55358815193 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'putObjectAsync' method in 573.85721898079 seconds
$ time sudo docker run --volume /home/deploy/stresstest-01-b:/data --volume /tmp/stresstest-01-b:/tmp --memory 8192m --memory-swap 8192m --net bridge --cpus 2 --env KBC_DATADIR=/data/ --name stresstest-01-b stresstest-01-b
4000000 rows with 1 columns by 1000 bytes (3815 MB) generated in 232.66179084778 seconds
4000000 rows with 1 columns by 1000 bytes (3815 MB) split into 10 files using CsvFile in 513.63398194313 seconds
3815 MB file uploaded to S3 using 'upload' method in 262.86685800552 seconds
3815 MB file uploaded to S3 using 'putObject' method in 542.16835808754 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'uploadAsync' method in 372.02717995644 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'putObjectAsync' method in 470.6129181385 seconds
$ time sudo docker run --volume /home/deploy/stresstest-01-c:/data --volume /tmp/stresstest-01-c:/tmp --memory 8192m --memory-swap 8192m --net bridge --cpus 2 --env KBC_DATADIR=/data/ --name stresstest-01-c stresstest-01-c
4000000 rows with 1 columns by 1000 bytes (3815 MB) generated in 234.57685494423 seconds
4000000 rows with 1 columns by 1000 bytes (3815 MB) split into 10 files using CsvFile in 513.85321807861 seconds
3815 MB file uploaded to S3 using 'upload' method in 258.25743412971 seconds
3815 MB file uploaded to S3 using 'putObject' method in 515.28938698769 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'uploadAsync' method in 387.1915678978 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'putObjectAsync' method in 477.84036588669 seconds
$ time sudo docker run --volume /home/deploy/stresstest-01-d:/data --volume /tmp/stresstest-01-d:/tmp --memory 8192m --memory-swap 8192m --net bridge --cpus 2 --env KBC_DATADIR=/data/ --name stresstest-01-d stresstest-01-d
4000000 rows with 1 columns by 1000 bytes (3815 MB) generated in 234.0717651844 seconds
4000000 rows with 1 columns by 1000 bytes (3815 MB) split into 10 files using CsvFile in 518.4071290493 seconds
3815 MB file uploaded to S3 using 'upload' method in 253.36659407616 seconds
3815 MB file uploaded to S3 using 'putObject' method in 515.35333395004 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'uploadAsync' method in 381.43002700806 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'putObjectAsync' method in 400.50829100609 seconds
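The transcripts above show the containers being started one at a time; a sketch of launching all four in parallel to create the combined load (whether the original runs were actually parallel isn't clear from the log):

```sh
# Start the four stress-test containers in parallel and wait for all
# of them to finish; --rm is added here so repeated runs don't collide
# on the fixed container names.
for suffix in a b c d; do
  sudo docker run --rm \
    --volume /home/deploy/stresstest-01-$suffix:/data \
    --volume /tmp/stresstest-01-$suffix:/tmp \
    --memory 8192m --memory-swap 8192m --net bridge --cpus 2 \
    --env KBC_DATADIR=/data/ \
    --name stresstest-01-$suffix stresstest-01-$suffix &
done
wait
```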
I put some code snippets in https://github.com/keboola/io-stress-debug
For devicemapper they write that the zero-configuration setup (loop-lvm) "has very poor performance"; that's not the direct-lvm configuration.
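Which mode an instance is actually in can be checked from `docker info`; loop-lvm shows a loopback "Data file" in the storage driver section, while direct-lvm uses a real block device:

```sh
# Inspect the storage driver details; with zero-configuration
# devicemapper (loop-lvm) the data file is /dev/loop0 or similar.
sudo docker info | grep -iA 12 'storage driver'
```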
Yeah, but I had nothing left to go on, I had to try something :-)
devicemapper
overlay2
vfs
Todo: try the same with IO throttling.
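Docker has per-device limits for this; a sketch of what the throttled benchmark run could look like (the /dev/xvda device path is an assumption for this instance type):

```sh
# Cap the container's write bandwidth against the root volume at
# 50 MB/s; --device-read-bps limits reads the same way.
sudo docker run --rm \
  --device-write-bps /dev/xvda:50mb \
  --volume=/tmp:/data alpine \
  sh -c "chown 501 /data -R && chmod -R u+wrX /data"
```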
The first overlay2 test (https://github.com/keboola/docker-bundle/issues/198#issuecomment-396052207) was brutally slow and was probably some glitch; I've now run it twice in a row and it looks better (comparable to devicemapper). I've updated the comment.
IO throttling doesn't help at all; apparently 50m is too high a limit :-)
I started testing vfs, but I'm stopping it. The speed at which it consumes disk space is fascinating! With 50 GB for Docker it can't even get through the integration orchestration.
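Watching it is easy enough; a trivial sketch of monitoring Docker's disk consumption while the orchestration runs:

```sh
# Report Docker's disk usage every 30 seconds; with vfs every layer of
# every container is a full copy, so this grows alarmingly fast.
watch -n 30 'sudo docker system df && df -h /var/lib/docker'
```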
🏆 So the winner is apparently overlay2; I'll try to roll it out in the next few days.
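The switch itself is a daemon-level setting; a sketch of the rollout step (the config path is the Docker default, and note that images built under the old driver become invisible, per the first comment in this thread):

```sh
# Select overlay2 in the daemon config and restart Docker; existing
# images and containers from the old storage driver disappear from
# view and everything has to be rebuilt or re-pulled.
echo '{ "storage-driver": "overlay2" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
```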
Will there be a blog post?
If we manage to roll it out successfully, there might be.
overlay2 is gradually being rolled out to production. I'm tracking it here: https://github.com/keboola/syrup-router/pull/99#issuecomment-396596708
Cures the chown script in https://github.com/keboola/syrup-router/issues/64