keboola / docker-bundle

Docker Runner component
MIT License

Run container as a non-root user #198

Open ondrejhlavacek opened 7 years ago

ondrejhlavacek commented 7 years ago

Fixes the slow chown script described in https://github.com/keboola/syrup-router/issues/64

ondrejhlavacek commented 6 years ago

After changing the storage driver, everything has to be rebuilt from scratch (which is a great test in itself).

ondrejhlavacek commented 6 years ago

Everything is now running on devicemapper.

ondrejhlavacek commented 6 years ago
# time docker ps -a
real    0m8.412s
user    0m0.036s
sys     0m0.052s

After that it apparently gets cached and subsequent runs are faster.

ondrejhlavacek commented 6 years ago

Now running on overlay2.

ondrejhlavacek commented 6 years ago

Hm, I let it stew for a while and docker ps got slow there too.

ondrejhlavacek commented 6 years ago

I'll add the orchestration to the load and then play around some more with the stress tests. I have a feeling I may have genuinely overloaded the instance so that nothing ran at all; I'll try taking it a bit slower :-)

Integration orchestration, instance under full load, overlay2: https://connection.keboola.com/admin/projects/395/orchestrations/292003114/jobs/402917584

ondrejhlavacek commented 6 years ago

And while that's running I'll also run this in a loop:

time sudo docker run --rm --volume=/tmp:/data alpine sh -c "chown 501 /data -R && chmod -R u+wrX /data"
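
A small harness for running that repeatedly and logging per-run wall time could look like this (a sketch; `time_cmd` is a hypothetical helper, and on the test host its argument would be the `docker run` command above):

```shell
# time_cmd N CMD...: run CMD N times, printing whole-second wall time per run.
# Hypothetical helper for eyeballing whether the chown container slows down
# under load; on the instance, wrap the docker run command from above with it.
time_cmd() {
  n=$1; shift
  i=1
  while [ "$i" -le "$n" ]; do
    start=$(date +%s)
    "$@"
    end=$(date +%s)
    echo "run $i: $((end - start))s"
    i=$((i + 1))
  done
}

# e.g. on the instance:
# time_cmd 5 sudo docker run --rm --volume=/tmp:/data alpine \
#   sh -c "chown 501 /data -R && chmod -R u+wrX /data"
```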
ondrejhlavacek commented 6 years ago

overlay2, now under full load plus the integration orchestration:

$ time sudo docker run --rm --volume=/tmp:/data alpine sh -c "chown 501 /data -R && chmod -R u+wrX /data"

real    3m10.875s
ondrejhlavacek commented 6 years ago

Interesting observation: once the IO loads finished, everything sped right back up.

ondrejhlavacek commented 6 years ago

I'll verify the same on devicemapper and then try experimenting with different EBS volumes.

ondrejhlavacek commented 6 years ago

CPU is maxed out, but Docker's responses are snappy.

[screenshot]

odinuv commented 6 years ago

Well, when it was hanging for us there was no CPU load, but there were spikes in network usage.

ondrejhlavacek commented 6 years ago

Hm, good point. I'll isolate the network and IO stress tests and try them separately.

ondrejhlavacek commented 6 years ago

chown container test

test command

time sudo docker run --rm --volume=/tmp:/data alpine sh -c "chown 501 /data -R && chmod -R u+wrX /data"

test runs

overlay2 baseline

overlay2, io load (write, 4 containers, no io throttling)

overlay2, io load (read/write, 4 containers, no throttling, CPU fully loaded)

overlay2, network + read load (tx, 4 containers, no throttling, transfer to AWS)

overlay2, cpu load (4 containers running R)

devicemapper baseline

devicemapper, io load (write, 4 containers, no io throttling)

devicemapper, io load (read/write, 4 containers, no throttling, CPU fully loaded)

devicemapper, network + read load (tx, 4 containers, no throttling, transfer to AWS)

devicemapper, cpu load (4 containers running R)

vfs baseline
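
The "io load (write, 4 containers)" rows above boil down to four parallel sequential writers. Stripped of Docker, the load generator is roughly this (a sketch; the function name, paths, sizes, and the `dd` recipe are assumptions about the pattern, not the exact snippets used):

```shell
# io_write_load DIR MB: start four parallel sequential writers (one per
# "container" in the test matrix above) and wait for them all to finish.
# In the real runs each dd ran inside its own container so the writes hit
# the storage driver; this local variant only sketches the IO pattern.
io_write_load() {
  dir=$1
  mb=$2
  for i in 1 2 3 4; do
    dd if=/dev/zero of="$dir/load-$i" bs=1M count="$mb" 2>/dev/null &
  done
  wait
}
```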

ondrejhlavacek commented 6 years ago

docker build test

test command

time sudo docker build . -t test --no-cache

(https://github.com/keboola/docker-custom-php/blob/master/Dockerfile)

test runs

overlay2 baseline

overlay2, io load (write, 4 containers, no io throttling)

overlay2, io load (read/write, 4 containers, no throttling, CPU fully loaded)

overlay2, network + read load (tx, 4 containers, no throttling, transfer to AWS)

overlay2, cpu load (4 containers running R)

devicemapper baseline

devicemapper, io load (write, 4 containers, no io throttling)

devicemapper, io load (read/write, 4 containers, no throttling, CPU fully loaded)

devicemapper, network + read load (tx, 4 containers, no throttling, transfer to AWS)

devicemapper, cpu load (4 containers running R)

vfs baseline

ondrejhlavacek commented 6 years ago

docker custom science app, 4 parallel runs

devicemapper

$ time sudo docker run --volume /home/deploy/stresstest-01-a:/data --volume /tmp/stresstest-01-a:/tmp --memory 8192m --memory-swap 8192m --net bridge --cpus 2 --env KBC_DATADIR=/data/ --name stresstest-01-a stresstest-01-a
4000000 rows with 1 columns by 1000 bytes (3815 MB) generated in 185.63606309891 seconds
4000000 rows with 1 columns by 1000 bytes (3815 MB) split into 10 files using CsvFile in 530.50450515747 seconds

3815 MB file uploaded to S3 using 'upload' method in 223.22940206528 seconds
3815 MB file uploaded to S3 using 'putObject' method in 399.95189595222 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'uploadAsync' method in 110.88334989548 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'putObjectAsync' method in 572.76816511154 seconds

$ time sudo docker run --volume /home/deploy/stresstest-01-b:/data --volume /tmp/stresstest-01-b:/tmp --memory 8192m --memory-swap 8192m --net bridge --cpus 2 --env KBC_DATADIR=/data/ --name stresstest-01-b stresstest-01-b
4000000 rows with 1 columns by 1000 bytes (3815 MB) generated in 194.08733892441 seconds
4000000 rows with 1 columns by 1000 bytes (3815 MB) split into 10 files using CsvFile in 563.34004592896 seconds

3815 MB file uploaded to S3 using 'upload' method in 255.42728185654 seconds
3815 MB file uploaded to S3 using 'putObject' method in 500.54386901855 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'uploadAsync' method in 374.98837304115 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'putObjectAsync' method in 263.79122304916 seconds

$ time sudo docker run --volume /home/deploy/stresstest-01-c:/data --volume /tmp/stresstest-01-c:/tmp --memory 8192m --memory-swap 8192m --net bridge --cpus 2 --env KBC_DATADIR=/data/ --name stresstest-01-c stresstest-01-c
4000000 rows with 1 columns by 1000 bytes (3815 MB) generated in 214.85957789421 seconds
4000000 rows with 1 columns by 1000 bytes (3815 MB) split into 10 files using CsvFile in 518.7354388237 seconds

3815 MB file uploaded to S3 using 'upload' method in 264.15709090233 seconds
3815 MB file uploaded to S3 using 'putObject' method in 508.429489851 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'uploadAsync' method in 382.1209089756 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'putObjectAsync' method in 442.99249696732 seconds

$ time sudo docker run --volume /home/deploy/stresstest-01-d:/data --volume /tmp/stresstest-01-d:/tmp --memory 8192m --memory-swap 8192m --net bridge --cpus 2 --env KBC_DATADIR=/data/ --name stresstest-01-d stresstest-01-d
4000000 rows with 1 columns by 1000 bytes (3815 MB) generated in 209.45598912239 seconds
4000000 rows with 1 columns by 1000 bytes (3815 MB) split into 10 files using CsvFile in 513.48875117302 seconds

3815 MB file uploaded to S3 using 'upload' method in 274.20138287544 seconds
3815 MB file uploaded to S3 using 'putObject' method in 544.93667316437 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'uploadAsync' method in 385.1067841053 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'putObjectAsync' method in 434.19953203201 seconds

overlay2

$ time sudo docker run --volume /home/deploy/stresstest-01-a:/data --volume /tmp/stresstest-01-a:/tmp --memory 8192m --memory-swap 8192m --net bridge --cpus 2 --env KBC_DATADIR=/data/ --name stresstest-01-a stresstest-01-a
4000000 rows with 1 columns by 1000 bytes (3815 MB) generated in 186.00001692772 seconds
4000000 rows with 1 columns by 1000 bytes (3815 MB) split into 10 files using CsvFile in 515.93590307236 seconds

3815 MB file uploaded to S3 using 'upload' method in 223.27499222755 seconds
3815 MB file uploaded to S3 using 'putObject' method in 399.70661902428 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'uploadAsync' method in 114.55358815193 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'putObjectAsync' method in 573.85721898079 seconds

$ time sudo docker run --volume /home/deploy/stresstest-01-b:/data --volume /tmp/stresstest-01-b:/tmp --memory 8192m --memory-swap 8192m --net bridge --cpus 2 --env KBC_DATADIR=/data/ --name stresstest-01-b stresstest-01-b
4000000 rows with 1 columns by 1000 bytes (3815 MB) generated in 232.66179084778 seconds
4000000 rows with 1 columns by 1000 bytes (3815 MB) split into 10 files using CsvFile in 513.63398194313 seconds

3815 MB file uploaded to S3 using 'upload' method in 262.86685800552 seconds
3815 MB file uploaded to S3 using 'putObject' method in 542.16835808754 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'uploadAsync' method in 372.02717995644 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'putObjectAsync' method in 470.6129181385 seconds

$ time sudo docker run --volume /home/deploy/stresstest-01-c:/data --volume /tmp/stresstest-01-c:/tmp --memory 8192m --memory-swap 8192m --net bridge --cpus 2 --env KBC_DATADIR=/data/ --name stresstest-01-c stresstest-01-c
4000000 rows with 1 columns by 1000 bytes (3815 MB) generated in 234.57685494423 seconds
4000000 rows with 1 columns by 1000 bytes (3815 MB) split into 10 files using CsvFile in 513.85321807861 seconds

3815 MB file uploaded to S3 using 'upload' method in 258.25743412971 seconds
3815 MB file uploaded to S3 using 'putObject' method in 515.28938698769 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'uploadAsync' method in 387.1915678978 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'putObjectAsync' method in 477.84036588669 seconds

$ time sudo docker run --volume /home/deploy/stresstest-01-d:/data --volume /tmp/stresstest-01-d:/tmp --memory 8192m --memory-swap 8192m --net bridge --cpus 2 --env KBC_DATADIR=/data/ --name stresstest-01-d stresstest-01-d
4000000 rows with 1 columns by 1000 bytes (3815 MB) generated in 234.0717651844 seconds
4000000 rows with 1 columns by 1000 bytes (3815 MB) split into 10 files using CsvFile in 518.4071290493 seconds

3815 MB file uploaded to S3 using 'upload' method in 253.36659407616 seconds
3815 MB file uploaded to S3 using 'putObject' method in 515.35333395004 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'uploadAsync' method in 381.43002700806 seconds
3815 MB split into 10 files (1 chunks) uploaded to S3 using 'putObjectAsync' method in 400.50829100609 seconds
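
A quick sanity check on the numbers above: averaging the four "generated in" times per storage driver (rounded inputs copied from the logs) shows the two drivers about 10 % apart on raw write speed.

```shell
# Average the "generated in" times from the four runs of each driver.
# Inputs are the (rounded) seconds copied from the logs above.
avg() { printf '%s\n' "$@" | awk '{ s += $1; n++ } END { printf "%.1f", s / n }'; }

echo "devicemapper: $(avg 185.64 194.09 214.86 209.46) s"
echo "overlay2:     $(avg 186.00 232.66 234.58 234.07) s"
```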
ondrejhlavacek commented 6 years ago

I dropped some code snippets into https://github.com/keboola/io-stress-debug

Halama commented 6 years ago

About devicemapper, the docs say the zero-configuration mode "has very poor performance". That's not the direct-lvm configuration.

ondrejhlavacek commented 6 years ago

Yeah, but I had nothing else to go on, I had to try something :-)


ondrejhlavacek commented 6 years ago

integration orchestration

idle (no extra load)

devicemapper

overlay2

vfs

ondrejhlavacek commented 6 years ago

Todo: try the same with io throttling.

ondrejhlavacek commented 6 years ago

The first overlay2 test (https://github.com/keboola/docker-bundle/issues/198#issuecomment-396052207) was brutally slow, probably due to some glitch. I've now run it twice in a row and it looks better (comparable to devicemapper). I've updated the comment.

ondrejhlavacek commented 6 years ago

IO throttling doesn't help at all; apparently even 50m is too high :-)
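
For reference, the throttling being discussed maps to Docker's blkio flags. A sketch of the chown test with a 50 MB/s write cap via `--device-write-bps` (the helper name is hypothetical, and the block device varies per instance, hence the parameter):

```shell
# throttled_chown DEVICE: print the chown-test invocation with a 50 MB/s
# blkio write cap (--device-write-bps). Pipe the output to sh to run it.
throttled_chown() {
  printf 'sudo docker run --rm --device-write-bps %s:50mb --volume=/tmp:/data alpine sh -c "chown 501 /data -R && chmod -R u+wrX /data"\n' "$1"
}

# e.g. on the instance:
# throttled_chown /dev/xvda | sh
```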

ondrejhlavacek commented 6 years ago

I started testing vfs, but I'm stopping it. The speed at which it eats disk space is fascinating! With 50 GB for Docker it can't even finish the integration orchestration.

ondrejhlavacek commented 6 years ago

🏆 So the winner appears to be overlay2; I'll try to roll it out in the next few days.

odinuv commented 6 years ago

Will there be a blog post?

ondrejhlavacek commented 6 years ago

If we manage to roll it out successfully, there might be.

Halama commented 6 years ago

overlay2 is gradually being rolled out to production. I'm tracking it here: https://github.com/keboola/syrup-router/pull/99#issuecomment-396596708