My new guess: the new log streaming does not buffer, and lets the stdout pipe sit in an ioWait until gRPC has acked that the chunk was sent.
So if we just wrap that pipe into a buffered one, it should work.
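For illustration, here is a minimal Go sketch of that idea — treating the gRPC log stream as a plain `io.Writer` and using a hypothetical `streamLogs` helper (neither name is from the actual Woodpecker code); the 10 KB size matches the buffer used in the test images later in this thread:

```go
package logstream

import (
	"bufio"
	"io"
)

// streamLogs drains a step's stdout pipe into the gRPC log sink through a
// buffered writer. Without the buffer, every small write blocks until the
// gRPC ack arrives (the suspected ioWait); with it, the pipe is drained
// continuously and the sink is only hit once per filled buffer.
func streamLogs(stdout io.Reader, grpcStream io.Writer) error {
	buffered := bufio.NewWriterSize(grpcStream, 10*1024) // 10 KB buffer
	defer buffered.Flush()                               // don't lose the tail of the log

	_, err := io.Copy(buffered, stdout)
	return err
}
```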
I had symptoms similar to yours when I built Woodpecker and ephemeral storage was not enough (in Kubernetes, though). Containerd downloads images (some of them over 1 GB), unpacks them, and runs containers/pods/steps. In Kubernetes, pods (steps) are cleared when the whole pipeline finishes. So at the end of a pipeline there was little storage left, and the downloaded images and their unpacked versions were deleted too. The next pipeline then starts from scratch: download, unpack, etc.
We have this problem, too. Not sure with which version it started. We have a repository with a 7 GB image which is much slower than another repository with normal-sized images, so Docker image sizes are definitely involved. Strangely, it also seems to depend on the user who pushed. It was tolerable before, but now it takes forever and causes high IO on the VM.
Well, if it's a pure Docker thing, Woodpecker cannot do much. We also did not change the functionality of the Docker backend, so I would be surprised if it were caused by us.
But we need more data; just some "feeling" doesn't help :sweat:
There are now two images based on top of next:
woodpeckerci/woodpecker-agent:pull_2072
woodpeckerci/woodpecker-agent:pull_2074
which both have a 10 KB buffer for log streaming.
It would be nice if their performance could be tested (I'll do some testing myself too).
@6543 Awesome, thanks! :slightly_smiling_face:
woodpeckerci/woodpecker-agent:pull_2072 seems to solve the problem for us. Will do some further tests to make sure, and also test woodpeckerci/woodpecker-agent:pull_2074.
pull_2074 is in the pipeline: https://ci.woodpecker-ci.org/repos/3780/pipeline/6932
Please tell me which one is faster :) (if there is a difference)
No difference, so the stdlib variant is preferred.
OK, it will be fixed by #2072.
We just need to address how to handle the new input, as chunks are currently somewhat hardcoded as "one line", which would mean there is no "live" log streaming ... so it gets complicated.
You can use the pull preview image as a hotfix until I figure out how to stream logs through a buffer that is flushed time-based.
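A rough sketch of what that time-based flushing could look like, again in Go and with hypothetical names (`timedBuffer` and `copyWithTimedFlush` are illustrations, not the actual implementation): chunks accumulate in a 10 KB buffer, and a background ticker flushes whatever is there, so the UI still gets near-live output even when the buffer never fills.

```go
package logstream

import (
	"bufio"
	"io"
	"sync"
	"time"
)

// timedBuffer wraps a bufio.Writer with a mutex so a background ticker can
// flush it periodically while the copy loop keeps writing.
type timedBuffer struct {
	mu  sync.Mutex
	buf *bufio.Writer
}

func (t *timedBuffer) Write(p []byte) (int, error) {
	t.mu.Lock()
	defer t.mu.Unlock()
	return t.buf.Write(p)
}

func (t *timedBuffer) Flush() error {
	t.mu.Lock()
	defer t.mu.Unlock()
	return t.buf.Flush()
}

// copyWithTimedFlush copies stdout into dst through a 10 KB buffer and
// flushes at least once per interval, so output appears "live" even when
// the buffer is not full.
func copyWithTimedFlush(dst io.Writer, stdout io.Reader, interval time.Duration) error {
	tb := &timedBuffer{buf: bufio.NewWriterSize(dst, 10*1024)}

	done := make(chan struct{})
	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				_ = tb.Flush() // push partial output to keep the log live
			case <-done:
				return
			}
		}
	}()

	_, err := io.Copy(tb, stdout)
	close(done)
	if ferr := tb.Flush(); err == nil { // final flush for the log tail
		err = ferr
	}
	return err
}
```

A caller would use it roughly as `copyWithTimedFlush(grpcStream, stdoutPipe, time.Second)`; the interval trades UI latency against the number of gRPC round trips.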
Unfortunately, I can't reproduce our problem with v1.0 anymore either. I think it really has something to do with the user who started the build somehow...
OK, then this was something else, but there is an issue with steps producing high-volume log output.
Component
agent
Describe the bug
I upgraded to the latest Woodpecker "Next" version a few weeks ago.
Since then, I have noticed that Woodpecker has slowed down a lot (about 5-6 times slower).
I cannot really say why, but as far as I can see, it's related to the amount of console output.
- The `php-lint` step slowed down from 0:30 to 9-10 minutes, and it's "just" statically linting the code and dumping it to the output.
- The `mariadb` service is starting quite fast, no delay found.
- The `phpunit` step slowed from 2 minutes to 4-5 minutes; it has not as much output as `php-lint`, but it's FAR more complex.

The .woodpecker files can be found at https://github.com/friendica/friendica/blob/develop/.woodpecker/.phpunit.yml
2 months ago, it was like this:
![image](https://github.com/woodpecker-ci/woodpecker/assets/379654/d0a311fa-74d7-4e44-a1ad-917f15bcd378)
Now it's:
![image](https://github.com/woodpecker-ci/woodpecker/assets/379654/36cb4259-a54a-4b0f-b5a4-a5cac0426cd8)
I moved to an exclusive, brand-new root server with 128 GB RAM, an i7, and only Docker and Woodpecker on it, to check whether it's an IO/CPU/.. issue --> nope, the result stays the same.
System Info
docker-compose (server - slow node):