Closed csomh closed 3 years ago
It seems that kubernetes cuts the logs somehow and we don't get all of them - is there buffering present?
Not sure. In Sandcastle, ws_client.read_all() should be the one returning the logs.
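For reference, a minimal sketch of an incremental read loop that accumulates output until the websocket closes, instead of relying on a single read_all() call. The is_open/update/peek_stdout/read_stdout names follow the python-kubernetes WSClient interface; FakeWS and drain_stdout are hypothetical stand-ins written purely for illustration, not code from sandcastle:

```python
class FakeWS:
    """Stub mimicking the parts of python-kubernetes' WSClient we use."""

    def __init__(self, chunks):
        self._chunks = list(chunks)

    def is_open(self):
        return bool(self._chunks)

    def update(self, timeout=0):
        pass  # the real client pumps the websocket here

    def peek_stdout(self):
        return bool(self._chunks)

    def read_stdout(self):
        return self._chunks.pop(0)


def drain_stdout(ws):
    # Keep reading until the socket closes, so chunks still in flight
    # are not lost the way a single one-shot read could lose them.
    out = []
    while ws.is_open():
        ws.update(timeout=1)
        if ws.peek_stdout():
            out.append(ws.read_stdout())
    return "".join(out)


logs = drain_stdout(FakeWS(["a" * 4096, "b" * 4096, "c" * 3000]))
```

The loop itself is the point: exec output arrives in chunks, and the reader has to keep draining until the stream is closed.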
it would be nice to create a test case in sandcastle which would first try to reproduce this; actually I'm gonna try that right now :D
Hm, the test is passing just fine (doing cat /etc/services, which is ~11k of text).
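That test can be approximated locally; here is a quick standalone sanity check (a hypothetical snippet, not the actual sandcastle test) showing that a plain pipe passes ~11 kB through without truncation:

```python
import subprocess
import sys

# Rough local analogue of the sandcastle test: push ~11 kB of output
# through a pipe and check that all of it arrives.
proc = subprocess.run(
    [sys.executable, "-c", "print('x' * 11000)"],
    capture_output=True,
    text=True,
    check=True,
)
assert len(proc.stdout) == 11001  # 11000 chars plus the trailing newline
```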
The next thing I checked was the DB field: it's text (https://docs.sqlalchemy.org/en/13/core/type_basics.html#sqlalchemy.types.Text), which says "A variably sized string type." Even psql's docs say the same: "text: variable unlimited length".
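To rule out the column type, here is a small standalone check. It uses stdlib sqlite3 rather than our actual PostgreSQL setup, so it only illustrates the general point that a text column has no type-level length cap; the table and column names are made up:

```python
import sqlite3

# PostgreSQL's "text" type is "variable unlimited length"; sqlite's TEXT
# behaves similarly, so the column type itself is an unlikely culprit
# for truncated logs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE build_logs (content TEXT)")
big = "x" * 100_000  # well past any suspected cutoff point
conn.execute("INSERT INTO build_logs VALUES (?)", (big,))
(stored,) = conn.execute("SELECT content FROM build_logs").fetchone()
assert len(stored) == 100_000  # nothing was cut off
```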
I'm clueless now; it would be nice to repro this in p-s' tests.
@TomasTomecek : node.js is particularly evil in the sense that it often damages stdout/stderr by setting them to non-blocking. We've had a number of hideous workarounds for that as well. However, I tried to work around that, and it didn't help; it still fails the same way.
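A short illustration (in Python, not node.js, purely to show the mechanism) of how a non-blocking stdout loses data: once the pipe buffer fills up, write() returns a short count instead of blocking, and a writer that ignores the return value silently drops the rest.

```python
import fcntl
import os

# Create a pipe and put its write end into non-blocking mode,
# the way node.js sometimes leaves stdout/stderr.
r, w = os.pipe()
flags = fcntl.fcntl(w, fcntl.F_GETFL)
fcntl.fcntl(w, fcntl.F_SETFL, flags | os.O_NONBLOCK)

# Try to write 1 MB; the pipe buffer (typically 64 KiB on Linux) fills
# and write() returns early with a short count instead of blocking.
written = os.write(w, b"x" * 1_000_000)
assert 0 < written < 1_000_000  # the remainder was never written

os.close(r)
os.close(w)
```

This is Unix-specific (fcntl), but it is exactly the failure mode that turns "all the output" into "the first N bytes of the output".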
FTR, as I mentioned in https://github.com/packit/sandcastle/pull/90 I can't reproduce this in a local sandcastle container.
@martinpitt Martin, thanks for the pointers. We need to identify where exactly the problem is: if it's in the service infrastructure (and I'm pretty sure it is), it won't be reproducible locally, which you already proved. I hope we'll have capacity to work on it next sprint.
I can confirm that npm's terminal sorcery indeed breaks openshift's way of picking up output of the pod:
13:25:32.449 commands.py DEBUG Command: make dist-gzip # if it exists already, npm install won't update it; force that so that we always get up-to-date packages
rm -f package-lock.json # unset NODE_ENV, skips devDependencies otherwise
env -u NODE_ENV npm install
npm WARN deprecated eslint-loader@4.0.2: This loader has been deprecated. Please use eslint-webpack-plugin
npm WARN deprecated popper.js@1.16.1: You can find the new Popper v2 at @popperjs/core, this package is dedicated to the legacy v1
npm WARN deprecated chokidar@2.1.8: Chokidar 2 will break on node v14+. Upgrade to chokidar 3 with 15x less dependencies.
npm WARN deprecated core-js@2.6.12: core-js@<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to the actual version of core-js@3.
npm WARN deprecated resolve-url@0.2.1: https://github.com/lydell/resolve-url#deprecated
npm WARN deprecated urix@0.1.0: Please see https://github.com/lydell/urix#deprecated
npm WARN deprecated fsevents@1.2.13: fsevents 1 will break on node v14+ and could be using insecure binaries. Upgrade to fsevents 2.
there should be way more output here; this is directly picked up from running exec and getting the output via python-kubernetes
@TomasTomecek : Could you try again with https://github.com/cockpit-project/cockpit-podman/pull/649/commits/0a81eec79f77ac7e317c5d5f6626db62de3987c5 ? I was hoping that would tame the tty weirdness.
I got fed up and ran all of the stuff in openshift and got this:
More info about useBuiltIns: https://babeljs.io/docs/en/babel-preset-env#usebuiltins
More info about core-js: https://babeljs.io/docs/en/babel-preset-env#corejs
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0x7fb628d415cc node::Abort() [/lib64/libnode.so.72]
2: 0x7fb628b8126c node::OnFatalError(char const*, char const*) [/lib64/libnode.so.72]
3: 0x7fb6290b3b5a v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/lib64/libnode.so.72]
4: 0x7fb6290b3de2 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/lib64/libnode.so.72]
5: 0x7fb629233f99 [/lib64/libnode.so.72]
6: 0x7fb629248927 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/lib64/libnode.so.72]
7: 0x7fb62924968a v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/lib64/libnode.so.72]
8: 0x7fb62924b96c v8::internal::Heap::AllocateRawWithLightRetry(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/lib64/libnode.so.72]
9: 0x7fb62924b9d8 v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/lib64/libnode.so.72]
10: 0x7fb62921339f v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/lib64/libnode.so.72]
11: 0x7fb6295222ce v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [/lib64/libnode.so.72]
12: 0x7fb629bb1a39 [/lib64/libnode.so.72]
make: *** [Makefile:75: dist/manifest.json] Aborted (core dumped)
so it seems that 512 MB is not enough for npm/node.js :/
we already have a task to increase the memory of our worker, so it may be time to get more memory for the sandbox as well
Running the commands in a container with memory limited to 512M yielded the same result for me. 768M seems to be enough for this particular case, though I'm not sure how much of this is used up.
sadly the workaround sh -ec '(make dist-gzip 2>&1 </dev/null) | cat' did not work and the log lines were still missing :/
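For context, what the | cat workaround was trying to achieve: with output going through a pipe, the child process sees isatty() == False and should skip its tty tricks. A hypothetical standalone check of that mechanism (it evidently wasn't enough in our case):

```python
import subprocess
import sys

# When a child's stdout is a pipe (as with `... | cat`), the child sees
# isatty() == False, which is what the workaround relies on to make
# tools like npm behave.
child = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.stdout.isatty())"],
    capture_output=True,
    text=True,
)
is_tty = child.stdout.strip()
assert is_tty == "False"
```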
In this case, logs from the create_archive action were incomplete, and the srpm build had to be run locally to get the logs which actually helped solve the issue. The complete logs would have been something like:
This might be an issue in the way logs are streamed back from sandcastle, but it needs further investigation to tell whether that's indeed the case.
This was noticed by @marusak while working on cockpit-project/cockpit-ostree#166.