porridge opened this issue 9 months ago
I also encountered this problem; the warning is triggered when running a lot of harnesses.
This thread might (or might not) have some hints on how to approach this.

controller-runtime expects this struct for logging. We can likely use logr.New(...) for this, but the existing Logger interface would need to be made compatible with logr.LogSink. It would be more effort, but does it make sense to migrate to zap?
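For anyone exploring that route, here is a minimal sketch of what such an adapter could look like. It assumes kuttl's Logger exposes something like a Logf method (the kuttlLogger interface below is a placeholder, not the real definition), and the key/value formatting is deliberately naive:

```go
package main

import (
	"fmt"

	"github.com/go-logr/logr"
	ctrllog "sigs.k8s.io/controller-runtime/pkg/log"
)

// kuttlLogger is a stand-in for kuttl's existing Logger interface;
// the Logf method is an assumption, not the actual definition.
type kuttlLogger interface {
	Logf(format string, args ...interface{})
}

// logrSink adapts a kuttlLogger to logr.LogSink so controller-runtime
// can write through it.
type logrSink struct {
	l    kuttlLogger
	name string
}

func (s logrSink) Init(logr.RuntimeInfo) {}
func (s logrSink) Enabled(int) bool      { return true }

func (s logrSink) Info(_ int, msg string, kv ...interface{}) {
	s.l.Logf("%s: %s %v", s.name, msg, kv)
}

func (s logrSink) Error(err error, msg string, kv ...interface{}) {
	s.l.Logf("%s: %s: %v %v", s.name, msg, err, kv)
}

func (s logrSink) WithValues(kv ...interface{}) logr.LogSink { return s }

func (s logrSink) WithName(name string) logr.LogSink {
	s.name = s.name + "/" + name
	return s
}

// printfLogger is a trivial kuttlLogger used only for this example.
type printfLogger struct{}

func (printfLogger) Logf(format string, args ...interface{}) {
	fmt.Printf(format+"\n", args...)
}

func main() {
	// A single SetLogger call early in the process is enough to silence
	// the "log.SetLogger(...) was never called" warning.
	ctrllog.SetLogger(logr.New(logrSink{l: printfLogger{}}))
}
```

The existing Logger methods would stay untouched; only a thin sink type is added on top.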
> the existing Logger interface would need to be made compatible with logr.LogSink.

This should be possible without touching the existing methods, right?

What would be the benefits of using zap? Sorry, I never quite figured out the logging landscape of Kubernetes :-)
> > the existing Logger interface would need to be made compatible with logr.LogSink.
>
> This should be possible without touching the existing methods, right?

Not really, since the interface signatures must match.

> What would be the benefits of using zap? Sorry, I never quite figured out the logging landscape of Kubernetes :-)

zap is pretty much the standard for K8s operators now, since it works well with controller-runtime. It's also easy to integrate with cobra and has neat structured logs.
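(For context, controller-runtime already ships a small zap helper under pkg/log/zap, so the wiring can be as short as the sketch below; UseDevMode(true) is just one possible configuration, not a recommendation for kuttl specifically.)

```go
package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	// zap.New returns a logr.Logger backed by zap; registering it once
	// routes controller-runtime's client/cache logs through zap.
	ctrl.SetLogger(zap.New(zap.UseDevMode(true)))
}
```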
> > the existing Logger interface would need to be made compatible with logr.LogSink. This should be possible without touching the existing methods, right?
>
> Not really, since the interface signatures must match.

I don't see any overlapping methods, am I missing something?

> > What would be the benefits of using zap? Sorry, I never quite figured out the logging landscape of Kubernetes :-)
>
> zap is pretty much the standard for K8s operators now, since it works well with controller-runtime. It's also easy to integrate with cobra and has neat structured logs.

Right. However, what matters for kuttl is whether it can act as a proxy to testing.T.Log?
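If proxying to testing.T.Log is the main requirement, one option (a sketch, not necessarily what kuttl should adopt) is go-logr's testr package, which already wraps a *testing.T as a logr.Logger:

```go
package kuttl_test

import (
	"testing"

	"github.com/go-logr/logr/testr"
	ctrllog "sigs.k8s.io/controller-runtime/pkg/log"
)

func TestWithControllerRuntimeLogs(t *testing.T) {
	// testr.New returns a logr.Logger that forwards to t.Log, so
	// controller-runtime output shows up in the normal test output.
	ctrllog.SetLogger(testr.New(t))

	// ... run the harness / controller-runtime clients here ...
}
```

Note that controller-runtime's logger is process-global, so binding it to a single test's t only works cleanly for simple setups.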
I am also running into this, but I only have one harness.
Seeing the same issue on every test run.
Same here.
Please consider that this is making all testing pipelines fail, since the debug output is written to STDERR:
=== RUN kuttl
harness.go:464: starting setup
harness.go:252: running tests with KIND.
harness.go:176: temp folder created /tmp/kuttl2512534945
harness.go:205: node mount point /var/lib/docker/volumes/kuttl-test-0/_data
harness.go:158: Starting KIND cluster
kind.go:66: Adding Containers to KIND...
[controller-runtime] log.SetLogger(...) was never called; logs will not be displayed.
Detected at:
> goroutine 4 [running]:
> runtime/debug.Stack()
> /nix/store/wkbckbd30nlhq4dxzg64q6y4vm1xx4fk-go-1.22.1/share/go/src/runtime/debug/stack.go:24 +0x5e
> sigs.k8s.io/controller-runtime/pkg/log.eventuallyFulfillRoot()
> /home/mowsiany/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.0/pkg/log/log.go:60 +0xcd
> sigs.k8s.io/controller-runtime/pkg/log.(*delegatingLogSink).WithName(0xc00005e780, {0x19e4a47, 0x14})
> /home/mowsiany/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.0/pkg/log/deleg.go:147 +0x3e
> github.com/go-logr/logr.Logger.WithName({{0x1c96d40, 0xc00005e780}, 0x0}, {0x19e4a47?, 0x10?})
> /home/mowsiany/go/pkg/mod/github.com/go-logr/logr@v1.4.2/logr.go:345 +0x36
> sigs.k8s.io/controller-runtime/pkg/client.newClient(0xc000649690?, {0x0, 0xc000444af0, {0x1c98520, 0xc00011c550}, 0x0, 0x0})
> /home/mowsiany/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.0/pkg/client/client.go:118 +0xdb
> sigs.k8s.io/controller-runtime/pkg/client.New(0xc0003bc008?, {0x0, 0xc000444af0, {0x1c98520, 0xc00011c550}, 0x0, 0x0})
> /home/mowsiany/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.0/pkg/client/client.go:98 +0x55
> github.com/kudobuilder/kuttl/pkg/test/utils.NewRetryClient(0xc0003bc008, {0x0, 0xc000444af0, {0x1c98520, 0xc00011c550}, 0x0, 0x0})
> /home/mowsiany/go/src/github.com/kudobuilder/kuttl/pkg/test/utils/kubernetes.go:177 +0xf1
> github.com/kudobuilder/kuttl/pkg/test/utils.WaitForSA(0xc0003bc008, {0x19d0941, 0x7}, {0x19d0941, 0x7})
> /home/mowsiany/go/src/github.com/kudobuilder/kuttl/pkg/test/utils/kubernetes.go:994 +0x85
> github.com/kudobuilder/kuttl/pkg/test.(*Harness).waitForFunctionalCluster(0xc00026e308)
> /home/mowsiany/go/src/github.com/kudobuilder/kuttl/pkg/test/harness.go:294 +0x38
> github.com/kudobuilder/kuttl/pkg/test.(*Harness).Config(0xc00026e308)
> /home/mowsiany/go/src/github.com/kudobuilder/kuttl/pkg/test/harness.go:275 +0x450
> github.com/kudobuilder/kuttl/pkg/test.(*Harness).Client(0xc00026e308, 0xb0?)
> /home/mowsiany/go/src/github.com/kudobuilder/kuttl/pkg/test/harness.go:318 +0x11b
> github.com/kudobuilder/kuttl/pkg/test.(*Harness).Setup(0xc00026e308)
> /home/mowsiany/go/src/github.com/kudobuilder/kuttl/pkg/test/harness.go:466 +0x225
> github.com/kudobuilder/kuttl/pkg/test.(*Harness).Run(0xc00026e308)
> /home/mowsiany/go/src/github.com/kudobuilder/kuttl/pkg/test/harness.go:455 +0x59
> github.com/kudobuilder/kuttl/pkg/kuttlctl/cmd.newTestCmd.func2.1(0xc0001411e0)
> /home/mowsiany/go/src/github.com/kudobuilder/kuttl/pkg/kuttlctl/cmd/test.go:236 +0x105
> testing.tRunner(0xc0001411e0, 0xc00018acf0)
> /nix/store/wkbckbd30nlhq4dxzg64q6y4vm1xx4fk-go-1.22.1/share/go/src/testing/testing.go:1689 +0xfb
> created by testing.(*T).Run in goroutine 1
> /nix/store/wkbckbd30nlhq4dxzg64q6y4vm1xx4fk-go-1.22.1/share/go/src/testing/testing.go:1742 +0x390
harness.go:278: Successful connection to cluster at: https://127.0.0.1:41747/
@tuxerrante what do you mean exactly by "making all testing pipelines fail"? It's certainly not affecting any of my kuttl usage... 🤔
Hi, I mean that if you have configured your internal CI task to not proceed on errors, which I believe is the standard behaviour in most environments, then having kuttl write to stderr makes our pipelines fail until stderr is ignored, which is not something we want.
Thanks
Which environments are these? I cannot remember seeing a testing system that fails when it sees output on stderr (as opposed to failing on a non-zero exit status, which I agree is the sane way). In fact, I recall seeing guidelines which suggest emitting diagnostic output on stderr rather than stdout, even when it is not fatal. 🤔
What happened:
Noticed this warning in a test run:
What you expected to happen:
No warning :-)
How to reproduce it (as minimally and precisely as possible):
Not sure...
Anything else we need to know?:
Environment:
- Kubectl version (kubectl version):
- KUTTL version (kubectl kuttl version): v0.15.1-0.20240201073944-c735bec25959
- Kernel (uname -a):