cconger closed this pull request 5 years ago.
Thank you for your contribution. I've just checked and your commit doesn't appear to be signed-off. That's something we need before your Pull Request can be merged. Please see our contributing guide.
Tip: if you only have one commit so far, run `git commit --amend --signoff` and then `git push --force`.
Since the log lines are now properly preserved from the underlying application, I would be OK with reintroducing the `stderr:` and `stdout:` prefixes, because the two streams do get mashed together in Kubernetes. However, it might also be reasonable to put the burden on the producer to generate that prefix, depending on the function author's use case.
You will be able to retrieve this change through the following release https://github.com/openfaas-incubator/of-watchdog/releases/tag/0.7.0
A multi-arch Docker image can be used (faster, simpler), or you can pull the binary from the releases URL (the older way).
Please could you try the new version with one of the existing templates, like the node10-express and golang-http templates available via `faas-cli template store list` / `pull`?
Doing that now against 0.7.0
Tested generating a new golang-http function:
• `faas new --lang golang-http go-new-fn`
• Modified the template Dockerfile to use `of-watchdog:0.7.0`
• Added a write to stdout in the handler
• `faas up -f go-new-fn`
• `faas invoke go-new-fn`
• `faas logs go-new-fn`
```
WARNING! Communication is not secure, please consider using HTTPS. Letsencrypt.org offers free SSL/TLS certificates.
2019-08-29T21:02:41Z go-new-fn (go-new-fn-5bb8c94959-87vn8) Forking - ./handler []
2019-08-29T21:02:41Z go-new-fn (go-new-fn-5bb8c94959-87vn8) 2019/08/29 21:02:41 Started logging stderr from function.
2019-08-29T21:02:41Z go-new-fn (go-new-fn-5bb8c94959-87vn8) 2019/08/29 21:02:41 Started logging stdout from function.
2019-08-29T21:02:41Z go-new-fn (go-new-fn-5bb8c94959-87vn8) 2019/08/29 21:02:41 OperationalMode: http
2019-08-29T21:02:41Z go-new-fn (go-new-fn-5bb8c94959-87vn8) 2019/08/29 21:02:41 Timeouts: read: 10s, write: 10s hard: 10s.
2019-08-29T21:02:41Z go-new-fn (go-new-fn-5bb8c94959-87vn8) 2019/08/29 21:02:41 Listening on port: 8080
2019-08-29T21:02:41Z go-new-fn (go-new-fn-5bb8c94959-87vn8) 2019/08/29 21:02:41 Writing lock-file to: /tmp/.lock
2019-08-29T21:02:41Z go-new-fn (go-new-fn-5bb8c94959-87vn8) 2019/08/29 21:02:41 Metrics listening on port: 8081
2019-08-29T21:02:52Z go-new-fn (go-new-fn-5bb8c94959-87vn8) 2019/08/29 21:02:52 Returning Hello world, input was: test
2019-08-29T21:02:52Z go-new-fn (go-new-fn-5bb8c94959-87vn8) 2019/08/29 21:02:52 POST / - 200 OK - ContentLength: 28
```
Thanks for doing that.
What I think is hard to distinguish now, is whether we're seeing output from stdout or stderr. There is also no good way to determine if the output was from the watchdog or the function process.
This is unrelated, but the verbosity of the logs command can be turned down a bit with `--instance=false --name=false`.
Tested generating a new node10-express function:
```sh
faas new --lang node10-express test-js-fn
# update Dockerfile in template/node10-express to use of-watchdog:0.7.0
# update image-name to cconger/test-js-fn:latest
# add console.log to invocation body
faas up -f test-js-fn.yml
faas invoke test-js-fn
faas logs test-js-fn --instance=false --name=false
```
Results in:
```
WARNING! Communication is not secure, please consider using HTTPS. Letsencrypt.org offers free SSL/TLS certificates.
2019-08-29T21:17:59Z Forking - node [index.js]
2019-08-29T21:17:59Z 2019/08/29 21:17:59 Started logging stderr from function.
2019-08-29T21:17:59Z 2019/08/29 21:17:59 Started logging stdout from function.
2019-08-29T21:17:59Z 2019/08/29 21:17:59 OperationalMode: http
2019-08-29T21:17:59Z 2019/08/29 21:17:59 Timeouts: read: 15s, write: 15s hard: 10s.
2019-08-29T21:17:59Z 2019/08/29 21:17:59 Listening on port: 8080
2019-08-29T21:17:59Z 2019/08/29 21:17:59 Metrics listening on port: 8081
2019-08-29T21:17:59Z 2019/08/29 21:17:59 Writing lock-file to: /tmp/.lock
2019-08-29T21:17:59Z 2019/08/29 21:17:59 OpenFaaS Node.js listening on port: 3000
2019-08-29T21:18:29Z 2019/08/29 21:18:29 From app: { status: 'You said: "test\\n"' }
2019-08-29T21:18:29Z 2019/08/29 21:18:29 POST / - 200 OK - ContentLength: 34
```
I didn't know about those flags, very helpful!
I agree with your point about it being hard to differentiate (especially since I added "From app" to my own logging output). I'm very OK with returning the `stderr:` and `stdout:` prefixes for the lines now that we've reached line consistency, especially since the streams are collated by Kubernetes without annotation.
The flags were part of the OpenFaaS Insiders updates. I can send you one or two of the past issues. They're also publicly available in the release notes on the repo, but it's harder to keep track of all of them there. https://github.com/users/alexellis/sponsorship
I would like structured logging to be well supported. What do you think if the prefixes are enabled by default, but easy to disable?
Adding the prefix indicating the fd of origin should not be any more detrimental than the timestamp prefix that is already included; the only edge case is when a single payload is larger than 64KB. I'm happy to have it be the default without an option (working on this PR now), but if you think it should be controllable I can add that as well.
If going the controllable route it might be nice to control all prefixes or none.
Opened #80 to re-add them on by default.
Description
- Replaced the 256-byte slurping goroutine with a `bufio.Scanner`-powered one.
- Reduced some code duplication by adding a goroutine spawner for passing through logging.
- Made the HTTP executor use `cmd.Wait` for closing the forked process instead of catching EOFs on stderr/stdout.
- Removed a vestigial field `Stderr` on the `HttpRunner`.
Motivation and Context
This was done to address the gotchas and generally poor user experience I've been having with using stderr and stdout for logging in my wrapped function process. The issues have been listed in #75. There is another proposed solution in #77 which seeks to address the problem by increasing the buffer size (and also allowing writing to stdout).
Where this approach is superior is that it allows a common log-line prefix for lines generated by the watchdog itself and for lines piped from the wrapped function process. It can accomplish this because the scanner's splitting on newlines ensures that line breaks from the wrapped function are preserved (up to a single line size of 64KB). If arbitrary splitting is done based on a buffer size, as we currently do with 256 bytes, we're likely to get split lines that did not exist in the original output. If we just increase that buffer size and pass through from the wrapped process, as in #77, we potentially include line breaks and formatting from the wrapped process that result in non-uniform output in the watchdog's logs.
How Has This Been Tested?
I tested this using a contrived HTTP-mode Go function named `chatty-fn` that just logs the requests it receives along with a bunch of other extra information. I built the `chatty-fn` Docker container and included different watchdogs to wrap it. I then booted the Docker container directly and issued network requests directly to it. When built using the 0.5.3 of-watchdog it yields the following outputs: stdout.log, stderr.log. When building with this branch, the resulting log lines look like: stdout.log, stderr.log.
Testing on Kubernetes
Additionally, I've published two Docker images for `chatty-fn` on Docker Hub: `cconger/chatty-fn:0.5.3` and `cconger/chatty-fn:latest`. I deployed them onto a Kubernetes-based deployment of OpenFaaS.
I tested reading the logs via kubectl: Original, New.
And via `faas-cli logs`: Original, New.
Types of changes
This is a breaking change since it changes the default output location (things can actually appear in stdout now) and drastically changes the shape of output logs.
Documentation or release notes should likely be updated to alert consumers to this behavior change.
Checklist:
`git commit -s`