Hi @nlamirault - sinks.console.encoding.codec is actually required to be set to either json (to print the entire event) or text (to print just the .message field). However, that isn't reflected on the site currently; I'll try to put a fix in for that today.
Let me know if you have any other issues!
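For anyone landing here from search, a minimal sketch of a console sink with the required codec field set (the source name below is a placeholder, not from this thread):

```yaml
sinks:
  console:
    type: console
    inputs:
      - my_source       # placeholder source name
    target: stdout
    encoding:
      codec: json       # "json" prints the entire event; "text" prints only .message
```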
https://github.com/vectordotdev/vector/pull/11645 <- was merged earlier to fix the config examples section; I just cherry-picked it into the live docs version.
https://github.com/vectordotdev/vector/pull/11535 <- noting here that the way we're currently generating the documentation doesn't pass the required field properly into object children. There's a JIRA issue open to have one of our web/docs people take a look at fixing the code there.
Thanks @spencergilbert, I will try that.
It works fine. Thanks!
I'm trying to enable the LogDNA sink like this:
```yaml
sinks:
  console:
    type: console
    inputs:
      - kubernetes
    target: stdout
    encoding:
      codec: json
  logdna:
    type: logdna
    inputs:
      - kubernetes
    api_key: "${LOGDNA_API_KEY}"
    hostname: portefaix-homelab
    healthcheck:
      enabled: true
```
And the pod crashloops with these logs:
```
✖ kubectl -n logging logs -f test-vector-4r5vt
2022-04-05T07:14:22.368086Z  INFO vector::app: Log level is enabled. level="vector=info,codec=info,vrl=info,file_source=info,tower_limit=trace,rdkafka=info,buffers=info"
2022-04-05T07:14:22.368899Z  INFO vector::app: Loading configs. paths=["/etc/vector"]
2022-04-05T07:14:22.463302Z ERROR vector::cli: Configuration error. error=sinks.logdna: invalid type: unit value, expected a string at line 14 column 12
```
Any idea, @spencergilbert?
The error messages that bubble up from our serializing/deserializing of the config leave much to be desired... Looking at the configuration options for that sink, my initial thinking would be that LOGDNA_API_KEY isn't getting set. Could you share your full values.yaml?
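One quick way to check that theory is to inspect the Secret that should back the variable (a sketch; the Secret name and key below are assumptions, not from this thread):

```shell
# Inspect the Secret directly rather than exec'ing into the crashlooping pod,
# which may not accept exec. Name "logdna-credentials" and key "api-key" are hypothetical.
kubectl -n logging get secret logdna-credentials -o jsonpath='{.data.api-key}' | base64 -d
```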
OK, it was a bad configuration for the secret, so the environment variable was not correctly set. Thanks.
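For anyone hitting the same "invalid type: unit value" error, a sketch of injecting the API key into the Vector Helm chart from a Kubernetes Secret; the env list follows the common chart pattern, and the Secret name and key are assumptions, so verify against your chart version:

```yaml
# values.yaml (sketch; Secret name/key are hypothetical)
env:
  - name: LOGDNA_API_KEY
    valueFrom:
      secretKeyRef:
        name: logdna-credentials   # hypothetical Secret holding the LogDNA key
        key: api-key               # hypothetical key within that Secret
```

If the variable is unset, "${LOGDNA_API_KEY}" in the sink config resolves to nothing, which is what produces the "expected a string" error above.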
Hi, I'm trying out Vector on my K3S homelab. I set the Helm chart values like this:
All pods crashloop with this error:
Any idea about a configuration problem? Thanks.