After doing more checks I still see a gap: even when setting the Helm override option (max_mb: 1023, for example), the same message is logged. I tried to follow the code changes on both ends and I failed to find where the configured value is mapped through to https://github.com/falcosecurity/libs/blob/master/userspace/libsinsp/socket_handler.h. @ldegio @zuc, any idea?
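For anyone following along, here is a minimal sketch of the kind of size guard in socket_handler.h that emits the "read more than ... MB ... Giving up" message. All names below (socket_handler_sketch, default_data_limit, set_data_limit, check_data_size) are illustrative assumptions, not the actual libsinsp members; the point is that unless something overrides the handler's limit with the configured max_mb, the compiled-in 100 MB default is what triggers:

#include <cstddef>
#include <sstream>
#include <stdexcept>
#include <string>

class socket_handler_sketch
{
public:
    // Hypothetical default mirroring the 100 MB seen in the log.
    static constexpr size_t default_data_limit = 100 * 1024 * 1024;

    // Would need to be called with metadata_download.max_mb * 1024 * 1024
    // for the Falco setting to have any effect here.
    void set_data_limit(size_t limit) { m_data_limit = limit; }

    void check_data_size(const std::string& data, const std::string& url)
    {
        if(data.size() >= m_data_limit)
        {
            std::ostringstream os;
            os << "read more than " << (m_data_limit / (1024 * 1024))
               << " MB of data from " << url << ". Giving up";
            throw std::runtime_error(os.str());
        }
    }

private:
    size_t m_data_limit = default_data_limit;
};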
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/falcosecurity/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/falcosecurity/community.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Provide feedback via https://github.com/falcosecurity/community.
/close
@poiana: Closing this issue.
Describe the bug
When check logs of daemonset I see giving up on read more than 100 MB of data for k8s_replicaset_handler_state while setting metadata_download.max_mb is set to 200 in configmap/init yamls. k8s_handler (k8s_replicaset_handler_state::collect_data()[https://x.x.x.x] an error occurred while receiving data from k8s_replicaset_handler_state, m_blocking_socket=1, m_watching=0, Socket handler (k8s_replicaset_handler_state): read more than 100 MB of data from https://x.x.x.x/apis/apps/v1/replicasets?pretty=false (104858058 bytes, 104263 reads). Giving up
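A quick sanity check on those numbers (the byte count is taken verbatim from the log line above) shows the read was cut off just past 100 MiB, i.e. at the built-in default, well short of the configured 200 MB:

#include <cstdio>

int main()
{
    const long long reported      = 104858058LL;          // bytes, from the log
    const long long limit_100_mib = 100LL * 1024 * 1024;  // 104857600
    const long long limit_200_mib = 200LL * 1024 * 1024;  // 209715200

    // 458 bytes past the 100 MiB default...
    std::printf("reported - 100 MiB = %lld bytes\n", reported - limit_100_mib);
    // ...and roughly 100 MiB short of the configured 200 MB limit.
    std::printf("200 MiB - reported = %lld bytes\n", limit_200_mib - reported);
    return 0;
}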
How to reproduce it
Deployed with Helm chart 3.1.3; the chart was rendered and deployed as plain YAML by ArgoCD, on EKS 1.22. The ConfigMap is seen to pick up the change, but the log message persists and the daemonset pod was restarted afterwards. values.yaml setup:

falcosidekick:
  enabled: true
  config:
    fission:
      function: "falco-pod-delete"
  webui:
    enabled: true
falco:
  json_output: true
  # Container orchestrator metadata fetching params
  metadata_download:
    # Max allowed response size (in Mb) when fetching metadata from Kubernetes.
    max_mb: 200
driver:
  kind: ebpf
Expected behaviour
Modifying this value via Helm and the ConfigMap should raise the limit to 200 MB, and the log message should reflect that.
Screenshots
Environment
cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)" NAME="Debian GNU/Linux" VERSION_ID="11" VERSION="11 (bullseye)" VERSION_CODENAME=bullseye ID=debian HOME_URL="https://www.debian.org/" SUPPORT_URL="https://www.debian.org/support" BUG_REPORT_URL="https://bugs.debian.org/"
uname -a
Linux falco-vkb7s 5.4.231-137.341.amzn2.x86_64 #1 SMP Tue Feb 14 21:50:55 UTC 2023 x86_64 GNU/Linux
Additional context
It can be seen from the rendered falco.yaml that the change from the ConfigMap is loaded but has no effect (perhaps only the log message doesn't reflect it and the data actually exceeded 200 MB; I will check this). A sketch of the plumbing I would expect follows the config dump below:
cat /etc/falco/falco.yaml
buffered_outputs: false
file_output:
  enabled: false
  filename: ./events.txt
  keep_alive: false
grpc:
  bind_address: unix:///run/falco/falco.sock
  enabled: false
  threadiness: 0
grpc_output:
  enabled: false
http_output:
  enabled: true
  url: http://falco-falcosidekick:2801
  user_agent: falcosecurity/falco
json_include_output_property: true
json_include_tags_property: true
json_output: true
libs_logger:
  enabled: false
  severity: debug
load_plugins: []
log_level: info
log_stderr: true
log_syslog: true
metadata_download:
  chunk_wait_us: 1000
  max_mb: 200
  watch_freq_sec: 1
modern_bpf:
  cpus_for_each_syscall_buffer: 2
output_timeout: 2000
outputs:
  max_burst: 1000
  rate: 1
plugins:
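For reference, this is roughly the plumbing I would expect between falco.yaml and libsinsp: a minimal sketch, assuming an inspector call along the lines of set_metadata_download_params. The type and member names here are hypothetical stand-ins for the real Falco/libsinsp types; the sketch only illustrates where the 200 MB value has to travel, including the MB-to-bytes conversion:

#include <cstdint>

// Hypothetical stand-in for Falco's parsed configuration.
struct falco_configuration_sketch
{
    uint32_t m_metadata_download_max_mb         = 100;  // metadata_download.max_mb
    uint32_t m_metadata_download_chunk_wait_us  = 1000; // metadata_download.chunk_wait_us
    uint32_t m_metadata_download_watch_freq_sec = 1;    // metadata_download.watch_freq_sec
};

// Hypothetical stand-in for the inspector that owns the socket handler.
struct inspector_sketch
{
    uint32_t m_data_max_b = 100 * 1024 * 1024; // compiled-in default, 100 MB

    // If this call (or its real equivalent) is never made, the socket
    // handler keeps its compiled-in 100 MB default.
    void set_metadata_download_params(uint32_t data_max_b,
                                      uint32_t data_chunk_wait_us,
                                      uint32_t data_watch_freq_sec)
    {
        m_data_max_b = data_max_b;
        (void)data_chunk_wait_us;
        (void)data_watch_freq_sec;
    }
};

void apply_metadata_download_config(inspector_sketch& insp,
                                    const falco_configuration_sketch& cfg)
{
    // Note the MB -> bytes conversion: 200 * 1024 * 1024 = 209715200.
    insp.set_metadata_download_params(cfg.m_metadata_download_max_mb * 1024 * 1024,
                                      cfg.m_metadata_download_chunk_wait_us,
                                      cfg.m_metadata_download_watch_freq_sec);
}

If this hand-off is missing (or happens before the config is read), the handler would keep reporting 100 MB exactly as seen in the log above, regardless of what the ConfigMap says.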