deepflowio / deepflow

eBPF Observability - Distributed Tracing and Profiling
https://deepflow.io
Apache License 2.0

[BUG] agent fails to run on kernel version 5.10.0-60.18.0.50.oe2203.x86_64 #4605

Closed ZSJGG closed 11 months ago

ZSJGG commented 11 months ago

Search before asking

DeepFlow Component

Agent

What you expected to happen

The agent does not seem to work on this kernel.

How to reproduce

No response

DeepFlow version

main branch, commit id 2f9b9e47

DeepFlow agent list

No response

Kubernetes CNI

No response

Operation-System/Kernel version

Linux master-46 5.10.0-60.18.0.50.oe2203.x86_64 #1 SMP Wed Mar 30 03:12:24 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

Anything else

The full agent log is as follows:

[2023-10-30 14:31:56.548086 +08:00] INFO [src/trident.rs:328] static_config Config { controller_ips: [ "11.96.190.21", ], controller_port: 30035, controller_tls_port: 30135, controller_cert_file_prefix: "", log_file: "/var/log/deepflow-agent/deepflow-agent.log", kubernetes_cluster_id: "", kubernetes_cluster_name: None, vtap_group_id_request: "g-D3KjLMvyGY", controller_domain_name: [ "deepflow-service", ], agent_mode: Managed, override_os_hostname: None, async_worker_thread_number: 16, }
[2023-10-30 14:31:56.548767 +08:00] INFO [src/trident.rs:365] ==================== Launching DeepFlow-Agent ====================
[2023-10-30 14:31:56.548909 +08:00] INFO [src/trident.rs:366] Environment variables: "K8S_NODE_IP_FOR_DEEPFLOW=172.16.232.142 CTRL_NETWORK_INTERFACE= K8S_POD_IP_FOR_DEEPFLOW=172.16.232.142 IN_CONTAINER=yes K8S_MEM_LIMIT_FOR_DEEPFLOW= ONLY_WATCH_K8S_RESOURCE="
[2023-10-30 14:31:56.551797 +08:00] INFO [src/trident.rs:370] use K8S_NODE_IP_FOR_DEEPFLOW env ip as destination_ip(172.16.232.142)
[2023-10-30 14:31:56.551859 +08:00] INFO [src/trident.rs:375] agent running in Managed mode, ctrl_ip 172.16.232.142 ctrl_mac fa:16:3e:da:bb:93
[2023-10-30 14:31:56.556750 +08:00] INFO [src/config/config.rs:191] set kubernetes_cluster_id to d-4rbvcALEBK
[2023-10-30 14:31:56.556826 +08:00] WARN [src/trident.rs:408] When running in a K8s pod, the cpu and memory limits notified by deepflow-server will be ignored, please make sure to use K8s for resource limits.
[2023-10-30 14:31:56.559061 +08:00] INFO [src/trident.rs:446] don't initialize cgroups controller, because agent is running in container
[2023-10-30 14:31:56.559155 +08:00] INFO [src/rpc/synchronizer.rs:1226] hostname changed from "" to "node-142"
[2023-10-30 14:31:56.559240 +08:00] INFO [src/utils/guard.rs:394] guard started
[2023-10-30 14:31:56.561282 +08:00] INFO [src/monitor.rs:413] monitor started
[2023-10-30 14:31:56.561381 +08:00] INFO [src/platform/platform_synchronizer/linux.rs:149] platform monitoring no extra netns
[2023-10-30 14:31:56.561468 +08:00] INFO [src/platform/libvirt_xml_extractor.rs:104] libvirt_xml_extractor started
[2023-10-30 14:31:56.561539 +08:00] INFO [src/platform/platform_synchronizer/linux.rs:252] PlatformSynchronizer started
[2023-10-30 14:31:56.563990 +08:00] INFO [src/platform/platform_synchronizer/linux.rs:522] Platform information changed to version 1698647517
[2023-10-30 14:31:56.564056 +08:00] INFO [src/platform/platform_synchronizer/linux.rs:771] local version changed to 1698647517
[2023-10-30 14:31:56.564248 +08:00] INFO [src/rpc/synchronizer.rs:190] Update PlatformData version 0 to 1702650412.
[2023-10-30 14:31:56.564448 +08:00] INFO [src/rpc/synchronizer.rs:202] Update IpGroups version 0 to 1698646115.
[2023-10-30 14:31:56.564496 +08:00] INFO [src/rpc/synchronizer.rs:212] Update FlowAcls version 0 to 2698646315.
[2023-10-30 14:31:56.564530 +08:00] INFO [src/rpc/synchronizer.rs:736] Grpc version ip-groups: 1698646115, interfaces, peer-connections and cidrs: 1702650412, flow-acls: 2698646315 [2023-10-30 14:31:56.564561 +08:00] INFO [src/rpc/synchronizer.rs:752] Grpc finish update cost 27.983µs on 0 listener, 1 ip-groups, 51 interfaces, 0 peer-connections, 4 cidrs, 0 flow-acls [2023-10-30 14:31:56.564623 +08:00] INFO [src/rpc/synchronizer.rs:1318] ProxyController update to Some("172.16.232.142"):30035 [2023-10-30 14:31:56.565097 +08:00] INFO [src/config/handler.rs:1542] dispatcher config change from DispatcherConfig { global_pps_threshold: 200000, capture_packet_size: 65535, l7_log_packet_size: 1024, tunnel_type_bitmap: TunnelTypeBitmap( 0, ), trident_type: TtUnknown, vtap_id: 0, capture_socket_type: Auto, extra_netns_regex: "", tap_interface_regex: "^(tap.|cali.|veth.|eth.|en[osipx].|lxc.|lo|[0-9a-f]+_h)$", if_mac_source: IfMac, analyzer_ip: "0.0.0.0", analyzer_port: 30033, proxy_controller_ip: "11.96.190.21", proxy_controller_port: 30035, capture_bpf: "", max_memory: 805306368, af_packet_blocks: 48, af_packet_version: TpacketVersionHighestavailablet, tap_mode: Local, region_id: 0, pod_cluster_id: 0, enabled: true, npb_dedup_enabled: true, } to DispatcherConfig { global_pps_threshold: 200000, capture_packet_size: 65535, l7_log_packet_size: 1024, tunnel_type_bitmap: TunnelTypeBitmap( 6, ), trident_type: TtVmPod, vtap_id: 7, capture_socket_type: Auto, extra_netns_regex: "", tap_interface_regex: "^(tap.|cali.|veth.|eth.|en[osipx].|lxc.|lo|[0-9a-f]+_h)$", if_mac_source: IfMac, analyzer_ip: "172.16.232.142", analyzer_port: 30033, proxy_controller_ip: "172.16.232.142", proxy_controller_port: 30035, capture_bpf: "", max_memory: 805306368, af_packet_blocks: 48, af_packet_version: TpacketVersionHighestavailablet, tap_mode: Local, region_id: 1, pod_cluster_id: 1, enabled: true, npb_dedup_enabled: true, } [2023-10-30 14:31:56.565222 +08:00] INFO [src/config/handler.rs:1612] Rsyslog client connect to 172.16.232.142 30033 [2023-10-30 14:31:56.565265 +08:00] INFO [src/config/handler.rs:1625] stats config change from StatsConfig { interval: 60s, host: "", analyzer_ip: "0.0.0.0", analyzer_port: 30033, } to StatsConfig { interval: 10s, host: "node-142-V3", analyzer_ip: "172.16.232.142", analyzer_port: 30033, } [2023-10-30 14:31:56.565300 +08:00] INFO [src/config/handler.rs:1640] debug config change from DebugConfig { vtap_id: 0, enabled: true, controller_ips: [ 11.96.190.21, ], controller_port: 30035, listen_port: 0, agent_mode: Managed, } to DebugConfig { vtap_id: 7, enabled: true, controller_ips: [ 11.96.190.21, ], controller_port: 30035, listen_port: 0, agent_mode: Managed, } [2023-10-30 14:31:56.565342 +08:00] INFO [src/config/handler.rs:1748] flow_generator config change from FlowConfig { vtap_id: 0, trident_type: TtUnknown, cloud_gateway_traffic: false, collector_enabled: false, l7_log_tap_types: [], capacity: 1048576, hash_slots: 131072, packet_delay: 1s, flush_interval: 1s, flow_timeout: FlowTimeout { opening: 5s, established: 300s, closing: 5s, established_rst: 35s, exception: 5s, closed_fin: 2s, single_direction: 5s, opening_rst: 1s, min: 1s, max: 300s, }, ignore_tor_mac: false, ignore_l2_end: false, l7_metrics_enabled: true, app_proto_log_enabled: false, l4_performance_enabled: true, l7_log_packet_size: 1024, l7_protocol_inference_max_fail_count: 1000, l7_protocol_inference_ttl: 60, packet_sequence_flag: 0, packet_sequence_block_size: 256, l7_protocol_enabled_bitmap: [ Http1, Http2, Custom, DNS, SofaRPC, 
MySQL, Kafka, Redis, PostgreSQL, Dubbo, MQTT, ], } to FlowConfig { vtap_id: 7, trident_type: TtVmPod, cloud_gateway_traffic: false, collector_enabled: true, l7_log_tap_types: [ ( 0, true, ), ], capacity: 1048576, hash_slots: 131072, packet_delay: 1s, flush_interval: 1s, flow_timeout: FlowTimeout { opening: 5s, established: 300s, closing: 5s, established_rst: 35s, exception: 5s, closed_fin: 2s, single_direction: 5s, opening_rst: 1s, min: 1s, max: 300s, }, ignore_tor_mac: false, ignore_l2_end: false, l7_metrics_enabled: true, app_proto_log_enabled: true, l4_performance_enabled: true, l7_log_packet_size: 1024, l7_protocol_inference_max_fail_count: 1000, l7_protocol_inference_ttl: 60, packet_sequence_flag: 0, packet_sequence_block_size: 256, l7_protocol_enabled_bitmap: [ Http1, Http2, Custom, DNS, SofaRPC, MySQL, Kafka, Redis, PostgreSQL, Dubbo, MQTT, ], } [2023-10-30 14:31:56.565537 +08:00] INFO [src/config/handler.rs:1759] collector config l4_log_store_tap_types change from [] to [(0, true)] [2023-10-30 14:31:56.565562 +08:00] INFO [src/config/handler.rs:1820] collector config change from CollectorConfig { enabled: false, inactive_server_port_enabled: true, inactive_ip_enabled: true, vtap_flow_1s_enabled: true, l4_log_store_tap_types: [], l4_log_ignore_tap_sides: [], l4_log_collect_nps_threshold: 10000, l7_metrics_enabled: true, trident_type: TtUnknown, vtap_id: 0, cloud_gateway_traffic: false, } to CollectorConfig { enabled: true, inactive_server_port_enabled: true, inactive_ip_enabled: true, vtap_flow_1s_enabled: true, l4_log_store_tap_types: [ ( 0, true, ), ], l4_log_ignore_tap_sides: [], l4_log_collect_nps_threshold: 10000, l7_metrics_enabled: true, trident_type: TtVmPod, vtap_id: 7, cloud_gateway_traffic: false, } [2023-10-30 14:31:56.565637 +08:00] INFO [src/config/handler.rs:1832] Platform enabled set to true [2023-10-30 14:31:56.565667 +08:00] INFO [src/config/handler.rs:1909] platform config change from PlatformConfig { sync_interval: 60s, kubernetes_cluster_id: "d-4rbvcALEBK", prometheus_http_api_address: "", libvirt_xml_path: "/etc/libvirt/qemu", kubernetes_poller_type: Adaptive, vtap_id: 0, enabled: false, trident_type: TtUnknown, epc_id: 0, kubernetes_api_enabled: false, kubernetes_api_list_limit: 1000, kubernetes_api_list_interval: 600s, kubernetes_api_memory_trim_percent: Some( 100, ), kubernetes_resources: [], max_memory: 805306368, namespace: None, thread_threshold: 500, tap_mode: Local, os_proc_scan_conf: OsProcScanConfig { os_proc_root: "/proc", os_proc_socket_sync_interval: 10, os_proc_socket_min_lifetime: 3, os_proc_regex: [ ProcessName( ., Accept, "", ), ProcessName( ., Accept, "", ), ], os_app_tag_exec_user: "deepflow", os_app_tag_exec: [], os_proc_sync_enabled: false, os_proc_sync_tagged_only: false, }, } to PlatformConfig { sync_interval: 60s, kubernetes_cluster_id: "d-4rbvcALEBK", prometheus_http_api_address: "", libvirt_xml_path: "/etc/libvirt/qemu/", kubernetes_poller_type: Adaptive, vtap_id: 7, enabled: true, trident_type: TtVmPod, epc_id: 2, kubernetes_api_enabled: false, kubernetes_api_list_limit: 1000, kubernetes_api_list_interval: 600s, kubernetes_api_memory_trim_percent: Some( 100, ), kubernetes_resources: [], max_memory: 805306368, namespace: None, thread_threshold: 500, tap_mode: Local, os_proc_scan_conf: OsProcScanConfig { os_proc_root: "/proc", os_proc_socket_sync_interval: 10, os_proc_socket_min_lifetime: 3, os_proc_regex: [ ProcessName( ., Accept, "", ), ProcessName( ., Accept, "", ), ], os_app_tag_exec_user: "deepflow", os_app_tag_exec: [], 
os_proc_sync_enabled: false, os_proc_sync_tagged_only: false, }, } [2023-10-30 14:31:56.565773 +08:00] INFO [src/config/handler.rs:1991] sender config change from SenderConfig { mtu: 1500, dest_ip: "0.0.0.0", vtap_id: 0, dest_port: 30033, npb_port: 4789, vxlan_flags: 255, npb_enable_qos_bypass: false, npb_vlan: 0, npb_vlan_mode: None, npb_dedup_enabled: true, npb_bps_threshold: 1000000000, npb_socket_type: RawUdp, collector_socket_type: Tcp, standalone_data_file_size: 200, standalone_data_file_dir: "/var/log/deepflow-agent", server_tx_bandwidth_threshold: 0, bandwidth_probe_interval: 10s, enabled: false, } to SenderConfig { mtu: 1500, dest_ip: "172.16.232.142", vtap_id: 7, dest_port: 30033, npb_port: 4789, vxlan_flags: 255, npb_enable_qos_bypass: false, npb_vlan: 0, npb_vlan_mode: None, npb_dedup_enabled: true, npb_bps_threshold: 1000000000, npb_socket_type: RawUdp, collector_socket_type: Tcp, standalone_data_file_size: 200, standalone_data_file_dir: "/var/log/deepflow-agent", server_tx_bandwidth_threshold: 0, bandwidth_probe_interval: 10s, enabled: true, } [2023-10-30 14:31:56.565826 +08:00] INFO [src/config/handler.rs:2004] handler config change from HandlerConfig { npb_dedup_enabled: true, trident_type: TtUnknown, } to HandlerConfig { npb_dedup_enabled: true, trident_type: TtVmPod, } [2023-10-30 14:31:56.565867 +08:00] INFO [src/config/handler.rs:2066] synchronizer config change from SynchronizerConfig { sync_interval: 60s, ntp_enabled: true, max_escape: 3600s, output_vlan: 0, } to SynchronizerConfig { sync_interval: 60s, ntp_enabled: false, max_escape: 3600s, output_vlan: 0, } [2023-10-30 14:31:56.565897 +08:00] INFO [src/config/handler.rs:2077] ebpf config change from EbpfConfig { collector_enabled: false, l7_metrics_enabled: true, vtap_id: 0, epc_id: 0, l7_log_packet_size: 1024, l7_log_session_timeout: 120s, l7_protocol_inference_max_fail_count: 1000, l7_protocol_inference_ttl: 60, l7_log_tap_types: [], ctrl_mac: 00:00:00:00:00:00, l7_protocol_enabled_bitmap: [ Http1, Http2, Custom, DNS, SofaRPC, MySQL, Kafka, Redis, PostgreSQL, Dubbo, MQTT, ], l7_protocol_ports: { "DNS": "53", }, ebpf: EbpfYamlConfig { disabled: false, log_file: "", kprobe_whitelist: EbpfKprobePortlist { port_list: "", }, kprobe_blacklist: EbpfKprobePortlist { port_list: "", }, uprobe_proc_regexp: UprobeProcRegExp { golang_symbol: "", golang: "", openssl: "", }, thread_num: 1, perf_pages_count: 128, ring_size: 65536, max_socket_entries: 524288, max_trace_entries: 524288, socket_map_max_reclaim: 520000, go_tracing_timeout: 120, io_event_collect_mode: 1, io_event_minimal_duration: 1ms, }, } to EbpfConfig { collector_enabled: true, l7_metrics_enabled: true, vtap_id: 7, epc_id: 2, l7_log_packet_size: 1024, l7_log_session_timeout: 120s, l7_protocol_inference_max_fail_count: 1000, l7_protocol_inference_ttl: 60, l7_log_tap_types: [ ( 0, true, ), ], ctrl_mac: 00:00:00:00:00:00, l7_protocol_enabled_bitmap: [ Http1, Http2, Custom, DNS, SofaRPC, MySQL, Kafka, Redis, PostgreSQL, Dubbo, MQTT, ], l7_protocol_ports: { "DNS": "53", }, ebpf: EbpfYamlConfig { disabled: false, log_file: "", kprobe_whitelist: EbpfKprobePortlist { port_list: "", }, kprobe_blacklist: EbpfKprobePortlist { port_list: "", }, uprobe_proc_regexp: UprobeProcRegExp { golang_symbol: "", golang: "", openssl: "", }, thread_num: 1, perf_pages_count: 128, ring_size: 65536, max_socket_entries: 524288, max_trace_entries: 524288, socket_map_max_reclaim: 520000, go_tracing_timeout: 120, io_event_collect_mode: 1, io_event_minimal_duration: 1ms, }, } [2023-10-30 
14:31:56.566061 +08:00] INFO [src/config/handler.rs:2092] trident_type change from TtUnknown to TtVmPod [2023-10-30 14:31:56.566085 +08:00] INFO [src/config/handler.rs:2128] integration collector config change from MetricServerConfig { enabled: false, port: 38086, compressed: false, } to MetricServerConfig { enabled: true, port: 38086, compressed: false, } [2023-10-30 14:31:56.566282 +08:00] INFO [src/trident.rs:1337] platform monitoring no extra netns [2023-10-30 14:31:56.566423 +08:00] INFO [src/sender/uniform_sender.rs:238] stats uniform sender id: 0 started [2023-10-30 14:31:56.566455 +08:00] INFO [src/trident.rs:1353] Start check process... [2023-10-30 14:31:56.572076 +08:00] INFO [src/trident.rs:1357] Start check core file... [2023-10-30 14:31:56.572187 +08:00] WARN [src/utils/environment.rs:284] The core file is configured with pipeline operation, failed to check. [2023-10-30 14:31:56.572222 +08:00] INFO [src/trident.rs:1360] Start check controller ip... [2023-10-30 14:31:56.572241 +08:00] INFO [src/trident.rs:1362] Start check free space... [2023-10-30 14:31:56.572720 +08:00] INFO [src/trident.rs:1386] Agent run with feature-flags: NONE. [2023-10-30 14:31:56.591027 +08:00] INFO [src/rpc/synchronizer.rs:508] Reset version of acls, groups and platform_data. [2023-10-30 14:31:56.591115 +08:00] INFO [src/platform/kubernetes/mod.rs:79] kubernetes poller privileges: set_ns=true read_link_ns=true [2023-10-30 14:31:56.591142 +08:00] INFO [src/platform/kubernetes/mod.rs:89] platform monitoring no extra netns [2023-10-30 14:31:56.593992 +08:00] INFO [src/trident.rs:1513] static analyzer ip: '' actual analyzer ip '172.16.232.142' [2023-10-30 14:31:56.594595 +08:00] INFO [src/dispatcher/mod.rs:1115] Afpacket init with Options { frame_size: 65536, block_size: 1048576, num_blocks: 48, add_vlan_header: false, block_timeout: 64000000, poll_timeout: 100000000, version: TpacketVersionHighestavailablet, socket_type: SocketTypeRaw, iface: "" } [2023-10-30 14:31:56.616973 +08:00] INFO [src/dispatcher/base_dispatcher.rs:633] Decap tunnel type change to VXLAN IPIP [2023-10-30 14:31:56.617245 +08:00] INFO [src/dispatcher/base_dispatcher.rs:701] Npb dedup change to true [2023-10-30 14:31:56.617458 +08:00] INFO [src/dispatcher/base_dispatcher.rs:766] Dispatcher(0) Adding VMs: [00:00:00:00:00:00, fa:16:3e:da:bb:93, ee:ee:ee:ee:ee:ee, ee:ee:ee:ee:ee:ee, ee:ee:ee:ee:ee:ee, ee:ee:ee:ee:ee:ee, ee:ee:ee:ee:ee:ee, ee:ee:ee:ee:ee:ee] [2023-10-30 14:31:56.617583 +08:00] INFO [src/rpc/synchronizer.rs:508] Reset version of acls, groups and platform_data. [2023-10-30 14:31:56.617876 +08:00] INFO [src/ebpf_dispatcher/ebpf_dispatcher.rs:532] ebpf collector init... 
[2023-10-30 14:31:56.617924 +08:00] INFO [src/ebpf_dispatcher/ebpf_dispatcher.rs:338] ebpf golang uprobe proc regexp is empty, skip set [2023-10-30 14:31:56.617948 +08:00] INFO [src/ebpf_dispatcher/ebpf_dispatcher.rs:354] ebpf openssl uprobe proc regexp is empty, skip set [2023-10-30 14:31:56.617968 +08:00] INFO [src/ebpf_dispatcher/ebpf_dispatcher.rs:377] ebpf golang symbol proc regexp is empty, skip set [2023-10-30 14:31:56.617999 +08:00] INFO [src/ebpf_dispatcher/ebpf_dispatcher.rs:382] l7 protocol Http1 parse enabled [2023-10-30 14:31:56.618029 +08:00] INFO [src/ebpf_dispatcher/ebpf_dispatcher.rs:382] l7 protocol Http2 parse enabled [2023-10-30 14:31:56.618051 +08:00] INFO [src/ebpf_dispatcher/ebpf_dispatcher.rs:382] l7 protocol Custom parse enabled [2023-10-30 14:31:56.618073 +08:00] INFO [src/ebpf_dispatcher/ebpf_dispatcher.rs:382] l7 protocol DNS parse enabled [2023-10-30 14:31:56.618095 +08:00] INFO [src/ebpf_dispatcher/ebpf_dispatcher.rs:382] l7 protocol SofaRPC parse enabled [2023-10-30 14:31:56.618117 +08:00] INFO [src/ebpf_dispatcher/ebpf_dispatcher.rs:382] l7 protocol MySQL parse enabled [2023-10-30 14:31:56.618139 +08:00] INFO [src/ebpf_dispatcher/ebpf_dispatcher.rs:382] l7 protocol Kafka parse enabled [2023-10-30 14:31:56.618160 +08:00] INFO [src/ebpf_dispatcher/ebpf_dispatcher.rs:382] l7 protocol Redis parse enabled [2023-10-30 14:31:56.618181 +08:00] INFO [src/ebpf_dispatcher/ebpf_dispatcher.rs:382] l7 protocol PostgreSQL parse enabled [2023-10-30 14:31:56.618204 +08:00] INFO [src/ebpf_dispatcher/ebpf_dispatcher.rs:382] l7 protocol Dubbo parse enabled [2023-10-30 14:31:56.618229 +08:00] INFO [src/ebpf_dispatcher/ebpf_dispatcher.rs:382] l7 protocol MQTT parse enabled 2023-10-30 14:31:56 [eBPF] INFO Currently "/proc/sys/net/core/bpf_jit_enable" value is 1, not need set. 2023-10-30 14:31:56 [eBPF] INFO log2_default_hugepage_sz 21 2023-10-30 14:31:56 [eBPF] INFO RLIMIT_NOFILE cur:1048576, rlim_max:1048576 2023-10-30 14:31:56 [eBPF] INFO sys_boot_time_ns : 1698389430479233328 2023-10-30 14:31:56 [eBPF] INFO [deepflow-ebpfctl] sockopt register succeed, type get: 404 - 404 set: 400 - 403 2023-10-30 14:31:56 [eBPF] INFO register_period_event_op 'kick_kern' succeed. 2023-10-30 14:31:56 [eBPF] INFO register_period_event_op 'boot time update' succeed. 2023-10-30 14:31:56 [eBPF] INFO Tracer 'socket-trace', Not Found. 2023-10-30 14:31:56 [eBPF] INFO Tracer 'socket-trace', Not Found. 2023-10-30 14:31:56 [eBPF] INFO Tracer 'socket-trace', Not Found. 2023-10-30 14:31:56 [eBPF] INFO check_kernel_version Linux 5.10.0 2023-10-30 14:31:56 [eBPF] INFO Tracer 'socket-trace', Not Found. 2023-10-30 14:31:56 [eBPF] INFO license: GPL 2023-10-30 14:31:56 [eBPF] INFO Update map ("socket_info_map"), set max_entries 524288 2023-10-30 14:31:56 [eBPF] INFO Update map ("__trace_map"), set max_entries 524288 2023-10-30 14:31:56 [eBPF] INFO BTF vmlinux file: /sys/kernel/btf/vmlinux 2023-10-30 14:31:58 [eBPF] INFO bpf load "socket-trace-bpf-linux-5.2_plus" succeed. 
2023-10-30 14:31:58 [eBPF] WARNING: func kernel_struct_field_offset() [user/btf_vmlinux.c:142] BTF member sk_flags_offset of struct sock can not be found 2023-10-30 14:31:58 [eBPF] INFO Offsets from BTF vmlinux: 2023-10-30 14:31:58 [eBPF] INFO copied_seq_offs: 0x654 2023-10-30 14:31:58 [eBPF] INFO write_seq_offs: 0x7d4 2023-10-30 14:31:58 [eBPF] INFO files_offs: 0xc60 2023-10-30 14:31:58 [eBPF] INFO sk_flags_offs: 0x238 2023-10-30 14:31:58 [eBPF] INFO struct_files_struct_fdt_offset: 0x20 2023-10-30 14:31:58 [eBPF] INFO struct_files_private_data_offset: 0xc8 2023-10-30 14:31:58 [eBPF] INFO struct_file_f_inode_offset: 0x20 2023-10-30 14:31:58 [eBPF] INFO struct_inode_i_mode_offset: 0x0 2023-10-30 14:31:58 [eBPF] INFO struct_file_dentry_offset: 0x18 2023-10-30 14:31:58 [eBPF] INFO struct_dentry_name_offset: 0x28 2023-10-30 14:31:58 [eBPF] INFO struct_sock_family_offset: 0x10 2023-10-30 14:31:58 [eBPF] INFO struct_sock_saddr_offset: 0x4 2023-10-30 14:31:58 [eBPF] INFO struct_sock_daddr_offset: 0x0 2023-10-30 14:31:58 [eBPF] INFO struct_sock_ip6saddr_offset: 0x48 2023-10-30 14:31:58 [eBPF] INFO struct_sock_ip6daddr_offset: 0x38 2023-10-30 14:31:58 [eBPF] INFO struct_sock_dport_offset: 0xc 2023-10-30 14:31:58 [eBPF] INFO struct_sock_sport_offset: 0xe 2023-10-30 14:31:58 [eBPF] INFO struct_sock_skc_state_offset: 0x12 2023-10-30 14:31:58 [eBPF] INFO struct_sock_common_ipv6only_offset: 0x60 2023-10-30 14:31:58 [eBPF] INFO [eBPF Kernel Adapt] Set offsets map from btf_vmlinux, success. 2023-10-30 14:31:58 [eBPF] INFO Received limit_size (0), the final value is set to '4096' 2023-10-30 14:31:58 [eBPF] INFO Insert into map('progs_jmp_tp_map'), key 0, program name bpf_prog_tp__data_submit 2023-10-30 14:31:58 [eBPF] INFO Insert into map('progs_jmp_tp_map'), key 1, program name bpf_prog_tpoutput_data 2023-10-30 14:31:58 [eBPF] INFO Insert into map('__progs_jmp_tp_map'), key 2, program name bpf_prog_tpio_event 2023-10-30 14:31:58 [eBPF] INFO Insert into map('progs_jmp_kp_map'), key 0, program name bpf_prog_kp__data_submit 2023-10-30 14:31:58 [eBPF] INFO Insert into map('progs_jmp_kp_map'), key 1, program name bpf_prog_kpoutput_data 2023-10-30 14:31:58 [eBPF] INFO Insert adapt kern uid : 3630272 , 3630311 2023-10-30 14:31:58 [eBPF] INFO tracer(socket-trace) attach ... 2023-10-30 14:31:58 [eBPF] INFO attach enter kprobe: 'kprobe/__sys_sendmsg', success! 2023-10-30 14:31:58 [eBPF] INFO attach enter kprobe: 'kprobe/sys_sendmmsg', success! 2023-10-30 14:31:58 [eBPF] INFO attach enter kprobe: 'kprobe/__sys_recvmsg', success! 2023-10-30 14:31:58 [eBPF] INFO attach enter kprobe: 'kprobe/__sys_recvmmsg', success! 2023-10-30 14:31:58 [eBPF] INFO attach enter kprobe: 'kprobe/do_writev', success! 2023-10-30 14:31:58 [eBPF] INFO attach enter kprobe: 'kprobe/do_readv', success! 2023-10-30 14:31:58 [eBPF] INFO attach tracepoint: 'tracepoint/syscalls/sys_enter_write', succeed! 2023-10-30 14:31:58 [eBPF] INFO attach tracepoint: 'tracepoint/syscalls/sys_enter_read', succeed! 2023-10-30 14:31:58 [eBPF] INFO attach tracepoint: 'tracepoint/syscalls/sys_enter_sendto', succeed! 2023-10-30 14:31:58 [eBPF] INFO attach tracepoint: 'tracepoint/syscalls/sys_enter_recvfrom', succeed! [2023-10-30 14:31:58.536232 +08:00] INFO [src/ebpf_dispatcher/ebpf_dispatcher.rs:545] ebpf collector initialized. [2023-10-30 14:31:58.536914 +08:00] INFO [src/rpc/synchronizer.rs:508] Reset version of acls, groups and platform_data. 
[2023-10-30 14:31:58.536998 +08:00] INFO [src/utils/logger.rs:208] Logger remote update from [] to [172.16.232.142:30033] by ["172.16.232.142"] 2023-10-30 14:31:58 [eBPF] INFO attach tracepoint: 'tracepoint/syscalls/sys_exit_socket', succeed! 2023-10-30 14:31:58 [eBPF] INFO attach tracepoint: 'tracepoint/syscalls/sys_exit_read', succeed! 2023-10-30 14:31:58 [eBPF] INFO attach tracepoint: 'tracepoint/syscalls/sys_exit_write', succeed! 2023-10-30 14:31:58 [eBPF] INFO attach tracepoint: 'tracepoint/syscalls/sys_exit_sendto', succeed! 2023-10-30 14:31:58 [eBPF] INFO attach tracepoint: 'tracepoint/syscalls/sys_exit_recvfrom', succeed! 2023-10-30 14:31:58 [eBPF] INFO attach tracepoint: 'tracepoint/syscalls/sys_exit_sendmsg', succeed! 2023-10-30 14:31:58 [eBPF] INFO attach tracepoint: 'tracepoint/syscalls/sys_exit_sendmmsg', succeed! 2023-10-30 14:31:58 [eBPF] INFO attach tracepoint: 'tracepoint/syscalls/sys_exit_recvmsg', succeed! 2023-10-30 14:31:58 [eBPF] INFO attach tracepoint: 'tracepoint/syscalls/sys_exit_recvmmsg', succeed! 2023-10-30 14:31:58 [eBPF] INFO attach tracepoint: 'tracepoint/syscalls/sys_exit_writev', succeed! 2023-10-30 14:31:58 [eBPF] INFO attach tracepoint: 'tracepoint/syscalls/sys_exit_readv', succeed! 2023-10-30 14:31:58 [eBPF] INFO attach tracepoint: 'tracepoint/sched/sched_process_fork', succeed! 2023-10-30 14:31:58 [eBPF] INFO attach tracepoint: 'tracepoint/syscalls/sys_enter_getppid', succeed! 2023-10-30 14:31:58 [eBPF] INFO attach tracepoint: 'tracepoint/syscalls/sys_enter_close', succeed! 2023-10-30 14:31:58 [eBPF] INFO attach tracepoint: 'tracepoint/sched/sched_process_exit', succeed! 2023-10-30 14:31:58 [eBPF] INFO Successfully completed attach. 2023-10-30 14:31:58 [eBPF] INFO thread socket-reader, detached successful. 2023-10-30 14:31:58 [eBPF] INFO register_extra_waiting_op 'offset-infer-server' succeed. 2023-10-30 14:31:58 [eBPF] INFO register_extra_waiting_op 'offset-infer-client' succeed. 2023-10-30 14:31:58 [eBPF] INFO register_period_event_op 'check-map-exceeded' succeed. [2023-10-30 14:31:58.537548 +08:00] INFO [src/trident.rs:2381] Staring agent components. 2023-10-30 14:31:58 [eBPF] INFO register_period_event_op 'check-kern-adapt' succeed. 2023-10-30 14:31:58 [eBPF] INFO [deepflow-ebpfctl] sockopt register succeed, type get: 504 - 504 set: 500 - 503 2023-10-30 14:31:58 [eBPF] INFO [deepflow-ebpfctl] sockopt register succeed, type get: 603 - 603 set: 600 - 602 2023-10-30 14:31:58 [eBPF] INFO All tracers finish!!! 2023-10-30 14:31:58 [eBPF] INFO Received limit_size (8192), the final value is set to '8192' [2023-10-30 14:31:58.537701 +08:00] INFO [src/platform/kubernetes/active_poller.rs:256] starts kubernetes active poller [2023-10-30 14:31:58.537900 +08:00] INFO [src/platform/prometheus/targets.rs:131] prometheus watcher is starting [2023-10-30 14:31:58.538016 +08:00] INFO [src/platform/prometheus/targets.rs:142] prometheus watcher failed to start because prometheus_http_api_address is empty [2023-10-30 14:31:58.538195 +08:00] INFO [src/platform/platform_synchronizer/linux.rs:926] socket info sync start [2023-10-30 14:31:58.538408 +08:00] INFO [src/debug/debugger.rs:386] debugger started

[2023-10-30 14:31:58.538614 +08:00] INFO [src/sender/uniform_sender.rs:238] 3-doc-to-collector-sender uniform sender id: 2 started [2023-10-30 14:31:58.538812 +08:00] INFO [src/sender/uniform_sender.rs:238] 2-protolog-to-collector-sender uniform sender id: 3 started [2023-10-30 14:31:58.539000 +08:00] INFO [src/sender/uniform_sender.rs:238] 3-flowlog-to-collector-sender uniform sender id: 1 started [2023-10-30 14:31:58.539193 +08:00] INFO [src/sender/uniform_sender.rs:238] 2-packet-sequence-block-to-sender uniform sender id: 4 started [2023-10-30 14:31:58.539383 +08:00] INFO [src/flow_generator/packet_sequence/parser.rs:90] packet sequence parser (id=0) started [2023-10-30 14:31:58.539627 +08:00] INFO [src/flow_generator/protocol_logs/parser.rs:692] app protocol logs parser (id=0) started [2023-10-30 14:31:58.539687 +08:00] INFO [src/dispatcher/local_mode_dispatcher.rs:66] Start dispatcher (0) [2023-10-30 14:31:58.539789 +08:00] INFO [src/flow_generator/protocol_logs/parser.rs:692] app protocol logs parser (id=1) started [2023-10-30 14:31:58.539903 +08:00] INFO [src/collector/quadruple_generator.rs:805] new quadruple_generator id: 0, second_delay: 8, minute_delay: 68, l7_metrics_enabled: true, vtap_flow_1s_enabled: true collector_enabled: true [2023-10-30 14:31:58.542306 +08:00] INFO [src/collector/quadruple_generator.rs:725] quadruple generator id: 0 started [2023-10-30 14:31:58.543246 +08:00] INFO [src/collector/flow_aggr.rs:121] l4 flow aggr id: 0 started [2023-10-30 14:31:58.543468 +08:00] INFO [src/collector/collector.rs:1008] second_collector id=(0) started [2023-10-30 14:31:58.543660 +08:00] INFO [src/collector/collector.rs:1008] minute_collector id=(0) started [2023-10-30 14:31:58.543773 +08:00] INFO [src/collector/quadruple_generator.rs:805] new quadruple_generator id: 1, second_delay: 8, minute_delay: 68, l7_metrics_enabled: true, vtap_flow_1s_enabled: true collector_enabled: true [2023-10-30 14:31:58.545979 +08:00] INFO [src/collector/quadruple_generator.rs:725] quadruple generator id: 1 started [2023-10-30 14:31:58.546179 +08:00] INFO [src/collector/collector.rs:1008] second_collector id=(1) started [2023-10-30 14:31:58.546370 +08:00] INFO [src/collector/collector.rs:1008] minute_collector id=(1) started [2023-10-30 14:31:58.546483 +08:00] INFO [src/collector/quadruple_generator.rs:805] new quadruple_generator id: 2, second_delay: 8, minute_delay: 68, l7_metrics_enabled: true, vtap_flow_1s_enabled: true collector_enabled: true [2023-10-30 14:31:58.547391 +08:00] INFO [src/platform/kubernetes/active_poller.rs:93] kubernetes poller updated to version (1) [2023-10-30 14:31:58.548337 +08:00] INFO [src/collector/quadruple_generator.rs:725] quadruple generator id: 2 started [2023-10-30 14:31:58.548570 +08:00] INFO [src/collector/collector.rs:1008] second_collector id=(2) started [2023-10-30 14:31:58.548749 +08:00] INFO [src/collector/collector.rs:1008] minute_collector id=(2) started 2023-10-30 14:31:58 [eBPF] WARNING: func socket_tracer_start() [user/socket.c:1921] [eBPF Kernel Adapt] Adapting the linux kernel(5.10.0-60.18.0.50.oe2203.x86_64) is in progress, please try the start operation again later. 2023-10-30 14:31:58 [eBPF] INFO ctrl_main begin !!! [2023-10-30 14:31:59.448272 +08:00] ERROR [src/main.rs:93] "panicked at 'misaligned pointer dereference: address must be a multiple of 0x8 but is 0x7f2e98000fb1', src/ebpf_dispatcher/ebpf_dispatcher.rs:274:46" Aborted (core dumped)
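
For context on the final panic: below is a minimal, hedged sketch (not DeepFlow's actual ebpf_dispatcher code; the buffer layout and helper name are hypothetical) of how Rust's runtime alignment check produces a "misaligned pointer dereference" abort like the one above. Since Rust 1.70, debug builds check that a raw pointer being dereferenced is aligned for its type; the panic message shows an address (0x7f2e98000fb1) that is not a multiple of 8, which is typical when a byte-aligned buffer handed over from a C/eBPF layer is cast to a wider type and dereferenced directly.

// Hypothetical illustration of the error class, not DeepFlow's code.
fn read_u64(buf: &[u8]) -> u64 {
    assert!(buf.len() >= 8);
    let ptr = buf.as_ptr() as *const u64;
    // `unsafe { *ptr }` would trip the alignment check when `ptr` is only
    // byte-aligned ("address must be a multiple of 0x8 but is ...").
    // `read_unaligned` copies the bytes and is valid for any alignment.
    unsafe { std::ptr::read_unaligned(ptr) }
}

fn main() {
    let data: Vec<u8> = (0u8..16).collect();
    // &data[1..9] starts at an odd offset, so it is not 8-byte aligned.
    println!("{:#x}", read_u64(&data[1..9]));
}

Reading with `std::ptr::read_unaligned` (or copying into an aligned struct) is the usual way to consume packed buffers received over FFI; the sketch only illustrates the class of error, not the specific fix applied in the repository.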

Are you willing to submit a PR?

Code of Conduct

ZSJGG commented 11 months ago

The operating system information is: NAME="openEuler" VERSION="22.03 LTS" ID="openEuler" VERSION_ID="22.03" PRETTY_NAME="openEuler 22.03 LTS" ANSI_COLOR="0;31"

ZSJGG commented 11 months ago

It was solved by updating to main:a8a06a28.