deepflowio / deepflow

eBPF Observability - Distributed Tracing and Profiling
https://deepflow.io
Apache License 2.0

Agent can not work #1441

Closed fanspace closed 1 year ago

fanspace commented 1 year ago

My log:

```
[2022-11-10 22:02:40.288903 +08:00] INFO [src/trident.rs:274] static_config Config { controller_ips: [ "10.1.11.231", ], controller_port: 30035, controller_tls_port: 30135, controller_cert_file_prefix: "", log_file: "/var/log/deepflow-agent/deepflow-agent.log", kubernetes_cluster_id: "", vtap_group_id_request: "g-7A1zd7X8EP", controller_domain_name: [], agent_mode: Managed, }
[2022-11-10 22:02:40.289874 +08:00] INFO [src/trident.rs:308] ========== DeepFlow Agent start! ==========
[2022-11-10 22:02:40.290445 +08:00] INFO [src/trident.rs:317] agent running in Managed mode, ctrl_ip 10.1.11.180 ctrl_mac 0a:bb:58:c3:a6:6e
[2022-11-10 22:02:40.292114 +08:00] INFO [src/rpc/synchronizer.rs:758] rpc trigger not running, client not connected
[2022-11-10 22:02:40.292127 +08:00] INFO [src/utils/guard.rs:260] guard started
[2022-11-10 22:02:40.292151 +08:00] INFO [src/rpc/synchronizer.rs:1197] hostname changed from "" to "docker180"
[2022-11-10 22:02:40.294804 +08:00] INFO [src/monitor.rs:394] monitor started
[2022-11-10 22:02:40.297192 +08:00] INFO [src/rpc/synchronizer.rs:1305] ProxyController update to Some(10.1.11.231):30035
[2022-11-10 22:02:40.358794 +08:00] INFO [src/config/handler.rs:1269] dispatcher config change from DispatcherConfig { global_pps_threshold: 200000, capture_packet_size: 65535, l7_log_packet_size: 1024, tunnel_type_bitmap: TunnelTypeBitmap( 0, ), trident_type: TtUnknown, vtap_id: 0, capture_socket_type: Auto, extra_netns_regex: "", tap_interface_regex: "^(tap.|cali.|veth.|eth.|en[ospx].|lxc.|lo|[0-9a-f]+_h)$", packet_header_enabled: true, if_mac_source: IfMac, analyzer_ip: 0.0.0.0, analyzer_port: 30033, proxy_controller_ip: 10.1.11.231, proxy_controller_port: 30035, capture_bpf: "", max_memory: 805306368, af_packet_blocks: 48, af_packet_version: TpacketVersionHighestavailablet, tap_mode: Local, region_id: 0, pod_cluster_id: 0, enabled: true, } to DispatcherConfig { global_pps_threshold: 200000, capture_packet_size: 65535, l7_log_packet_size: 1024, tunnel_type_bitmap: TunnelTypeBitmap( 6, ), trident_type: TtUnknown, vtap_id: 0, capture_socket_type: Auto, extra_netns_regex: "", tap_interface_regex: "^(tap.|cali.|veth.|eth.|en[ospx].|lxc.|lo|[0-9a-f]+_h)$", packet_header_enabled: true, if_mac_source: IfMac, analyzer_ip: 0.0.0.0, analyzer_port: 30033, proxy_controller_ip: 10.1.11.231, proxy_controller_port: 30035, capture_bpf: "", max_memory: 805306368, af_packet_blocks: 48, af_packet_version: TpacketVersionHighestavailablet, tap_mode: Local, region_id: 0, pod_cluster_id: 0, enabled: true, }
[2022-11-10 22:02:40.358933 +08:00] INFO [src/config/handler.rs:1525] flow_generator config change from FlowConfig { vtap_id: 0, trident_type: TtUnknown, collector_enabled: false, l7_log_tap_types: [], packet_delay: 1s, flush_interval: 1s, flow_timeout: FlowTimeout { opening: 5s, established: 300s, closing: 5s, established_rst: 35s, exception: 5s, closed_fin: 2s, single_direction: 5s, min: 2s, max: 300s, }, ignore_tor_mac: false, ignore_l2_end: false, l7_metrics_enabled: true, app_proto_log_enabled: false, l4_performance_enabled: true, l7_log_packet_size: 1024, l7_protocol_inference_max_fail_count: 50, l7_protocol_inference_ttl: 60, packet_sequence_flag: 0, packet_sequence_block_size: 64, } to FlowConfig { vtap_id: 0, trident_type: TtUnknown, collector_enabled: true, l7_log_tap_types: [ ( 0, true, ), ], packet_delay: 1s, flush_interval: 1s, flow_timeout: FlowTimeout { opening: 5s, established: 300s, closing: 5s, established_rst: 35s, exception: 5s, closed_fin: 2s, single_direction: 5s, min: 2s, max: 300s, }, ignore_tor_mac: false, ignore_l2_end: false, l7_metrics_enabled: true, app_proto_log_enabled: true, l4_performance_enabled: true, l7_log_packet_size: 1024, l7_protocol_inference_max_fail_count: 50, l7_protocol_inference_ttl: 60, packet_sequence_flag: 0, packet_sequence_block_size: 64, }
[2022-11-10 22:02:40.359018 +08:00] INFO [src/config/handler.rs:1536] collector config l4_log_store_tap_types change from [] to [(0, true)], will restart dispatcher
[2022-11-10 22:02:40.359026 +08:00] INFO [src/config/handler.rs:1565] collector config change from CollectorConfig { enabled: false, inactive_server_port_enabled: true, inactive_ip_enabled: true, vtap_flow_1s_enabled: true, l4_log_store_tap_types: [], l4_log_collect_nps_threshold: 10000, l7_metrics_enabled: true, trident_type: TtUnknown, vtap_id: 0, cloud_gateway_traffic: false, } to CollectorConfig { enabled: true, inactive_server_port_enabled: true, inactive_ip_enabled: true, vtap_flow_1s_enabled: true, l4_log_store_tap_types: [ ( 0, true, ), ], l4_log_collect_nps_threshold: 10000, l7_metrics_enabled: true, trident_type: TtUnknown, vtap_id: 0, cloud_gateway_traffic: false, }
[2022-11-10 22:02:40.359041 +08:00] INFO [src/config/handler.rs:1574] Platform enabled set to true
[2022-11-10 22:02:40.359045 +08:00] INFO [src/config/handler.rs:1593] platform config change from PlatformConfig { sync_interval: 60s, kubernetes_cluster_id: "", libvirt_xml_path: "/etc/libvirt/qemu", kubernetes_poller_type: Adaptive, vtap_id: 0, enabled: false, ingress_flavour: Kubernetes, trident_type: TtUnknown, source_ip: 10.1.11.180, epc_id: 0, kubernetes_api_enabled: false, namespace: None, thread_threshold: 500, tap_mode: Local, } to PlatformConfig { sync_interval: 60s, kubernetes_cluster_id: "", libvirt_xml_path: "/etc/libvirt/qemu/", kubernetes_poller_type: Adaptive, vtap_id: 0, enabled: true, ingress_flavour: Kubernetes, trident_type: TtUnknown, source_ip: 10.1.11.180, epc_id: 0, kubernetes_api_enabled: false, namespace: None, thread_threshold: 500, tap_mode: Local, }
[2022-11-10 22:02:40.359064 +08:00] INFO [src/config/handler.rs:1671] sender config change from SenderConfig { mtu: 1500, dest_ip: 0.0.0.0, vtap_id: 0, dest_port: 30033, vxlan_port: 4789, vxlan_flags: 255, npb_enable_qos_bypass: false, npb_vlan: 0, npb_vlan_mode: None, npb_dedup_enabled: true, npb_bps_threshold: 1000000000, npb_socket_type: RawUdp, compressor_socket_type: RawUdp, collector_socket_type: Tcp, standalone_data_file_size: 200, standalone_data_file_dir: "/var/log/deepflow-agent", server_tx_bandwidth_threshold: 0, bandwidth_probe_interval: 10s, enabled: false, } to SenderConfig { mtu: 1500, dest_ip: 0.0.0.0, vtap_id: 0, dest_port: 30033, vxlan_port: 4789, vxlan_flags: 255, npb_enable_qos_bypass: false, npb_vlan: 0, npb_vlan_mode: None, npb_dedup_enabled: true, npb_bps_threshold: 1000000000, npb_socket_type: RawUdp, compressor_socket_type: Tcp, collector_socket_type: Tcp, standalone_data_file_size: 200, standalone_data_file_dir: "/var/log/deepflow-agent", server_tx_bandwidth_threshold: 0, bandwidth_probe_interval: 10s, enabled: true, }
[2022-11-10 22:02:40.359087 +08:00] INFO [src/config/handler.rs:1684] handler config change from HandlerConfig { compressor_socket_type: RawUdp, npb_dedup_enabled: true, trident_type: TtUnknown, } to HandlerConfig { compressor_socket_type: Tcp, npb_dedup_enabled: true, trident_type: TtUnknown, }
[2022-11-10 22:02:40.359095 +08:00] INFO [src/config/handler.rs:1751] ebpf config change from EbpfConfig { collector_enabled: false, l7_metrics_enabled: true, vtap_id: 0, epc_id: 0, l7_log_packet_size: 1024, l7_log_session_timeout: 120s, l7_protocol_inference_max_fail_count: 50, l7_protocol_inference_ttl: 60, log_path: "", l7_log_tap_types: [], ctrl_mac: 00:00:00:00:00:00, ebpf-disabled: false, } to EbpfConfig { collector_enabled: true, l7_metrics_enabled: true, vtap_id: 0, epc_id: 0, l7_log_packet_size: 1024, l7_log_session_timeout: 120s, l7_protocol_inference_max_fail_count: 50, l7_protocol_inference_ttl: 60, log_path: "", l7_log_tap_types: [ ( 0, true, ), ], ctrl_mac: 00:00:00:00:00:00, ebpf-disabled: false, }
[2022-11-10 22:02:40.359206 +08:00] INFO [src/sender/uniform_sender.rs:220] stats uniform sender id: 100 started
[2022-11-10 22:02:40.362394 +08:00] INFO [src/trident.rs:892] Agent run with feature-flags: NONE.
[2022-11-10 22:02:40.366499 +08:00] INFO [src/rpc/synchronizer.rs:471] Reset version of acls, groups and platform_data.
[2022-11-10 22:02:40.366556 +08:00] INFO [src/platform/platform_synchronizer/linux.rs:122] kubernetes poller privileges: set_ns=true read_link_ns=true
[2022-11-10 22:02:40.366564 +08:00] INFO [src/platform/platform_synchronizer/linux.rs:132] platform monitoring no extra netns
[2022-11-10 22:02:40.368257 +08:00] INFO [src/trident.rs:1005] static analyzer ip: actual analyzer ip 0.0.0.0
[2022-11-10 22:02:40.368584 +08:00] INFO [src/dispatcher/mod.rs:970] Afpacket init with Options { frame_size: 65536, block_size: 1048576, num_blocks: 48, add_vlan_header: false, block_timeout: 64000000, poll_timeout: 100000000, version: TpacketVersionHighestavailablet, socket_type: SocketTypeRaw, iface: "" }
[2022-11-10 22:02:40.396984 +08:00] INFO [src/dispatcher/base_dispatcher.rs:623] Decap tunnel type change to VXLAN IPIP
[2022-11-10 22:02:40.397450 +08:00] INFO [src/handler/npb.rs:229] Build with npb packet handler with id: 0 if_index: 1 mac: 00:00:00:00:00:00
[2022-11-10 22:02:40.397474 +08:00] INFO [src/handler/npb.rs:229] Build with npb packet handler with id: 0 if_index: 2 mac: 0a:bb:58:c3:a6:6e
[2022-11-10 22:02:40.397483 +08:00] INFO [src/handler/npb.rs:229] Build with npb packet handler with id: 0 if_index: 5 mac: b6:fd:c6:f3:ec:dc
[2022-11-10 22:02:40.397490 +08:00] INFO [src/handler/npb.rs:229] Build with npb packet handler with id: 0 if_index: 8 mac: e6:05:83:a5:15:fb
[2022-11-10 22:02:40.397497 +08:00] INFO [src/handler/npb.rs:229] Build with npb packet handler with id: 0 if_index: 10 mac: c2:b6:a7:0f:d8:7f
[2022-11-10 22:02:40.397504 +08:00] INFO [src/dispatcher/base_dispatcher.rs:733] Adding VMs: [00:00:00:00:00:00, 0a:bb:58:c3:a6:6e, b6:fd:c6:f3:ec:dc, e6:05:83:a5:15:fb, c2:b6:a7:0f:d8:7f]
[2022-11-10 22:02:40.397580 +08:00] INFO [src/ebpf_collector/ebpf_collector.rs:1080] ebpf collector init...
[2022-11-10 22:02:40.397601 +08:00] INFO [src/ebpf_collector/ebpf_collector.rs:957] ebpf set golang uprobe proc regexp: .
[2022-11-10 22:02:40.397633 +08:00] INFO [src/ebpf_collector/ebpf_collector.rs:973] ebpf set openssl uprobe proc regexp: .
[2022-11-10 22:02:40.397655 +08:00] INFO [src/ebpf_collector/ebpf_collector.rs:1001] ebpf golang symbol proc regexp is empty, skip set
[2022-11-10 22:02:40.397681 +08:00] INFO [src/ebpf_collector/ebpf_collector.rs:1006] l7 protocol Http1 parse enabled
[2022-11-10 22:02:40.397690 +08:00] INFO [src/ebpf_collector/ebpf_collector.rs:1006] l7 protocol Http2 parse enabled
[2022-11-10 22:02:40.397694 +08:00] INFO [src/ebpf_collector/ebpf_collector.rs:1006] l7 protocol DNS parse enabled
[2022-11-10 22:02:40.397698 +08:00] INFO [src/ebpf_collector/ebpf_collector.rs:1006] l7 protocol MySQL parse enabled
[2022-11-10 22:02:40.397702 +08:00] INFO [src/ebpf_collector/ebpf_collector.rs:1006] l7 protocol Kafka parse enabled
[2022-11-10 22:02:40.397706 +08:00] INFO [src/ebpf_collector/ebpf_collector.rs:1006] l7 protocol Redis parse enabled
[2022-11-10 22:02:40.397710 +08:00] INFO [src/ebpf_collector/ebpf_collector.rs:1006] l7 protocol PostgreSQL parse enabled
[2022-11-10 22:02:40.397714 +08:00] INFO [src/ebpf_collector/ebpf_collector.rs:1006] l7 protocol Dubbo parse enabled
[2022-11-10 22:02:40.397719 +08:00] INFO [src/ebpf_collector/ebpf_collector.rs:1006] l7 protocol MQTT parse enabled
[2022-11-10 22:02:43.359478 +08:00] ERROR [src/sender/uniform_sender.rs:354] stats sender tcp connection to 0.0.0.0:30033 failed
[2022-11-10 22:02:47.858832 +08:00] INFO [src/ebpf_collector/ebpf_collector.rs:1090] ebpf collector initialized.
[2022-11-10 22:02:47.859207 +08:00] INFO [src/trident.rs:773] Staring components.
[2022-11-10 22:02:47.859398 +08:00] INFO [src/platform/libvirt_xml_extractor.rs:99] libvirt_xml_extractor started
[2022-11-10 22:02:47.859521 +08:00] INFO [src/pcap/manager.rs:128] started WorkerManager
[2022-11-10 22:02:47.859593 +08:00] INFO [src/platform/platform_synchronizer/linux.rs:263] PlatformSynchronizer started
[2022-11-10 22:02:47.859625 +08:00] INFO [src/platform/kubernetes/api_watcher.rs:192] ApiWatcher failed to start because kubernetes-cluster-id is empty
[2022-11-10 22:02:47.859649 +08:00] INFO [src/platform/kubernetes/active_poller.rs:212] starts kubernetes active poller
[2022-11-10 22:02:47.859863 +08:00] INFO [src/debug/debugger.rs:371] debugger started
[2022-11-10 22:02:47.860147 +08:00] INFO [src/sender/uniform_sender.rs:220] 2-doc-to-collector-sender uniform sender id: 1 started
[2022-11-10 22:02:47.860315 +08:00] INFO [src/sender/uniform_sender.rs:220] 3-protolog-to-collector-sender uniform sender id: 2 started
[2022-11-10 22:02:47.860462 +08:00] INFO [src/sender/uniform_sender.rs:220] 3-flow-to-collector-sender uniform sender id: 0 started
[2022-11-10 22:02:47.860606 +08:00] INFO [src/sender/uniform_sender.rs:220] packet_sequence_block-to-sender uniform sender id: 6 started
[2022-11-10 22:02:47.860676 +08:00] INFO [src/flow_generator/packet_sequence/parser.rs:84] packet sequence parser (id=0) started
[2022-11-10 22:02:47.861025 +08:00] INFO [src/flow_generator/protocol_logs/parser.rs:542] app protocol logs parser (id=0) started
[2022-11-10 22:02:47.861071 +08:00] INFO [src/collector/quadruple_generator.rs:721] new quadruple_generator id: 0, second_delay: 8, minute_delay: 68, l7_metrics_enabled: true, vtap_flow_1s_enabled: true collector_enabled: true
[2022-11-10 22:02:47.861317 +08:00] INFO [src/dispatcher/local_mode_dispatcher.rs:64] Start dispatcher 0
[2022-11-10 22:02:47.862619 +08:00] ERROR [src/common/flow.rs:1327] invalid trident type, trident will stop
```


```
cat .....log | grep ERROR
[2022-11-10 22:05:41.291747 +08:00] ERROR [src/sender/uniform_sender.rs:354] stats sender tcp connection to 0.0.0.0:30033 failed
[2022-11-10 22:05:46.126348 +08:00] ERROR [src/common/flow.rs:1327] invalid trident type, trident will stop
```
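The two ERROR lines above come from a plain `grep` over the agent log. A self-contained illustration of the same filter (sample entries are copied from this report into a temp file of our choosing; on a real host the log path is the one shown in the startup config, `/var/log/deepflow-agent/deepflow-agent.log`):

```shell
# Write a few sample log entries (taken from this report) to a temp file,
# then keep only the ERROR-level lines, as done above.
cat > /tmp/deepflow-agent-sample.log <<'EOF'
[2022-11-10 22:02:40.292127 +08:00] INFO [src/utils/guard.rs:260] guard started
[2022-11-10 22:05:41.291747 +08:00] ERROR [src/sender/uniform_sender.rs:354] stats sender tcp connection to 0.0.0.0:30033 failed
[2022-11-10 22:05:46.126348 +08:00] ERROR [src/common/flow.rs:1327] invalid trident type, trident will stop
EOF
grep ERROR /tmp/deepflow-agent-sample.log
```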

My K8s server IP: 10.1.11.231
My agent client IP: 10.1.11.180
Agent server: Ubuntu 20.04 (deepflow-agent-1.0-7062.systemd.deb, deepflow-agent-1.0-7062.upstart.deb)


```
cat /etc/deepflow-agent.yaml

controller-ips:
```
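For reference, a minimal `/etc/deepflow-agent.yaml` matching the values printed in the startup log's `static_config` would look roughly like this. This is a sketch, not the poster's actual (truncated) file, and the kebab-case key names are an assumption based on the agent's config convention:

```yaml
# Sketch reconstructed from the static_config in the startup log above.
controller-ips:
  - 10.1.11.231                        # deepflow-server / controller address
vtap-group-id-request: g-7A1zd7X8EP    # agent group shown by `deepflow-ctl agent-group-config list`
```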


```
deepflow-ctl agent-group-config list
NAME              AGENT_GROUP_ID
legacy-host-192   g-7A1zd7X8EP
```


```
deepflow-ctl agent list deepflow-agent -o yaml
null
```

Nick-0314 commented 1 year ago

Can you provide the agent/server versions, and the output of the following two commands?

```
deepflow-ctl agent-group-config list -o yaml
kubectl logs -n deepflow deploy/deepflow-server | grep trident-type
```

fanspace commented 1 year ago

```
deepflow-ctl agent-group-config list -o yaml
vtap_group_id: g-7A1zd7X8EP
platform_enabled: 1
```


```
kubectl logs -n deepflow deploy/deepflow-server | grep trident-type
Error from server (NotFound): deployments.apps "deepflow-server" not found
```


```
kubectl get deploy -n deepflow
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
deepflow-app       1/1     1            1           16d
deepflow-grafana   1/1     1            1           16d
deepflow-mysql     1/1     1            1           16d
```


```
kubectl get pod -n deepflow
NAME                                READY   STATUS    RESTARTS   AGE
deepflow-agent-29fm7                1/1     Running   6          16d
deepflow-agent-kpp7w                1/1     Running   3          16d
deepflow-agent-xzkmv                1/1     Running   5          16d
deepflow-app-65889d8c87-5j4xw       1/1     Running   0          9d
deepflow-clickhouse-0               1/1     Running   0          4d3h
deepflow-grafana-5bdd587898-4xwgd   1/1     Running   0          9d
deepflow-mysql-ccc4465db-sxcgn      1/1     Running   0          9d
deepflow-server-0                   1/1     Running   0          4d3h
```


```
kubectl logs deepflow-server-0 -n deepflow | grep trident-type
(no result)
kubectl logs deepflow-app-65889d8c87-5j4xw -n deepflow | grep trident-type
(no result)
```

I installed DeepFlow via All-in-One, and the container version is v6.1.4.

Nick-0314 commented 1 year ago

There is a bug in version v6.1.4. Would it be easy for you to upgrade to the latest version, v6.1.6? @fanspace

fanspace commented 1 year ago

Got it, I will try the upgrade. Thanks!

fanspace commented 1 year ago

> There is a bug in version v6.1.4. Would it be easy for you to upgrade to the latest version, v6.1.6? @fanspace

I upgraded the server to v6.1.6, but it doesn't seem to help.

Server version: 6.1.6

On the server:

```
kubectl logs deepflow-server-7bf748c9b9-bkljh -n deepflow
2022-11-11 20:12:43.068 [WARN] [trisolaris/synchronize] vtap.go:281 vtap (ctrl_ip: 10.1.11.180, ctrl_mac: 0a:bb:58:c3:a6:6e, host_ips: [10.1.11.180 172.17.0.1 172.18.0.1], kubernetes_cluster_id: , group_id: g-bqErqujVSf) not found in cache. NAME:deepflow-agent-ce REVISION:v6.1.6 7062-1fbf03d5bd8e1e772dfa916ef91010a8566f5a41 BOOT_TIME:1668168583
2022-11-11 20:12:43.068 [INFO] [trisolaris/vtap] vtap.go:1103 start vtap register
2022/11/11 20:12:43 /home/runnerx/actions-runner/_work/deepflow/deepflow/server/controller/trisolaris/dbmgr/dbmgr.go:236 record not found [1.377ms] [rows:0] SELECT * FROM vtap WHERE ctrl_ip = '10.1.11.180' AND ctrl_mac = '0a:bb:58:c3:a6:6e' ORDER BY vtap.id LIMIT 1
2022-11-11 20:12:43.072 [INFO] [trisolaris/vtap] vtap_discovery.go:613 register vtap: {tapMode:0 vTapGroupID:g-bqErqujVSf defaultVTapGroup:820fed1a-61ac-11ed-95d6-a6522be08f67 vTapAutoRegister:true VTapLKData:{ctrlIP:10.1.11.180 ctrlMac:0a:bb:58:c3:a6:6e hostIPs:[10.1.11.180 172.17.0.1 172.18.0.1 10.1.11.180] host:docker180 region:ffffffff-ffff-ffff-ffff-ffffffffffff}}
2022/11/11 20:12:43 /home/runnerx/actions-runner/_work/deepflow/deepflow/server/controller/trisolaris/dbmgr/dbmgr.go:78 record not found [1.185ms] [rows:0] SELECT * FROM host_device WHERE ip IN ('10.1.11.180','172.17.0.1','172.18.0.1','10.1.11.180') AND host_device.deleted_at IS NULL ORDER BY host_device.id LIMIT 1
2022-11-11 20:12:43.074 [ERRO] [trisolaris/vtap] vtap_discovery.go:173 vtap(10.1.11.180-0a:bb:58:c3:a6:6e) query host_device failed from host_ips([10.1.11.180 172.17.0.1 172.18.0.1 10.1.11.180]), err: record not found
2022/11/11 20:12:43 /home/runnerx/actions-runner/_work/deepflow/deepflow/server/controller/trisolaris/dbmgr/dbmgr.go:150 record not found [1.018ms] [rows:0] SELECT * FROM host_device WHERE name = 'docker180' AND host_device.deleted_at IS NULL ORDER BY host_device.id LIMIT 1
2022-11-11 20:12:43.075 [ERRO] [trisolaris/vtap] vtap_discovery.go:176 vtap(10.1.11.180-0a:bb:58:c3:a6:6e) query host_device failed from host(docker180), err: record not found
2022-11-11 20:12:43.078 [ERRO] [trisolaris/vtap] vtap_discovery.go:368 vtap(10.1.11.180-0a:bb:58:c3:a6:6e) vinterface_ip([10.1.11.180 172.17.0.1 172.18.0.1 10.1.11.180]) not found
2022-11-11 20:12:43.085 [INFO] [trisolaris/synchronize] ntp.go:37 request ntp proxcy from ip: 10.1.11.180
2022/11/11 20:12:43 /home/runnerx/actions-runner/_work/deepflow/deepflow/server/controller/trisolaris/dbmgr/dbmgr.go:150 record not found [1.715ms] [rows:0] SELECT * FROM vtap WHERE name = 'docker180-W5' ORDER BY vtap.id LIMIT 1
2022-11-11 20:12:43.088 [INFO] [trisolaris/vtap] vtap_discovery.go:123 finish register vtap (type: 3 name:docker180-W5 ctrl_ip: 10.1.11.180 ctrl_mac: 0a:bb:58:c3:a6:6e launch_server: 10.1.11.180 launch_server_id: 5 vtap_group_lcuuid: a24e9060-d6fe-4f87-a30f-e8e2597f3fb3 az: ce6708ef-78eb-53b4-956b-ea43631fcf43 lcuuid: f4f61202-db2c-5453-af4b-b5c60bc76c34)
2022-11-11 20:12:43.106 [WARN] [trisolaris/vtap] vtap_cache.go:252 vtap(10.1.11.180-0a:bb:58:c3:a6:6e) no license functions
2022-11-11 20:12:43.106 [WARN] [trisolaris/vtap] vtap_cache.go:715 vtap(docker180-W5) not found VPCID
2022-11-11 20:12:45.531 [ERRO] [trisolaris/synchronize] vtap.go:198 vtap(10.1.11.180) has no proxy_controller_ip
2022-11-11 20:12:45.531 [INFO] [trisolaris/synchronize] vtap.go:618 push data ctrl_ip is 10.1.11.233, ctrl_mac is f6:ae:ca:b3:8c:22, host_ips is [10.1.11.233 172.17.0.1 10.96.1.137 10.96.3.82 10.96.2.144 10.96.0.10 10.96.2.35 10.96.0.169 10.96.1.27 10.96.3.126 10.96.0.191 10.96.1.85 10.96.2.160 10.96.3.6 10.96.3.219 10.96.0.237 10.96.3.116 10.96.2.99 10.96.3.54 10.96.3.4 10.96.0.1 10.96.2.117 10.96.1.253 10.96.1.101 10.96.3.43 10.96.1.219 10.96.2.254 10.96.3.76 10.96.0.172 10.96.3.252 10.96.0.105 100.111.55.128], (platform data version 1669171088 -> 1669171087), (acls version 2668168365 -> 2668168365), (groups version 1668168165 -> 1668168165), NAME:deepflow-agent-ce REVISION:v6.1.6 7062-1fbf03d5bd8e1e772dfa916ef91010a8566f5a41 BOOT_TIME:1668163359
```


On the client:

```
[2022-11-11 20:09:54.690974 +08:00] INFO [src/ebpf_collector/ebpf_collector.rs:811] ebpf collector config change from EbpfConfig { collector_enabled: true, l7_metrics_enabled: true, vtap_id: 0, epc_id: 0, l7_log_packet_size: 1024, l7_log_session_timeout: 120s, l7_protocol_inference_max_fail_count: 50, l7_protocol_inference_ttl: 60, log_path: "", l7_log_tap_types: [ ( 0, true, ), ], ctrl_mac: 0a:bb:58:c3:a6:6e, ebpf-disabled: false, } to EbpfConfig { collector_enabled: true, l7_metrics_enabled: true, vtap_id: 0, epc_id: 0, l7_log_packet_size: 1024, l7_log_session_timeout: 120s, l7_protocol_inference_max_fail_count: 50, l7_protocol_inference_ttl: 60, log_path: "", l7_log_tap_types: [ ( 0, true, ), ], ctrl_mac: 0a:bb:58:c3:a6:6e, ebpf-disabled: false, }.
[2022-11-11 20:09:57.702587 +08:00] ERROR [src/sender/uniform_sender.rs:354] 2-doc-to-collector-sender sender tcp connection to 0.0.0.0:30033 failed
[2022-11-11 20:10:01.526670 +08:00] ERROR [src/sender/uniform_sender.rs:354] 3-flow-to-collector-sender sender tcp connection to 0.0.0.0:30033 failed
```

Client version:

```
7062-1fbf03d5bd8e1e772dfa916ef91010a8566f5a41
Name: deepflow-agent community edition
Branch: v6.1.6
CommitId: 1fbf03d5bd8e1e772dfa916ef91010a8566f5a41
RevCount: 7062
Compiler: rustc 1.65.0 (897e37553 2022-11-02)
CompileTime: 2022-11-10 01:58:57
```

Nick-0314 commented 1 year ago

Look at the output of:

```
deepflow-ctl agent-group-config list -o yaml
kubectl logs -n deepflow deploy/deepflow-server | grep trident-type
```

Nick-0314 commented 1 year ago

It looks like we need to configure this, per https://deepflow.yunshan.net/docs/zh/install/legacy-host/#%E6%9B%B4%E6%96%B0-deepflow-server-%E9%85%8D%E7%BD%AE:

```yaml
configmap:
  server.yaml:
    controller:
      trisolaris:
        trident-type-for-unkonw-vtap: 3  # required
```

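Merged into a `values-custom.yaml` like the poster's, the file would look roughly like this. This is a sketch: the `local_ip_ranges` entries are elided in the thread and left as a placeholder, and the `trident-type-for-unkonw-vtap` key keeps the upstream spelling:

```yaml
# Sketch of values-custom.yaml with the trisolaris setting merged in.
global:
  allInOneLocalStorage: true
configmap:
  server.yaml:
    controller:
      genesis:
        local_ip_ranges: []              # placeholder; the poster's actual ranges are elided in the thread
      trisolaris:
        trident-type-for-unkonw-vtap: 3  # required per the linked doc
```

After editing, the change would typically be applied with something like `helm upgrade deepflow -n deepflow -f values-custom.yaml deepflow/deepflow` (release and chart names here are assumptions from the All-in-One install), then verified with `kubectl logs -n deepflow deploy/deepflow-server | grep trident-type`.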
fanspace commented 1 year ago

I had already done that:

```
cat values-custom.yaml

global:
  allInOneLocalStorage: true
configmap:
  server.yaml:
    controller:
      genesis:
        local_ip_ranges:
```

Nick-0314 commented 1 year ago

Check to see whether the server has applied this configuration:

```
kubectl logs -n deepflow deploy/deepflow-server | grep trident-type
```
fanspace commented 1 year ago

> Check to see whether the server has applied this configuration:
>
> kubectl logs -n deepflow deploy/deepflow-server | grep trident-type

```
kubectl logs -n deepflow deploy/deepflow-server | grep trident-type
trident-type-for-unkonw-vtap: 3
```

```
deepflow-ctl agent-group-config list -o yaml
vtap_group_id: g-bqErqujVSf
platform_enabled: 1
```

Nick-0314 commented 1 year ago

Could you add the WeChat account at the bottom of the README? Let's discuss this over WeChat.