netobserv / flowlogs-pipeline

Transform flow logs into metrics
Apache License 2.0

OCP correctness: validate amount of bytes and count of flows against known workload #41

Open eranra opened 2 years ago

eranra commented 2 years ago

Create a workload that generates a known set of flows and a known amount of bytes ... deploy it on OCP and observe it using flowlogs2metrics. Verify that the number of bytes and the count of flows match expectations.

(From previous work, including work at IBM, we know that the flow information we get can be duplicated and inaccurate; this test will prove that we provide the correct numbers when working with OCP.)
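For illustration only, here is a minimal Go sketch of what such a known-traffic generator could look like (the endpoint, request count, and payload size are placeholders, not the actual test code):

```go
// known_workload.go - hypothetical sketch, not the actual test code from this repo.
// Sends a fixed number of fixed-size payloads so the total byte count is known
// up front and can be compared against the flow-log derived metrics.
package main

import (
	"fmt"
	"net/http"
	"strings"
)

const (
	requests    = 100                          // number of requests to send
	payloadSize = 1024                         // bytes of body per request
	target      = "http://example.com/upload"  // placeholder endpoint
)

func main() {
	payload := strings.Repeat("x", payloadSize)
	sent := 0
	for i := 0; i < requests; i++ {
		resp, err := http.Post(target, "application/octet-stream", strings.NewReader(payload))
		if err != nil {
			fmt.Println("request failed:", err)
			continue
		}
		resp.Body.Close()
		sent += payloadSize
	}
	// Expected application-level byte count; flow logs will report somewhat more
	// because of TCP/IP and HTTP header overhead.
	fmt.Printf("sent %d requests, %d payload bytes\n", requests, sent)
}
```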

eranra commented 2 years ago

Creating a known workload in one of the pods using PR #45

Comparing OpenShift metrics for single-pod bandwidth:

[image: OpenShift metrics for single-pod bandwidth]

with the values we get from flow logs:

[image: bandwidth values derived from flow logs]

It looks like we are missing logs. Need to debug ASAP!

eranra commented 2 years ago

Current results:

$ kubectl logs -l app=flowlogs2metrics  -f  | grep test-workload
Jan 25 13:39:05.927: map[bytes:20800 dstIP:90.130.74.113 dstPort:80 proto:6 srcIP:10.128.4.25 srcK8S_Labels_app:test-workload srcK8S_Name:network-workload srcK8S_Namespace:example-workload srcK8S_Type:pod srcPort:37422]
Jan 25 13:39:05.927: map[bytes:20800 dstIP:90.130.74.113 dstPort:80 proto:6 srcIP:10.128.4.25 srcK8S_Labels_app:test-workload srcK8S_Name:network-workload srcK8S_Namespace:example-workload srcK8S_Type:pod srcPort:37422]
Jan 25 13:41:10.194: map[bytes:20800 dstIP:90.130.74.113 dstPort:80 proto:6 srcIP:10.128.4.25 srcK8S_Labels_app:test-workload srcK8S_Name:network-workload srcK8S_Namespace:example-workload srcK8S_Type:pod srcPort:38064]
Jan 25 13:41:10.194: map[bytes:20800 dstIP:90.130.74.113 dstPort:80 proto:6 srcIP:10.128.4.25 srcK8S_Labels_app:test-workload srcK8S_Name:network-workload srcK8S_Namespace:example-workload srcK8S_Type:pod srcPort:38064]
Jan 25 13:41:10.194: map[bytes:20800 dstIP:90.130.74.113 dstPort:80 proto:6 srcIP:10.128.4.25 srcK8S_Labels_app:test-workload srcK8S_Name:network-workload srcK8S_Namespace:example-workload srcK8S_Type:pod srcPort:38064]
Jan 25 13:43:09.721: map[bytes:20800 dstIP:90.130.74.113 dstPort:80 proto:6 srcIP:10.128.4.25 srcK8S_Labels_app:test-workload srcK8S_Name:network-workload srcK8S_Namespace:example-workload srcK8S_Type:pod srcPort:38698]
Jan 25 13:43:09.721: map[bytes:20800 dstIP:90.130.74.113 dstPort:80 proto:6 srcIP:10.128.4.25 srcK8S_Labels_app:test-workload srcK8S_Name:network-workload srcK8S_Namespace:example-workload srcK8S_Type:pod srcPort:38698]
Jan 25 13:45:10.081: map[bytes:20800 dstIP:90.130.74.113 dstPort:80 proto:6 srcIP:10.128.4.25 srcK8S_Labels_app:test-workload srcK8S_Name:network-workload srcK8S_Namespace:example-workload srcK8S_Type:pod srcPort:39344]
Jan 25 13:45:10.081: map[bytes:20800 dstIP:90.130.74.113 dstPort:80 proto:6 srcIP:10.128.4.25 srcK8S_Labels_app:test-workload srcK8S_Name:network-workload srcK8S_Namespace:example-workload srcK8S_Type:pod srcPort:39344]
Jan 25 13:45:10.081: map[bytes:20800 dstIP:90.130.74.113 dstPort:80 proto:6 srcIP:10.128.4.25 srcK8S_Labels_app:test-workload srcK8S_Name:network-workload srcK8S_Namespace:example-workload srcK8S_Type:pod srcPort:39344]

@jotak I see that we get only egress from pods to the internet, and no ingress from the internet ... does this make sense?

jotak commented 2 years ago

How did you enable the OVS IPFIX exports? Using ovn-k / cluster network operator, or something different? I'm actually surprised that it catches egress with an external IP; I thought that wasn't covered by the current settings (using br-int).

eranra commented 2 years ago

@jotak I will try to morph the test to observe pod-to-pod traffic just to extend the information ... I was not aware that we will not get flow logs for external traffic. Any plans to expose flow logs for external traffic in OCP? (This will be very much needed for hybrid cloud scenarios.)

jotak commented 2 years ago

Yes, we definitely want to monitor external traffic as well, cf. https://issues.redhat.com/browse/NETOBSERV-146. From the discussion we had with @amorenoz @astoycos, it shouldn't be a big deal to have it (a matter of turning on IPFIX exports on br-ex ... we had a discussion about that on Slack previously).

amorenoz commented 2 years ago

@astoycos will know better than I, but from what I understand, capturing on br-ex will allow us to see what goes in/out of the entire cluster. If we just need to see "traffic between pod A and the external world", I think we still can. We just need to add a filter where External_World = any IP that is not (in the pod CIDR, in the service CIDR, or a node IP).
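As a rough Go sketch of that filter idea (the CIDRs, type names, and function names below are placeholders, not an existing API in this repo):

```go
// externalfilter.go - hypothetical sketch of the "External_World" filter idea:
// an IP is considered external if it is not in the pod CIDR, not in the
// service CIDR, and not one of the node IPs.
package main

import (
	"fmt"
	"net"
)

type ExternalFilter struct {
	podCIDR     *net.IPNet
	serviceCIDR *net.IPNet
	nodeIPs     map[string]bool
}

func NewExternalFilter(podCIDR, serviceCIDR string, nodeIPs []string) (*ExternalFilter, error) {
	_, podNet, err := net.ParseCIDR(podCIDR)
	if err != nil {
		return nil, err
	}
	_, svcNet, err := net.ParseCIDR(serviceCIDR)
	if err != nil {
		return nil, err
	}
	nodes := map[string]bool{}
	for _, ip := range nodeIPs {
		nodes[ip] = true
	}
	return &ExternalFilter{podCIDR: podNet, serviceCIDR: svcNet, nodeIPs: nodes}, nil
}

// IsExternal reports whether ip belongs to the "external world".
func (f *ExternalFilter) IsExternal(ip string) bool {
	parsed := net.ParseIP(ip)
	if parsed == nil {
		return false
	}
	if f.podCIDR.Contains(parsed) || f.serviceCIDR.Contains(parsed) || f.nodeIPs[ip] {
		return false
	}
	return true
}

func main() {
	// Example CIDRs are placeholders; use the cluster's actual pod/service CIDRs.
	f, _ := NewExternalFilter("10.128.0.0/14", "172.30.0.0/16", []string{"192.168.1.10"})
	fmt.Println(f.IsExternal("10.128.4.25"))   // false: pod IP
	fmt.Println(f.IsExternal("90.130.74.113")) // true: external
}
```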

eranra commented 2 years ago

kubectl logs flowlogs2metrics-5cd69d76d5-zf4dg | grep pod-to-pod-workload
Jan 26 12:01:41.885: map[bytes:35600 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:39378]
Jan 26 12:01:41.885: map[bytes:20800 dstIP:10.128.2.26 dstK8S_Labels_app:iperf-client dstK8S_Name:iperf-client dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:39380 proto:6 srcIP:10.131.2.27 srcK8S_Labels_app:iperf-server srcK8S_Name:iperf-server srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:3000]
Jan 26 12:01:58.895: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:39380]
Jan 26 12:01:58.895: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:39380]
Jan 26 12:01:58.895: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:39380]
Jan 26 12:01:58.895: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:39380]
Jan 26 12:01:58.895: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:39380]
Jan 26 12:01:58.895: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:39380]
Jan 26 12:01:58.895: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:39380]
Jan 26 12:04:19.146: map[bytes:20800 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:40108]
Jan 26 12:04:19.146: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:40110]
Jan 26 12:04:19.146: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:40110]
Jan 26 12:04:19.146: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:40110]
Jan 26 12:04:19.147: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:40110]
Jan 26 12:04:19.147: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:40110]
Jan 26 12:04:19.147: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:40110]
Jan 26 12:04:19.148: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:40110]
Jan 26 12:04:19.148: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:40110]
Jan 26 12:06:43.127: map[bytes:21200 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:40866]
Jan 26 12:09:11.031: map[bytes:20800 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:41616]
Jan 26 12:13:48.754: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:43100]
Jan 26 12:13:48.754: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:43100]
Jan 26 12:13:48.754: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:43100]
Jan 26 12:18:18.466: map[bytes:20800 dstIP:10.128.2.26 dstK8S_Labels_app:iperf-client dstK8S_Name:iperf-client dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:44594 proto:6 srcIP:10.131.2.27 srcK8S_Labels_app:iperf-server srcK8S_Name:iperf-server srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:3000]
Jan 26 12:23:08.995: map[bytes:20800 dstIP:10.128.2.26 dstK8S_Labels_app:iperf-client dstK8S_Name:iperf-client dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:46100 proto:6 srcIP:10.131.2.27 srcK8S_Labels_app:iperf-server srcK8S_Name:iperf-server srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:3000]
Jan 26 12:25:03.214: map[bytes:20800 dstIP:10.128.2.26 dstK8S_Labels_app:iperf-client dstK8S_Name:iperf-client dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:46830 proto:6 srcIP:10.131.2.27 srcK8S_Labels_app:iperf-server srcK8S_Name:iperf-server srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:3000]
Jan 26 12:25:18.301: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:46830]
Jan 26 12:25:18.301: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:46830]
Jan 26 12:25:18.301: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:46830]
Jan 26 12:25:18.301: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:46830]
Jan 26 12:25:18.301: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:46830]
Jan 26 12:25:18.301: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:46830]
Jan 26 12:25:18.301: map[bytes:3.5604e+06 dstIP:10.131.2.27 dstK8S_Labels_app:iperf-server dstK8S_Name:iperf-server dstK8S_Namespace:pod-to-pod-workload dstK8S_Type:pod dstPort:3000 proto:6 srcIP:10.128.2.26 srcK8S_Labels_app:iperf-client srcK8S_Name:iperf-client srcK8S_Namespace:pod-to-pod-workload srcK8S_Type:pod srcPort:46830]

eranra commented 2 years ago

@jotak @amorenoz @astoycos you can look above ... I improved the code to have an ingress workload, an egress workload, and pod-to-pod. Above I shared the pod-to-pod results ... we can clearly see that, for example, at 12:18 we miss a lot of flow logs.

Just to explain the test ... the pod-to-pod code runs iperf from client to server for 30 seconds and then waits 120 seconds ... I expected something much more consistent. I understand that the OVS code aggregates packets to some extent before creating a new flow record ... this is why we get multiple NetFlow records per connection (port), but I do not understand why we are missing flow logs.

REMINDER: Look at PR #45 for the test code.
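For illustration, a rough Go sketch of what that pod-to-pod cycle does (the iperf binary name, flags, and server address are assumptions; the real test code is in PR #45):

```go
// podtopod_loop.go - hypothetical sketch of the pod-to-pod test cycle described
// above: run an iperf client against the server pod for 30 seconds, then stay
// idle for 120 seconds, and repeat.
package main

import (
	"log"
	"os/exec"
	"time"
)

const (
	serverAddr   = "iperf-server.pod-to-pod-workload.svc" // placeholder address
	runSeconds   = "30"
	idleDuration = 120 * time.Second
)

func main() {
	for {
		// Generate traffic toward the server's port 3000 for 30 seconds.
		cmd := exec.Command("iperf3", "-c", serverAddr, "-p", "3000", "-t", runSeconds)
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Printf("iperf run failed: %v\n%s", err, out)
		} else {
			log.Printf("iperf run finished:\n%s", out)
		}
		// Quiet period: no traffic is sent, so any flow records seen here
		// would be stale or duplicated exports rather than new traffic.
		time.Sleep(idleDuration)
	}
}
```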

eranra commented 2 years ago

https://github.com/netobserv/flowlogs2metrics/issues/41

Relevant links:

http://www.openvswitch.org/support/dist-docs/ovs-vsctl.8.txt

https://github.com/openshift/ovn-kubernetes/blob/master/contrib/kind.sh#L61
https://github.com/openshift/ovn-kubernetes/blob/78a192cc1e6a1925f03712842a521d5dd6bdab03/dist/images/ovnkube.sh#L1207
https://github.com/openshift/ovn-kubernetes/blob/78a192cc1e6a1925f03712842a521d5dd6bdab03/go-controller/pkg/node/node.go#L106
https://github.com/openshift/ovn-kubernetes/blob/78a192cc1e6a1925f03712842a521d5dd6bdab03/go-controller/pkg/node/node.go#L165

https://github.com/openvswitch/ovs/blob/master/ofproto/netflow.c

eranra commented 2 years ago

Some decisions to increase the correctness of the information:

Tactically, we need to add connection tracking to the code ASAP ... we will:

(1) Add a field to indicate whether a flow is new, and use this field when counting - this will increase the accuracy of count metrics.
(2) Add sequence-number tracking and a "delta bytes" field derived from it - this will increase the accuracy of byte-count and bandwidth information.

Still, we will miss some data due to sampling; this is the best we can get given the current NetFlow implementation.
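To make the idea concrete, here is a simplified Go sketch of both items (type and field names are illustrative, and cumulative per-connection byte counts stand in for the sequence-number tracking; this is not the pipeline's actual code):

```go
// conntrack_sketch.go - simplified illustration of the two tactical items:
// (1) mark a flow record as "new" the first time its connection is seen, and
// (2) derive delta bytes from the running per-connection byte count, so that
// repeated or cumulative exports are not double-counted.
package main

import "fmt"

type FlowRecord struct {
	SrcIP, DstIP     string
	SrcPort, DstPort int
	Bytes            int // bytes as reported by the exporter (may be cumulative)
	IsNewFlow        bool
	DeltaBytes       int
}

type ConnTracker struct {
	seenBytes map[string]int // connection key -> last reported byte count
}

func NewConnTracker() *ConnTracker {
	return &ConnTracker{seenBytes: map[string]int{}}
}

func key(r *FlowRecord) string {
	return fmt.Sprintf("%s:%d->%s:%d", r.SrcIP, r.SrcPort, r.DstIP, r.DstPort)
}

// Track annotates the record: IsNewFlow for count metrics, DeltaBytes for
// byte and bandwidth metrics.
func (t *ConnTracker) Track(r *FlowRecord) {
	k := key(r)
	prev, seen := t.seenBytes[k]
	r.IsNewFlow = !seen
	if r.Bytes >= prev {
		r.DeltaBytes = r.Bytes - prev
	} else {
		r.DeltaBytes = r.Bytes // counter reset or new export cycle
	}
	t.seenBytes[k] = r.Bytes
}

func main() {
	t := NewConnTracker()
	a := &FlowRecord{SrcIP: "10.128.2.26", DstIP: "10.131.2.27", SrcPort: 46830, DstPort: 3000, Bytes: 20800}
	b := &FlowRecord{SrcIP: "10.128.2.26", DstIP: "10.131.2.27", SrcPort: 46830, DstPort: 3000, Bytes: 3560400}
	t.Track(a)
	t.Track(b)
	fmt.Println(a.IsNewFlow, a.DeltaBytes) // true 20800
	fmt.Println(b.IsNewFlow, b.DeltaBytes) // false 3539600
}
```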

Looking forward, options and activities:

(1) Work on OVS/OVN and see if we can use TCP flags such as FIN/SYN/RST to force the creation of NetFlow entries.
(2) Work with an alternative collector, such as Skydive eBPF, that is aware of the start and end of flows.
(3) Work with OpenFlow rather than NetFlow to obtain flow-log data ... more accurate, but specific to OVS.
(4) Foster dynamically increasing the sample rate for specific areas where we require extra granularity of information.
(5) Work with the RH team to get NetFlows also from the external switch, so we are able to observe ingress/egress to/from the cluster.

FYI ^^^ @jotak @stleerh @mariomac @KalmanMeth @KathyBarabash @ronensc

amorenoz commented 2 years ago

Generally, I think NetFlow is probably not the right tool for high-accuracy, conntrack-aware flow statistics. NetFlow/IPFIX et al. are not about accuracy; they are used for highly scalable network topology and traffic trend analysis. They typically sample (or suffer from bad performance) and often do not do header inspection to determine which packets are sampled.

Having said that, OVS does support per-flow sampling, which allows you to export NetFlow metrics from anything you want. It's based on OpenFlow, so we need to make OVN program it and ovn-k8s expose it; there's a BZ for investigation in the OVN team: https://bugzilla.redhat.com/show_bug.cgi?id=2038867

If you need conntrack support, you probably need to do that with eBPF or some other mechanism; however, note that conntrack zones are managed by OVS as well. Another alternative is to sample at generic places and manually correlate (like what the OCTO did for DNS: https://github.com/redhat-nfvpe/o11y). By the way, OVS now has eBPF tracing points which will allow probes to enrich traces with OVS decisions (e.g. drop, output:N, etc.).

eranra commented 2 years ago

@amorenoz thanks for the above :-) very relevant

I think we should (1) do the best we can with the data source we currently have (== use IPFIX) and (2) work on future versions that will be more accurate, maybe based on different/additional data sources.

I think that for most metrics we do not really need highly accurate information ... we need some level of correctness, but it doesn't have to be 100% accurate. Maybe the balance will be something customers can configure dynamically for a subset of the data. But this is really looking forward.

eranra commented 2 years ago

The situation is well described in the thread above, and this is not high priority anymore. We are now waiting for the eBPF work to improve the accuracy of the flow logs.