Closed: egwakim closed this issue 5 years ago.
Could we have a short meeting about this?
OK, I sent an email to your Gmail address.
What about next week? I’m on my way from Cisco Live.
Hanoh
OK, I have available time slots on 19-Jun and 20-Jun.
20-Jun is ok with me.
OK, I sent the invitation email.
Let me summarize our discussion. There are two ways to solve this:
1) Add hardware filters
2) Move to a software model, which is more flexible but has lower performance

We decided to start with #2. The tasks:

BPF does not support callbacks/actions (it is only a binary match/no-match; only eBPF can do that). From a performance perspective it would be beneficial to merge the rules, for example:
rule1: (ip.src == 10.0.0.1), action: counter = 1
rule2: (tcp.dport == 80), action: counter = 2

rule2 will repeat the eth(0x0800)/ip (skip) steps of rule1, while we could have one tree with many exit points. But that is not supported.

Optimizations:
1) Add a way to fast-skip: merge all the rules, and if the merged filter does not match, do not run the individual rules.
2) Consider moving to eBPF (https://github.com/iovisor/ubpf), which supports action callbacks and hashes (beneficial when we have many rules of the same type, e.g. ip=10.0.0.1 cnt=1, ip=10.0.0.2 cnt=2, ip=10.0.0.7 cnt=3, ip=10.0.0.11 cnt=4, etc.)

I would start with a Python API running on the Rx core, then move to scale/performance/eBPF. Any thoughts?
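The rules-with-actions model described above can be sketched in plain Python. This is illustrative only, not TRex code: the packet fields and rule table are invented for the example, and a real implementation would compile the rules to BPF/eBPF rather than evaluate Python lambdas per packet.

```python
# Illustrative sketch: an eBPF-style rule table where each rule carries
# an action (here, incrementing a counter). Classic BPF can only report
# match/no-match, so each rule would be a separate filter that repeats
# the eth/ip parse; eBPF could share one parse tree with many exit points.

counters = {1: 0, 2: 0}

# Each rule: (predicate over a parsed packet, counter id to bump on match)
rules = [
    (lambda pkt: pkt.get("ip_src") == "10.0.0.1", 1),  # rule1
    (lambda pkt: pkt.get("tcp_dport") == 80, 2),       # rule2
]

def run_rules(pkt):
    # Run every rule and fire its action on match.
    for predicate, counter_id in rules:
        if predicate(pkt):
            counters[counter_id] += 1

run_rules({"ip_src": "10.0.0.1", "tcp_dport": 443})  # matches rule1 only
run_rules({"ip_src": "10.0.0.2", "tcp_dport": 80})   # matches rule2 only
```

The fast-skip optimization mentioned above would correspond to first evaluating one merged predicate (the union of all rules) and only walking the per-rule table when it matches.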
Hi Hanoh, there is an issue with this approach. The problem is that the current H/W filters drop all Ethernet packets except IP packets, so these packets (custom-header packets) will never reach the BPF filter.
Currently there are four kinds of H/W filter drivers (1G, 10G, 40G, 82559) to modify, and these are configured through DPDK APIs.
The idea of the "Scale of Rx" task is that there would be a different way to load the server. In this mode all packets are seen by the DP cores; if there are no rules, the DP cores simply drop the packets.
Another alternative is to use the pass-all filter (already implemented), which will work in conjunction with the RSS configuration. The latter mode will be backward compatible with the way STL currently works.
OK, I see. The "--software" option requires a restart of the TRex server, which is really inconvenient for users. Capturing is not the normal case; it is a special debug case, and it should not impact normal operation. And "Scale of Rx" requires more total CPU resources, which is an additional cost from the user's point of view. Consider how tcpdump works: it enables promiscuous mode on start and disables it on exit. Like tcpdump, I think it's better to enable the filter when capture starts.
What is your opinion?
As I said, another option is: "Another alternative is to use the pass-all filter (already implemented), which will work in conjunction with the RSS configuration. The latter mode will be backward compatible with the way STL currently works."
This will work the same, but gives the option to scale some functionality of the Rx core. On platforms that do not support HW filters at all (virtio/vmxnet3/SR-IOV), the only option is to work in multi-core (the DP path).
BTW, maybe I've missed it: from your requirements perspective, is it enough to filter only by the ether-type value? If yes, this would be pretty easy to implement as an exception path for all drivers, and only for capturing. The scope of my proposal is broader and more complicated to implement; eBPF is required for the more general solution (mainly for performance).
Yes, the user requirement was "capture a custom Ethernet type without performance degradation". I think we can add it on top of the BPF filter.
For example, when the user specifies a BPF filter in TRex, e.g. "service> capture monitor ether proto 0xabcd", add the 0xabcd ether-type to the H/W filter (1G, 10G, 40G). Then we can capture these packets with minimum overhead and no additional API needed.
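For illustration, the check such an ether-type filter performs is just a comparison of the 2-byte EtherType field at offset 12 of an untagged Ethernet frame. This is a minimal sketch, not TRex code: the helper name and the set of configured types are invented, VLAN-tagged frames are ignored for simplicity, and 0xABCD is the example value from this discussion, not a registered EtherType.

```python
import struct

# Configured custom ether-types (example value from the discussion).
CUSTOM_ETHER_TYPES = {0xABCD}

def is_custom_frame(frame: bytes) -> bool:
    """Check the EtherType of an untagged Ethernet frame (no VLAN handling)."""
    if len(frame) < 14:  # minimum Ethernet header: 6 dst + 6 src + 2 type
        return False
    (ether_type,) = struct.unpack_from("!H", frame, 12)  # big-endian u16
    return ether_type in CUSTOM_ETHER_TYPES

# Broadcast dst, zero src, EtherType 0xABCD, dummy payload
frame = b"\xff" * 6 + b"\x00" * 6 + b"\xab\xcd" + b"payload"
```

An IPv4 frame (EtherType 0x0800) would not match, which is exactly the behavior the H/W filter provides without spending DP-core cycles.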
Hi Hanoh, I checked the current code; it has an issue when loading a profile. If we add the filter in the capturing phase, we cannot load a profile.
I think the original approach (adding "custom_packet_types") seems a better way for the current requirement scope ("capture a custom Ethernet type without performance degradation and without restarting the TRex server").
What is your opinion?
Yes, I agree. HW filter support per ether-type is already there for most of the drivers; we use it to forward everything to the Rx core (service mode).
Today we have:
Let’s add another mode, a “service filter mode”, and give the ability (an API) to add/remove ether-types, and maybe more in the future.
Can you say which ether-types you are interested in? Thanks, Hanoh
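A hypothetical sketch of what such an add/remove API surface could look like. All names here are invented for illustration; the real API would be exposed through the TRex Python client / JSON-RPC layer and would program the NIC's HW filters underneath.

```python
# Hypothetical "service filter mode" API sketch (names invented):
# maintain the set of ether-types that should be forwarded to the Rx core.

class ServiceFilter:
    def __init__(self):
        self._ether_types = set()

    def add_ether_type(self, ether_type: int) -> None:
        # EtherType values below 0x0600 are 802.3 length fields, not types.
        if not 0x0600 <= ether_type <= 0xFFFF:
            raise ValueError("not a valid EtherType value")
        self._ether_types.add(ether_type)

    def remove_ether_type(self, ether_type: int) -> None:
        self._ether_types.discard(ether_type)

    def matches(self, ether_type: int) -> bool:
        return ether_type in self._ether_types

f = ServiceFilter()
f.add_ether_type(0xABCD)  # the example custom type from this thread
```

In a real implementation, add/remove would also push the corresponding HW filter change to each driver (1G/10G/40G), with a software fallback on NICs that lack HW filters.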
Hi Hanoh, good suggestion, but it has a problem when adding a stream with a custom packet. Our Ethernet packet type is the non-standard 0xeb??, so adding the stream fails in flow_stat_parser. See the example log:

---Example log -------------
trex>start -f stl/myL2_1pkt.py -p 0
Attaching 1 streams to port(s) [0]: [FAILED]
Error loading profile 'stl/myL2_1pkt.py'
Port 0 : *** Failed parsing given packet for flow stat. NIC does not support given L2 header type

That's why I suggested implementation #1, to avoid the loading failure. The filter should also be added before users try to add streams; to avoid that situation, I think it's better to add the filter in the TRex config file.

Syntax (in trex_cfg.yaml): custom_packet_types : { ethernet : "0xABCD" }
Description: define custom packet types to capture user-defined Ethernet packets.

Implementation #1: if custom Ethernet types are defined, check the custom ether-type when loading a profile instead of returning an error with FSTAT_PARSER_E_UNKNOWN_HDR (src/flow_stat_parser.cpp).
Implementation #2: if custom Ethernet types are defined, add the custom ether-type filter in set_rcv_all for all NIC types (1G, 10G, 40G) (src/main_dpdk.cpp).
Hi,
You said that the requirement is just to capture the traffic? Why would you want to have statistics per this stream?
The reason we do this verification is that only a specific set of packets matches the HW filter, and the check verifies exactly that.
I suggest not to mix per-stream stats and capturing capability.
If you want to enhance stream stats, let's go back to my general suggestion, which is more complex but more general, based on BPF/eBPF.
See here for the HW filters per ether-type:
https://github.com/cisco-system-traffic-generator/trex-core/blob/master/src/main_dpdk.cpp#L151
thanks,
Hanoh
Hi, when we add the new filter, I did not want to introduce any new limitations for users. I thought it should work in both cases (per-stream stats enabled or disabled). That's why we tried to change the flow_stat code.
Your original suggestion seems good, but it may take some time. I need to check whether we can delay the feature. BTW, if we implement your original suggestion, which part should I implement, and which part will you implement?
Thanks Gwangmoon
Hi Gwangmoon,
It is still not clear to me whether there is a strong requirement to count "ether-type" streams.
I think it would be awkward to add this logic to per-stream stats, as you tried to do.
The original objective of per-stream stats was to give a very simple and quick solution for mapping a stream to counters using ipv4.id or ipv6.id. This solution is not scalable and not perfect, but it solves 90% of the requirements.
For more general counting, it is better to add support for rules in software/HW without that association.
Let me summarize:
If it answers your need, I would start with the simplest solution: add another service-HW-filter mode and decouple capture from per-stream stats.
If it doesn't, let's review the first solution; we won't have time to help there.
thanks Hanoh
Hi Hanoh, I discussed it with the users; "per stream stat" is not a strong requirement. Let's go with the simplest solution (the service HW filter) without per-stream stats.
Thanks Gwangmoon
@egwakim Is there a plan to work on that?
Hi Hanoh, Jeongseok merged the required change for this: https://github.com/cisco-system-traffic-generator/trex-core/commit/ffb83f1499fb03f76ad8beb4a4b29bbd72a64b74
With this change, we can now:
1) Send custom Ethernet packets
2) Capture custom Ethernet packets
3) Get statistics per pg_id
I think we can close this ticket.
Can you review whether this is an acceptable design for TRex? If it is acceptable, we want to push it to TRex.
Syntax (in trex_cfg.yaml):
custom_packet_types : { ethernet : "0xABCD" }
Description: define custom packet types to capture user-defined Ethernet packets.
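For illustration, handling the proposed config entry amounts to parsing the hex string into an ether-type value and validating it. This is a hedged sketch: the key names follow the syntax above, but the helper itself is hypothetical (TRex's actual config parsing lives in the C++ server), and it assumes a single ether-type per the current syntax.

```python
# Hypothetical sketch of parsing the proposed trex_cfg.yaml entry
# (assumes the YAML has already been loaded into a Python dict).

def parse_custom_packet_types(cfg: dict) -> set:
    """Return the set of custom ether-types from a parsed config mapping."""
    entry = cfg.get("custom_packet_types", {})
    raw = entry.get("ethernet")
    if raw is None:
        return set()
    value = int(raw, 16)  # "0xABCD" -> 43981
    # EtherType values below 0x0600 are 802.3 length fields, not types.
    if not 0x0600 <= value <= 0xFFFF:
        raise ValueError("invalid EtherType: %#x" % value)
    return {value}

types = parse_custom_packet_types({"custom_packet_types": {"ethernet": "0xABCD"}})
```

Whether multiple types should be allowed is exactly open question 3 below; a multi-type syntax would return a larger set from the same helper.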
Open questions:
1) trex_cfg.yaml vs. JSON-RPC commands?
2) custom_packet_types vs. custom_ethernet_types?
3) Number of custom packet types: multiple vs. single?

Implementation #2: if custom Ethernet types are defined, add the custom ether-type filter in set_rcv_all for all NIC types (1G, 10G, 40G) (src/main_dpdk.cpp).