open-switch / opx-cps

https://openswitch.net

ACL match on OUT_INTF fails. #65

Closed mcallisterjp closed 6 years ago

mcallisterjp commented 6 years ago

Configuring an ACL entry with a match rule of OUT_INTF does not seem to successfully match packets which exit the router on the specified interface.

Test setup:

Ping B->A in the following setup:

B -------------------> Router ---------------> A
e101-001-0    e101-001-2  e101-001-1      e101-001-0
10.10.14.3    10.10.14.2  10.10.12.2      10.10.12.3

With no ACLs, this works. With the following config, which should drop egress packets on e101-001-1 on the Router, it still works. Pinging in the other direction works too. However, tcpdump on the Router shows only packets traversing from B to A.

Configuration:

(Note: other than the ACL configuration below, nothing was changed after reboot, and the system-flow ACL has LOWER priority than the one we configured.)

root@OPX:~# cps_get_oid.py base-acl/table
...
------------------------------------------------
base-acl/table/npu-id-list = 0
base-acl/table/id = 73
base-acl/table/stage = 1
base-acl/table/priority = 5
base-acl/table/allowed-match-fields = 5,6,39,40
base-acl/table/name = egress_tester
base-acl/table/size = 0
------------------------------------------------

root@OPX:~# cps_get_oid.py base-acl/entry
...
------------------------------------------------
base-acl/entry/table-name = egress_tester
base-acl/entry/action/PACKET_ACTION_VALUE = 1
base-acl/entry/action/type = 3
base-acl/entry/match/OUT_INTF_VALUE = e101-001-1
base-acl/entry/match/type = 40
base-acl/entry/npu-id-list = 0
base-acl/entry/table-id = 73
base-acl/entry/id = 1
base-acl/entry/priority = 393215
base-acl/entry/name = test
------------------------------------------------
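(An aside for readers decoding these dumps: the numeric enum values can be mapped to names. A minimal sketch — the values for 5, 6 and 40 are confirmed by outputs later in this thread; 39 being IN_INTF and the stage names 1=INGRESS / 2=EGRESS are assumptions inferred from the discussion:)

```python
# Render cps_get_oid.py enum values symbolically.
# 5/6/40 are confirmed by later output in this thread;
# 39 (IN_INTF) and the stage names are assumptions.
MATCH_TYPE = {5: "SRC_IP", 6: "DST_IP", 39: "IN_INTF", 40: "OUT_INTF"}
STAGE = {1: "INGRESS", 2: "EGRESS"}

def describe_table(stage, allowed):
    """Describe a table's stage and allowed-match-fields symbolically."""
    fields = ",".join(MATCH_TYPE.get(f, str(f)) for f in allowed)
    return "stage=%s fields=%s" % (STAGE.get(stage, str(stage)), fields)

# The egress_tester table above: stage=1, allowed-match-fields=5,6,39,40
print(describe_table(1, [5, 6, 39, 40]))
# -> stage=INGRESS fields=SRC_IP,DST_IP,IN_INTF,OUT_INTF
```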

Trace:

TCPdump from A on e101-001-0 shows lots of these:

17:42:04.326713 IP 10.10.14.3 > 10.10.12.3: ICMP echo request, id 1021, seq 16, length 64
17:42:04.326745 IP 10.10.12.3 > 10.10.14.3: ICMP echo reply, id 1021, seq 16, length 64

Traceroute on A:

root@NST-OPX-TEST-012:~# traceroute 10.10.14.3
traceroute to 10.10.14.3 (10.10.14.3), 30 hops max, 60 byte packets
1  10.10.12.2 (10.10.12.2)  0.494 ms  0.536 ms  0.754 ms
2  10.10.14.3 (10.10.14.3)  0.841 ms  0.813 ms  0.897 ms

tcpdump -i any icmp on the Router on e101-001-1 shows lots of these:

01:45:34.656996 IP 10.10.14.3 > 10.10.12.3: ICMP echo request, id 1021, seq 13, length 64

but none going the other direction (10.10.12.3 > 10.10.14.3).

IP addr and route configuration:

A:

root@NST-OPX-TEST-012:~# ip addr show e101-001-0
3: e101-001-0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:50:56:92:bb:18 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/31 brd 255.255.255.255 scope global e101-001-0
       valid_lft forever preferred_lft forever
    inet 10.10.12.3/24 scope global e101-001-0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe92:46a3/64 scope link
       valid_lft forever preferred_lft forever

root@NST-OPX-TEST-012:~# ip route
default via 172.17.0.1 dev eth0
10.10.12.0/24 dev e101-001-0  proto kernel  scope link  src 10.10.12.3
10.10.14.3 via 10.10.12.2 dev e101-001-0
172.17.0.0/18 dev eth0  proto kernel  scope link  src 172.17.32.11
192.168.0.0/31 dev e101-001-0  proto kernel  scope link  src 192.168.0.1

Router:

root@OPX:~# ip addr
...
38: e101-001-1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:17:eb:2c:d7:01 brd ff:ff:ff:ff:ff:ff
    inet 10.10.12.2/24 scope global e101-001-1
       valid_lft forever preferred_lft forever
    inet6 fdd9:edd5:139c:e7d3::2/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::3617:ebff:fe2c:d701/64 scope link
       valid_lft forever preferred_lft forever
39: e101-001-2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:17:eb:2c:d7:02 brd ff:ff:ff:ff:ff:ff
    inet 10.10.14.2/24 scope global e101-001-2
       valid_lft forever preferred_lft forever
    inet6 fdd9:edd5:139c:e7d4::2/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::3617:ebff:fe2c:d702/64 scope link
       valid_lft forever preferred_lft forever
...

B:

root@NST-OPX-TEST-014:~# ip addr show e101-001-0
3: e101-001-0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:50:56:92:b6:d8 brd ff:ff:ff:ff:ff:ff
    inet 10.10.14.3/24 scope global e101-001-0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe92:b6d8/64 scope link
       valid_lft forever preferred_lft forever

root@NST-OPX-TEST-014:~# ip route
default via 172.17.0.1 dev eth0
10.10.12.3 via 10.10.14.2 dev e101-001-0
10.10.14.0/24 dev e101-001-0  proto kernel  scope link  src 10.10.14.3
172.17.0.0/18 dev eth0  proto kernel  scope link  src 172.17.32.13
mcallisterjp commented 6 years ago

We wondered if this had something to do with the stage field in the ACL; what is this supposed to do? I had assumed that it referred to whether the ACL rules are checked for the packet (so stage=INGRESS would mean packets originated by the local device would not be checked)

Is there some undocumented dependency between base-acl/table/stage and base-acl/entry/match[IN_INTF] and the other interface/port match rules?

atanu-mandal commented 6 years ago

Can you please try with base-acl/table/stage = 2 (that would set the stage as EGRESS).

mcallisterjp commented 6 years ago

Atanu,

I can try this out, but does that mean there is no way of specifying an entry with ingress and egress interface match rules?

atanu-mandal commented 6 years ago

Please let us know if you have tried the above. OUT_INTF/EGRESS and IN_INTF/INGRESS combination should be working. Let us know if you find any issues.

dimbleby commented 6 years ago

Hi,

Setting base-acl/table/stage = 2 doesn't seem to make this work anyway. Here's my configuration:

cps_get_oid.py base-acl/table
...
base-acl/table/npu-id-list = 0
base-acl/table/id = 5
base-acl/table/stage = 2
base-acl/table/priority = 5
base-acl/table/allowed-match-fields = 5,6,39,40
base-acl/table/name = foo
base-acl/table/size = 0

cps_get_oid.py base-acl/entry
...
base-acl/entry/table-name = foo
base-acl/entry/action/PACKET_ACTION_VALUE = 1
base-acl/entry/action/type = 3
base-acl/entry/match/OUT_INTF_VALUE = e101-002-0
base-acl/entry/match/type = 40
base-acl/entry/npu-id-list = 0
base-acl/entry/table-id = 5
base-acl/entry/id = 1
base-acl/entry/priority = 393215
base-acl/entry/name = fooey

ip addr show dev e101-002-0
12: e101-002-0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ec:f4:bb:fd:53:81 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.1/24 scope global e101-002-0
       valid_lft forever preferred_lft forever
    inet6 fe80::eef4:bbff:fefd:5381/64 scope link
       valid_lft forever preferred_lft forever

and here's a ping being allowed out over that interface:

ping -I e101-002-0 10.10.10.2 -c3
PING 10.10.10.2 (10.10.10.2) from 10.10.10.1 e101-002-0: 56(84) bytes of data.
64 bytes from 10.10.10.2: icmp_seq=1 ttl=64 time=0.852 ms
64 bytes from 10.10.10.2: icmp_seq=2 ttl=64 time=0.708 ms
64 bytes from 10.10.10.2: icmp_seq=3 ttl=64 time=0.697 ms

Please advise!

atanu-mandal commented 6 years ago

The configuration/interfaces mentioned above are quite different from the diagram depicted earlier, so I am not sure whether this was tested in the same context.

Please let us know the updated topology/interface config/ACL config you have tried recently.

dimbleby commented 6 years ago

Hi Atanu,

I was trying a simpler configuration in the expectation that it wouldn't make any difference - but if we should avoid pings generated from the local host then I'll go back to the slightly more complex setup per Pete's opening description.

However, that just gets us back to where we started: matching on OUT_INTF fails.

In #63 you said

OUT_INTF should work for both ingress and egress stages

Per the initial report, we have configured with 'ingress' stage and the match is not succeeding.

I have also retried with 'egress' stage - but now I am seeing that the ACE configuration is rejected by CPS:

Mar  8 14:03:11 OPX2 opx_nas_daemon[576]: [ev_log_t_ACL:Switch Id: 0], Failed to create config group on unit no[0] err_str Feature unavailable
Mar  8 14:03:11 OPX2 opx_nas_daemon[576]: [ev_log_t_ACL:Switch Id: 0], Failed to setup and create group config in BCM
Mar  8 14:03:11 OPX2 opx_nas_daemon[576]: [ev_log_t_ACL:Switch Id: 0], Failed to create group with config
Mar  8 14:03:11 OPX2 opx_nas_daemon[576]: [ev_log_t_ACL:Switch Id: 0], ACL Table Creation failed
Mar  8 14:03:11 OPX2 opx_nas_daemon[576]: [ev_log_t_ACL:Switch Id: 0], Table Creation failed for ACL Rule with priority 393215 in Table Id 0x7000000000002
Mar  8 14:03:11 OPX2 opx_nas_daemon[576]: [NDI:NDI-ACL], Create ACL Entry failed in SAI -2
Mar  8 14:03:11 OPX2 opx_nas_daemon[576]: [ACL:NAS-ACL], Err_code: 0x94008000, fn: virtual bool nas_acl_entry::push_create_obj_to_npu(npu_id_t, void*) (), NDI ACL Entry Create failed for NPU 0
Mar  8 14:03:11 OPX2 python[3617]: [DSAPI:COMMIT], Failed to commit request at 0 out of 1

I suppose the difference since last time is that I have upgraded some OPX packages?

Please advise.

dimbleby commented 6 years ago

You asked via email for some more details. I think you actually have most of this already, but here goes anyway:

2018-03-09 14:17:46,207 UTC INFO Executing CPS transaction: [{'operation': 'create', 'change': {'data': {'base-acl/table/stage': bytearray(b'\x01\x00\x00\x00'), 'base-acl/table/priority': bytearray(b'\x05\x00\x00\x00'), 'base-acl/table/allowed-match-fields': [bytearray(b"\'\x00\x00\x00"), bytearray(b'(\x00\x00\x00'), bytearray(b')\x00\x00\x00'), bytearray(b'\x05\x00\x00\x00'), bytearray(b'\x06\x00\x00\x00')], 'base-acl/table/name': bytearray(b'egress_tester\x00')}, 'key': '1.47.3080336.3080284.'}}]
2018-03-09 14:17:46,224 UTC INFO Executing CPS transaction: [{'operation': 'create', 'change': {'data': {'base-acl/entry/name': bytearray(b'test\x00'), 'base-acl/entry/match': {'0': {'base-acl/entry/match/OUT_INTF_VALUE': bytearray(b'e101-001-1\x00'), 'base-acl/entry/match/type': bytearray(b'(\x00\x00\x00')}}, 'base-acl/entry/action': {'0': {'base-acl/entry/action/PACKET_ACTION_VALUE': bytearray(b'\x01\x00\x00\x00'), 'base-acl/entry/action/type': bytearray(b'\x03\x00\x00\x00')}}, 'base-acl/entry/table-id': bytearray(b'\x02\x00\x00\x00\x00\x00\x00\x00'), 'base-acl/entry/priority': bytearray(b'\xff\xff\x05\x00')}, 'key': '1.47.3080337.3080194.3080195.'}}]
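(The bytearray values in these transactions are little-endian unsigned integers. As a sanity check — my sketch, not part of the original report — they can be decoded with the stdlib, and they agree with what cps_get_oid.py printed earlier: stage 1, match type 40, priority 393215:)

```python
import struct

def u32(b):
    """Decode a 4-byte little-endian CPS attribute value."""
    return struct.unpack("<I", bytes(b))[0]

print(u32(bytearray(b"\x01\x00\x00\x00")))  # base-acl/table/stage -> 1 (INGRESS)
print(u32(bytearray(b"(\x00\x00\x00")))     # match/type -> 40 (OUT_INTF)
print(u32(bytearray(b"\xff\xff\x05\x00")))  # entry priority -> 393215
```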
atanu-mandal commented 6 years ago

Thanks for providing the details, will get back to you on this shortly.

atanu-mandal commented 6 years ago

Can you please provide the exact commands you tried with cps_set_oid.py to set the EGRESS stage? The above command still shows the stage set as INGRESS (i.e. {'base-acl/table/stage': bytearray(b'\x01\x00\x00\x00')}).

To debug the issue further, we would want to see something similar to this (though I am passing the INGRESS stage here):

root@OPX:~# cps_set_oid.py -oper create base-acl/entry table-id=4 base-acl/entry/match/type=40 base-acl/entry/match,0,type=40 base-acl/entry/match,0,OUT_INTF_VALUE=e101-001-0 base-acl/entry/action,0,type=3 base-acl/entry/action,0,PACKET_ACTION_VALUE=1
Success
Key: 1.47.3080337.3080194.3080195.
base-acl/entry/action/PACKET_ACTION_VALUE = 1
base-acl/entry/action/type = 3
base-acl/entry/match/OUT_INTF_VALUE = e101-001-0
base-acl/entry/match/type = 40
base-acl/entry/table-id = 4
base-acl/entry/id = 1
cps/object-group/return-code = 0
base-acl/entry/match/type = 40

To check it is actually programmed into the NPU, pass the group# as (table-id# - 1):

root@OPX:~# opx-switch-shell "fp show group 3"
GID          3: gid=0x3, instance=0 mode=Single, stage=Ingress lookup=Enabled, ActionResId={3}, pbmp={0x00000000000000000000000000000000000001ffffffffffffffffffffffffff}
         qset={SrcIp, DstIp, InPort, DstPort, DstTrunk, Stage, StageIngress},
         selcodes[0]=
{
         FPF2=0
         FPF3=10
         Intraslice=Primary slice.
 {Stage->InPort->StageIngress->SrcIp->DstIp->DstPort->DstTrunk},

         group_priority= 101
         slice_primary =  {slice_number=3, Entry count=512(0x200), Entry free=511(0x1ff)},
         group_status={prio_min=0, prio_max=2147483647, entries_total=2560, entries_free=2559,
                       counters_total=2560, counters_free=2560, meters_total=4096, meters_free=4096}
EID 0x00000029: gid=0x3,
         slice=3, slice_idx=0, part =0 prio=0, flags=0x10602, Installed, Enabled
              tcam: color_indep=1,
 Stage
 StageIngress
 DstPort
    Offset0: 213 Width0: 16
    DATA=0x00000280
    MASK=0x0000ffff
 DstTrunk
    Offset0: 213 Width0: 16
    DATA=0x00000280
    MASK=0x0000ffff
         action={act=Drop, param0=0(0), param1=0(0), param2=0(0), param3=0(0)}
         policer=
         statistics=NULL

dimbleby commented 6 years ago

Hi Atanu,

Yes, we're still using ingress because - as earlier - you advised that this should work, while we can't even configure egress!

When we configure using egress, everything is the same - except that the stage becomes 2 when creating the table:

2018-03-13 10:48:45,184 UTC INFO Executing CPS transaction: [{'operation': 'create', 'change': {'data': {'base-acl/table/stage': bytearray(b'\x02\x00\x00\x00'), 'base-acl/table/priority': bytearray(b'\x05\x00\x00\x00'), 'base-acl/table/allowed-match-fields': [bytearray(b"\'\x00\x00\x00"), bytearray(b'(\x00\x00\x00'), bytearray(b')\x00\x00\x00'), bytearray(b'\x05\x00\x00\x00'), bytearray(b'\x06\x00\x00\x00')], 'base-acl/table/name': bytearray(b'egress_tester\x00')}, 'key': '1.47.3080336.3080284.'}}]

Here's the output of opx-switch-shell "fp show group 2" (our table-id was 3):

root@OPX2:~# opx-switch-shell "fp show group 2"
GID          2: gid=0x2, instance=0 mode=Single, stage=Ingress lookup=Enabled, ActionResId={2}, pbmp={0x00000000000000000000000000000000000001ffffffffffffffffffffffffff}
         qset={SrcIp, DstIp, InPort, DstPort, DstTrunk, Stage, StageIngress, SrcGport, _SvpValid},
         selcodes[0]=
{
         FPF1=1
         FPF2=0
         IngressEntitySelect=10
         DestinationEntitySelect=5
         Intraslice=Primary slice.
 {_SvpValid->Stage->InPort->StageIngress->SrcGport->SrcIp->DstIp->DstPort->DstTrunk},

         group_priority= 5
         slice_primary =  {slice_number=2, Entry count=512(0x200), Entry free=511(0x1ff)},
         group_status={prio_min=393215, prio_max=2147483647, entries_total=3072, entries_free=3071,
                       counters_total=3072, counters_free=3072, meters_total=4096, meters_free=4096}
EID 0x00000021: gid=0x2,
         slice=2, slice_idx=0, part =0 prio=0x5ffff, flags=0x10602, Installed, Enabled
              tcam: color_indep=1,
 Stage
 StageIngress
 DstPort
    Offset0: 241 Width0: 21
    DATA=0x00100080
    MASK=0x001cffff
 DstTrunk
    Offset0: 241 Width0: 21
    DATA=0x00100080
    MASK=0x001cffff
         action={act=Drop, param0=0(0), param1=0(0), param2=0(0), param3=0(0)}
         policer=
         statistics=NULL
SAI.0>
dimbleby commented 6 years ago

It seems as though what was stopping this working for us, at least on the egress stage, was the inclusion of additional allowed-match-fields on the ACL table.

Is this expected behaviour?

atanu-mandal commented 6 years ago

Hi David, that works. As we mentioned earlier, the steps below would handle the same.

e.g. root@OPX:~# python

>>> import nas_acl
>>> nas_acl.create_table('EGRESS', 101, ['OUT_INTF'])
17
>>> nas_acl.create_entry(17, 1, {'OUT_INTF': 'e101-001-0'}, {'PACKET_ACTION': 'DROP'})

dimbleby commented 6 years ago

Hi Atanu,

Yes, we have this working now.

But I'd still very much like to understand whether it is expected, and intended, that the additional allowed-match-types of 5 and 6 should prevent matching on outbound interface from working?

(This seems undesirable! So I'm hoping that it's not intended and that you'll want to fix it...)

Thanks

atanu-mandal commented 6 years ago

Hi David,

It shouldn't have any issue with additional allowed-match-types of 5, 6 for EGRESS stage. It is expected to work.

Thanks.

dimbleby commented 6 years ago

Great! So let's leave this open to cover this problem.

atanu-mandal commented 6 years ago

Are you saying other match types are not working with OUT_INTF?

Can you please try the steps below and verify? It should work with other filter types too (except the SRC_INTF type) in the Egress stage.

root@OPX:~# python
Python 2.7.9 (default, Jun 29 2016, 13:08:31)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import nas_acl
>>> nas_acl.create_table('EGRESS', 100, ['OUT_INTF', 'SRC_IP', 'DST_IP'])
4
>>> nas_acl.create_entry(4, 1, {'OUT_INTF': 'e101-001-0', 'SRC_IP': {'addr': '1.1.0.0', 'mask': '255.255.0.0'}, 'DST_IP': {'addr': '2.2.2.2'}}, {'PACKET_ACTION': 'DROP'})
1
>>> nas_acl.print_entry(4)
----------------------------------------

Key

table-id : 4
id : 1

Data

match/SRC_IP_VALUE/addr : 1.1.0.0
match/SRC_IP_VALUE/mask : 255.255.0.0
match/type : SRC_IP
match/OUT_INTF_VALUE : e101-001-0
match/type : OUT_INTF
match/DST_IP_VALUE/mask : 255.255.255.255
match/DST_IP_VALUE/addr : 2.2.2.2
match/type : DST_IP
action/PACKET_ACTION_VALUE : DROP
action/type : PACKET_ACTION
npu-id-list : [0]
priority : 1
----------------------------------------

root@OPX:~# cps_get_oid.py base-acl/table id=4

============base-acl/table==========

base-acl/table/npu-id-list = 0
base-acl/table/id = 4
base-acl/table/stage = 2
base-acl/table/priority = 100
base-acl/table/allowed-match-fields = 5,6,40
base-acl/table/size = 0

root@OPX:~# cps_get_oid.py base-acl/entry table-id=4

============base-acl/entry==========

base-acl/entry/priority = 1
base-acl/entry/match/SRC_IP_VALUE/addr = 01010000
base-acl/entry/match/SRC_IP_VALUE/mask = ffff0000
base-acl/entry/match/type = 5
base-acl/entry/match/OUT_INTF_VALUE = e101-001-0
base-acl/entry/match/type = 40
base-acl/entry/match/DST_IP_VALUE/mask = ffffffff
base-acl/entry/match/DST_IP_VALUE/addr = 02020202
base-acl/entry/match/type = 6
base-acl/entry/action/PACKET_ACTION_VALUE = 1
base-acl/entry/action/type = 3
base-acl/entry/npu-id-list = 0
base-acl/entry/table-id = 4
base-acl/entry/id = 1
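(The SRC_IP/DST_IP values in that dump come back as raw hex strings. A small stdlib-only sketch — my addition, not part of the original comment — to render them as dotted quads:)

```python
import socket

def hex_to_ip(h):
    """Convert a hex-encoded IPv4 value like '01010000' to dotted-quad."""
    return socket.inet_ntoa(bytes.fromhex(h))

print(hex_to_ip("01010000"))  # SRC_IP addr -> 1.1.0.0
print(hex_to_ip("ffff0000"))  # SRC_IP mask -> 255.255.0.0
print(hex_to_ip("02020202"))  # DST_IP addr -> 2.2.2.2
```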

Let me know how it goes. Thanks.

dimbleby commented 6 years ago

I am saying that the following configuration fails to block traffic:

>>> import nas_acl
>>> nas_acl.create_table('EGRESS', 100, ['OUT_INTF', 'SRC_IP', 'DST_IP'])
4
>>> nas_acl.create_entry(4, 1, {'OUT_INTF': 'e101-001-1'},  {'PACKET_ACTION': 'DROP'})
1

whereas (continuing the session) this successfully blocks traffic:

>>> nas_acl.delete_entry(4, 1)
>>> nas_acl.delete_table(4)
>>> nas_acl.create_table('EGRESS', 100, ['OUT_INTF'])
5
>>> nas_acl.create_entry(5, 1, {'OUT_INTF': 'e101-001-1'},  {'PACKET_ACTION': 'DROP'})
1

That is: the presence of the additional allowed-match-fields on the table breaks matching on outbound interface.

atanu-mandal commented 6 years ago

I hope you have checked that the NPU is configured properly with all the filters, i.e. opx-switch-shell "fp show group " — please provide us the output if possible. We would want to check the behavior with the same topology and update. Also, can you please try removing SRC_IP from the allowed-match-fields (keeping only OUT_INTF and DST_IP)?

dimbleby commented 6 years ago

Have you tried reproducing this? I suppose it would be interesting if you can't reproduce it - what's different about our system and yours? - but more likely that you can reproduce it and then it'll be much more efficient for you to answer such questions yourself.

Meanwhile... I see this:

root@OPX:~# opx-switch-shell "fp show group 4"
GID          4: gid=0x4, instance=0 mode=Single, stage=Egress lookup=Enabled, ActionResId={4}, pbmp={0x00000000000000000000000000000000000001ffffffffffffffffffffffffff}
         qset={SrcIp, DstIp, StageEgress, OutPort},
         selcodes[0]=
{
         FPF3=KEY1
         Intraslice=Primary slice.
 {SrcIp->DstIp->StageEgress->OutPort},

         group_priority= 100
         slice_primary =  {slice_number=0, Entry count=256(0x100), Entry free=255(0xff)},
         group_status={prio_min=1, prio_max=2147483647, entries_total=1024, entries_free=1023,
                       counters_total=1024, counters_free=1024, meters_total=1024, meters_free=1024}
EID 0x00000020: gid=0x4,
         slice=0, slice_idx=0, part =0 prio=0x1, flags=0x10602, Installed, Enabled
              tcam: color_indep=1,
 StageEgress
 OutPort
    Offset0: 193 Width0: 7
    DATA=0x00000001
    MASK=0x0000007f
         action={act=Drop, param0=0(0), param1=0(0), param2=0(0), param3=0(0)}
         policer=
         statistics=NULL
SAI.0>
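(A TCAM qualifier like the OutPort entry above matches when (field & MASK) == DATA. A toy illustration of that mechanic — my sketch, not switch code — using the DATA=0x00000001, MASK=0x0000007f values shown:)

```python
def tcam_match(value, data, mask):
    """True if the masked field value equals the programmed DATA."""
    return (value & mask) == data

# OutPort qualifier from the 'fp show group 4' entry above.
DATA, MASK = 0x00000001, 0x0000007F
print(tcam_match(1, DATA, MASK))  # a field value of 1 matches
print(tcam_match(2, DATA, MASK))  # a field value of 2 does not
```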

And the following

>>> import nas_acl
>>> nas_acl.create_table('EGRESS', 100, ['OUT_INTF', 'DST_IP'])
7
>>> nas_acl.create_entry(7, 1, {'OUT_INTF': 'e101-001-1'},  {'PACKET_ACTION': 'DROP'})
1

... also fails to block traffic.

atanu-mandal commented 6 years ago

Thanks David for providing all the details. Of late we have seen an issue with the SRC_IP filter in the EGRESS stage. We will take this up and get back to you with further details.

atanu-mandal commented 6 years ago

Sorry for not getting back to you sooner. I see the same behavior as you have observed. We have found that an additional filter needs to be provided along with L3 match types, i.e. IP_TYPE.

With the configuration below, this should block traffic on the egress interface (as we have tested):

>>> import nas_acl
>>> nas_acl.create_table('EGRESS', 100, ['OUT_INTF', 'SRC_IP', 'DST_IP', 'IP_TYPE'])
4
>>> nas_acl.create_entry(4, 1, {'OUT_INTF': 'e101-001-0', 'IP_TYPE': 'IP'}, {'PACKET_ACTION': 'DROP'})

Please let me know how it goes.

dimbleby commented 6 years ago

Yes, I can reproduce this - thanks! Happy to close this issue out.

atanu-mandal commented 6 years ago

Closing this.