inverse-inc / packetfence

PacketFence is a fully supported, trusted, Free and Open Source network access control (NAC) solution. Boasting an impressive feature set including a captive portal for registration and remediation, centralized wired and wireless management, powerful BYOD management options, 802.1X support, and layer-2 isolation of problematic devices, PacketFence can be used to effectively secure small to very large heterogeneous networks.
https://packetfence.org
GNU General Public License v2.0

PF 8.3.0+OpenVAS. Scanning won't start #3977

Open becchett opened 5 years ago

becchett commented 5 years ago

Dear Inverse, let me explain this case because I think it's a bug, so if you have some time please read this post.

Server: CentOS Linux release 7.6.1810 (Core), kernel 3.10.0-957.1.3.el7.x86_64 (host pfsrv), 8 GB RAM, 4 cores, one Ethernet interface (management) plus some VLANs (registration, isolation, inline). PacketFence 8.3.0 plus pf-addon.pl.

PF is configured to manage some VLANs and some networks (inline mode). One of these is a wired network with 802.1X authentication against RADIUS; it works fine, but I need to add the compliance feature, so I installed a new server with OpenVAS. I chose it because it is an open-source solution and because I read that it works again since 8.3.0. In particular, I need to scan devices after they log into my network, and in case of a violation the device must be moved to the isolation VLAN. PF is configured with DHCP and DNS servers and is the default gateway for my networks.

These are my most important config files.

PF.CONF:

...omissis...

[interface eth0.25]
enforcement=inlinel2
ip=10.25.0.1
type=internal
mask=255.255.0.0

[interface eth0.26]
enforcement=inlinel2
ip=10.26.0.1
type=internal
mask=255.255.0.0

[interface eth0.27]
enforcement=inlinel2
ip=10.27.0.1
type=internal
mask=255.255.0.0

[interface eth0.28]
enforcement=vlan
ip=10.28.0.1
type=internal
mask=255.255.0.0

[interface eth0.29]
enforcement=vlan
ip=10.29.0.1
type=internal
mask=255.255.0.0

[interface eth0]
ip=10.0.0.34
type=management
mask=255.255.0.0

....

AUTHENTICATION.CONF:

.....

[RADIUS-AAI]
realms=
options= <<EOT
type = auth+acct
response_windows = 8
status_check = status-server
revive_interval = 120
check_interval = 30
num_answer_to_alive = 3
src_ipaddr = $src_ip
EOT
monitor=0
set_access_level_action=
timeout=1
secret=XXXXXX
port=1812
description=RADIUS-AAI
host=10.0.100.26
type=RADIUS

[RADIUS-AAI rule catchall]
action0=set_role=default
match=all
class=authentication
action1=set_access_duration=12h
description=catchall

.....

PROFILE.CONF:

[PF-CABLED]
locale=
device_registration=default
filter=vlan:25
dot1x_recompute_role_from_portal=0
description=PF-CABLED
autoregister=enabled
scans=OpenVAS
sources=RADIUS-AAI

SWITCHES.CONF:

....

[10.0.3.33]
deauthMethod=RADIUS
description=privsw-3-33
type=HP::Procurve_2500
radiusSecret=XXXXXXX
inlineVlan=25,26,27
isolationVlan=28
registrationVlan=29

.....


SCAN.CONF:

[OpenVAS]
openvas_alertid=fe87d0c2-eeef-4d49-a220-e85bb7b002f5
openvas_configid=8715c877-47a0-438d-98a3-27c7a6ab2196
ip=10.0.0.69
openvas_reportformatid=c1645568-627a-11e3-a660-406186ea4fc5
duration=
categories=
port=9390
registration=0
username=admin
post_registration=1
password=XXXXX
pre_registration=0
oses=
type=openvas

After all my config files, please look at this test between the two servers:

[root@pfsrv conf]# omp -u admin -p 9390 -X "" -h 10.0.0.69
Enter password:
7.0

It seems to be OK!
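As a side note on what that test exercises: omp speaks the OpenVAS Management Protocol (XML over TLS, port 9390), and the bare "7.0" above is the protocol version pulled out of a get_version_response envelope. A minimal Python sketch of the request/response shapes (the response string here is a typical OMP 7.0 envelope reconstructed for illustration, not captured from this server):

```python
import xml.etree.ElementTree as ET

# The version-check request the omp CLI sends over the TLS socket.
request = ET.tostring(ET.Element("get_version"), encoding="unicode")

# Typical shape of the manager's reply (illustrative, not captured live).
response = ('<get_version_response status="200" status_text="OK">'
            '<version>7.0</version></get_version_response>')

# The CLI prints only the <version> text, which is the "7.0" seen above.
version = ET.fromstring(response).findtext("version")
print(version)  # 7.0
```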

This is packetfence.log:

Jan 31 17:07:38 pfsrv packetfence_httpd.aaa: httpd.aaa(8222) INFO: [mac:a8:60:b6:0c:bb:ce] Using sources RADIUS-AAI for matching (pf::authentication::match 2)
Jan 31 17:07:38 pfsrv packetfence_httpd.aaa: httpd.aaa(8222) INFO: [mac:a8:60:b6:0c:bb:ce] Matched rule (catchall) in source RADIUS-AAI, returning actions. (pf::Authentication::Source::match_rule)
Jan 31 17:07:38 pfsrv packetfence_httpd.aaa: httpd.aaa(8222) INFO: [mac:a8:60:b6:0c:bb:ce] Matched rule (catchall) in source RADIUS-AAI, returning actions. (pf::Authentication::Source::match)
Jan 31 17:07:38 pfsrv packetfence_httpd.aaa: httpd.aaa(8222) INFO: [mac:a8:60:b6:0c:bb:ce] Role has already been computed and we don't want to recompute it. Getting role from node_info (pf::role::getRegisteredRole)
Jan 31 17:07:38 pfsrv packetfence_httpd.aaa: httpd.aaa(8222) INFO: [mac:a8:60:b6:0c:bb:ce] Username was defined "becchett@pg.infn.it" - returning role 'default' (pf::role::getRegisteredRole)
Jan 31 17:07:38 pfsrv packetfence_httpd.aaa: httpd.aaa(8222) INFO: [mac:a8:60:b6:0c:bb:ce] PID: "becchett@pg.infn.it", Status: reg Returned VLAN: (undefined), Role: default (pf::role::fetchRoleForNode)
Jan 31 17:07:38 pfsrv packetfence_httpd.aaa: httpd.aaa(8222) WARN: [mac:a8:60:b6:0c:bb:ce] No parameter defaultVlan found in conf/switches.conf for the switch 10.0.3.33 (pf::Switch::getVlanByName)
Jan 31 17:07:38 pfsrv packetfence_httpd.aaa: httpd.aaa(8222) INFO: [mac:a8:60:b6:0c:bb:ce] Request to /api/v1//ipset/unmark_mac?local=0 is unauthorized, will perform a login (pf::api::unifiedapiclient::call)
Jan 31 17:07:38 pfsrv pfipset[7584]: t=2019-01-31T17:07:38+0100 lvl=info msg="Syncing to peers" pid=7584 request-uuid=5483e080-2572-11e9-b520-001a4a16017f
Jan 31 17:07:38 pfsrv packetfence_httpd.aaa: httpd.aaa(8222) INFO: [mac:a8:60:b6:0c:bb:ce] violation 1300003 force-closed for a8:60:b6:0c:bb:ce (pf::violation::violation_force_close)
Jan 31 17:07:38 pfsrv packetfence_httpd.aaa: httpd.aaa(8222) INFO: [mac:a8:60:b6:0c:bb:ce] Instantiate profile PF-CABLED (pf::Connection::ProfileFactory::_from_profile)
Jan 31 17:07:38 pfsrv pfdhcp[7589]: t=2019-01-31T17:07:38+0100 lvl=info msg="DHCPDISCOVER from a8:60:b6:0c:bb:ce (becchetti-nb)" pid=7589 mac=a8:60:b6:0c:bb:ce
Jan 31 17:07:38 pfsrv pfdhcp[7589]: t=2019-01-31T17:07:38+0100 lvl=info msg="DHCPOFFER on 10.25.167.225 to a8:60:b6:0c:bb:ce (becchetti-nb)" pid=7589 mac=a8:60:b6:0c:bb:ce
Jan 31 17:07:39 pfsrv pfdhcp[7589]: t=2019-01-31T17:07:39+0100 lvl=info msg="DHCPREQUEST for 10.25.167.225 from a8:60:b6:0c:bb:ce (becchetti-nb)" pid=7589 mac=a8:60:b6:0c:bb:ce
Jan 31 17:07:39 pfsrv pfdhcp[7589]: t=2019-01-31T17:07:39+0100 lvl=info msg="DHCPACK on 10.25.167.225 to a8:60:b6:0c:bb:ce (becchetti-nb)" pid=7589 mac=a8:60:b6:0c:bb:ce
Jan 31 17:07:39 pfsrv pfqueue: pfqueue(19822) INFO: [mac:a8:60:b6:0c:bb:ce] Instantiate profile default (pf::Connection::ProfileFactory::_from_profile)
Jan 31 17:07:39 pfsrv pfqueue: pfqueue(18763) INFO: [mac:unknown] stated changed, adapting firewall rules for proper enforcement (pf::inline::performInlineEnforcement)
Jan 31 17:07:39 pfsrv pfqueue: pfqueue(19822) WARN: [mac:a8:60:b6:0c:bb:ce] Use of uninitialized value $added in numeric eq (==) at /usr/local/pf/lib/pf/api.pm line 994. (pf::api::trigger_scan)
Jan 31 17:07:39 pfsrv pfqueue: Use of uninitialized value $added in numeric eq (==) at /usr/local/pf/lib/pf/api.pm line 994.
Jan 31 17:07:39 pfsrv pfqueue: pfqueue(18763) INFO: [mac:unknown] Request to /api/v1//ipset/unmark_mac?local=0 is unauthorized, will perform a login (pf::api::unifiedapiclient::call)
Jan 31 17:07:39 pfsrv pfipset[7584]: t=2019-01-31T17:07:39+0100 lvl=info msg="Syncing to peers" pid=7584 request-uuid=5558d93f-2572-11e9-b520-001a4a16017f
Jan 31 17:07:40 pfsrv pfipset[7584]: t=2019-01-31T17:07:40+0100 lvl=info msg="Syncing to peers" pid=7584 request-uuid=555ec8ce-2572-11e9-b520-001a4a16017f
Jan 31 17:07:40 pfsrv pfipset[7584]: t=2019-01-31T17:07:40+0100 lvl=info msg="Added 10.25.167.225 a8:60:b6:0c:bb:ce to pfsession_Reg_10.25.0.0" pid=7584 request-uuid=555ec8ce-2572-11e9-b520-001a4a16017f
Jan 31 17:07:40 pfsrv pfipset[7584]: t=2019-01-31T17:07:40+0100 lvl=info msg="Added 10.25.167.225 to PF-iL2_ID1_10.25.0.0" pid=7584 request-uuid=555ec8ce-2572-11e9-b520-001a4a16017f

The problem is that the scan doesn't start!

If I look inside the source, lib/pf/api.pm:

sub trigger_scan :Public :Fork :AllowedAsAction($ip, mac, $mac, net_type, TYPE) {
    my ($class, %postdata) = @_;
    my @require = qw(ip mac net_type);
    my @found = grep {exists $postdata{$_}} @require;
    return unless pf::util::validate_argv(\@require, \@found);

return unless scalar keys %pf::config::ConfigScan;
my $logger = pf::log::get_logger();
my $added;
# post_registration (production vlan)
# We sleep until (we hope) the device has had time issue an ACK.
$logger->info("trigger_run_scan 1");
if (pf::util::is_prod_interface($postdata{'net_type'})) {
    my $profile = pf::Connection::ProfileFactory->instantiate($postdata{'mac'});
    my $scanner = $profile->findScan($postdata{'mac'});
    if (defined($scanner) && pf::util::isenabled($scanner->{'_post_registration'})) {
            $logger->info("trigger_run_scan 2");
        $added = pf::violation::violation_add( $postdata{'mac'}, $pf::constants::scan::POST_SCAN_VID );
    }
    $logger->info("trigger_run_scan 3");
    return if ($added == 0 || $added == -1);
    sleep $pf::config::Config{'fencing'}{'wait_for_redirect'};
    pf::scan::run_scan($postdata{'ip'}, $postdata{'mac'}) if ($added ne $pf::constants::scan::POST_SCAN_VID);
    $logger->info("trigger_run_scan 4");
}
else {
    my $profile = pf::Connection::ProfileFactory->instantiate($postdata{'mac'});
    my $scanner = $profile->findScan($postdata{'mac'});
    # pre_registration
    $logger->info("trigger_run_scan 5");
    if (defined($scanner) && pf::util::isenabled($scanner->{'_pre_registration'})) {
        $logger->info("trigger_run_scan 57");
        $added = pf::violation::violation_add( $postdata{'mac'}, $pf::constants::scan::PRE_SCAN_VID );

    }
    $logger->info("trigger_run_scan 55");
    return if ($added == 0 || $added == -1);
    $logger->info("trigger_run_scan 56");
    sleep $pf::config::Config{'fencing'}{'wait_for_redirect'};
    $logger->info("trigger_run_scan 6");
    pf::scan::run_scan($postdata{'ip'}, $postdata{'mac'}) if  ($added ne $pf::constants::scan::PRE_SCAN_VID && $added ne $pf::constants::scan::SCAN_VID);
    $logger->info("trigger_run_scan 7");
}
 $logger->info("trigger_run_scan 8");
return;

}

After adding some extra logging messages, I saw only "trigger_run_scan" 1, 5 and 55.
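That trace pattern is consistent with the code above: for an inline endpoint, is_prod_interface() is false, so execution takes the else branch (point 5); with pre_registration=0 in SCAN.CONF, $added stays undef; and at point 55 the Perl comparison `$added == 0` numifies undef to 0, which both emits the "uninitialized value" warning seen in the log and makes the function return before run_scan. A Python sketch of that branching (trace labels taken from the extra logging above; this is a model of the non-production path, not PacketFence code, and the VID constant is a hypothetical stand-in):

```python
PRE_SCAN_VID = 1200001  # hypothetical stand-in for $pf::constants::scan::PRE_SCAN_VID

def trigger_scan_flow(is_prod_interface, pre_registration_enabled):
    """Model pf::api::trigger_scan's non-production branch, returning the
    trace points that would be logged."""
    trace = ["1"]
    added = None  # Perl: my $added;  stays undef unless violation_add runs
    if not is_prod_interface:
        trace.append("5")
        if pre_registration_enabled:
            trace.append("57")
            added = PRE_SCAN_VID  # return value of violation_add(...)
        trace.append("55")
        # Perl: return if ($added == 0 || $added == -1);
        # With $added undef, it numifies to 0: warning + early return,
        # so run_scan is never reached.
        if added is None or added in (0, -1):
            return trace
        trace.append("56")
        trace.append("6")  # then sleep + run_scan (guarded by the VID check)
        trace.append("7")
    trace.append("8")
    return trace

# Inline L2 endpoint with pre_registration=0, as in this report:
print(trigger_scan_flow(False, False))  # ['1', '5', '55']
```

This matches the observed trace and the warning at api.pm line 994.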

Any ideas? Thanks a lot. Best regards, Enrico

julsemaan commented 5 years ago

Is the endpoint in an inline VLAN ?

becchett commented 5 years ago

On 01/02/2019 15:12, Julien Semaan wrote:

Is the endpoint in an inline VLAN ?


[root@pfsrv pf]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP mode DEFAULT group default qlen 1000
    link/ether 00:1a:4a:16:01:7f brd ff:ff:ff:ff:ff:ff
3: eth0.101@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP mode DEFAULT group default qlen 1000
    link/ether 00:1a:4a:16:01:7f brd ff:ff:ff:ff:ff:ff
4: eth0.25@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP mode DEFAULT group default qlen 1000
    link/ether 00:1a:4a:16:01:7f brd ff:ff:ff:ff:ff:ff
5: eth0.26@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP mode DEFAULT group default qlen 1000
    link/ether 00:1a:4a:16:01:7f brd ff:ff:ff:ff:ff:ff
6: eth0.27@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP mode DEFAULT group default qlen 1000
    link/ether 00:1a:4a:16:01:7f brd ff:ff:ff:ff:ff:ff
7: eth0.28@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:1a:4a:16:01:7f brd ff:ff:ff:ff:ff:ff
8: eth0.29@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:1a:4a:16:01:7f brd ff:ff:ff:ff:ff:ff
9: eth0.30@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:1a:4a:16:01:7f brd ff:ff:ff:ff:ff:ff

[root@pfsrv pf]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP group default qlen 1000
    link/ether 00:1a:4a:16:01:7f brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.34/16 brd 10.0.255.255 scope global eth0
       valid_lft forever preferred_lft forever
3: eth0.101@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP group default qlen 1000
    link/ether 00:1a:4a:16:01:7f brd ff:ff:ff:ff:ff:ff
    inet 1.1.1.1/24 brd 193.205.222.255 scope global eth0.101
       valid_lft forever preferred_lft forever
4: eth0.25@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP group default qlen 1000
    link/ether 00:1a:4a:16:01:7f brd ff:ff:ff:ff:ff:ff
    inet 10.25.0.1/16 brd 10.25.255.255 scope global eth0.25
       valid_lft forever preferred_lft forever
5: eth0.26@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP group default qlen 1000
    link/ether 00:1a:4a:16:01:7f brd ff:ff:ff:ff:ff:ff
    inet 10.26.0.1/16 brd 10.26.255.255 scope global eth0.26
       valid_lft forever preferred_lft forever
6: eth0.27@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP group default qlen 1000
    link/ether 00:1a:4a:16:01:7f brd ff:ff:ff:ff:ff:ff
    inet 10.27.0.1/16 brd 10.27.255.255 scope global eth0.27
       valid_lft forever preferred_lft forever
7: eth0.28@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:1a:4a:16:01:7f brd ff:ff:ff:ff:ff:ff
    inet 10.28.0.1/16 brd 10.28.255.255 scope global eth0.28
       valid_lft forever preferred_lft forever
8: eth0.29@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:1a:4a:16:01:7f brd ff:ff:ff:ff:ff:ff
    inet 10.29.0.1/16 brd 10.29.255.255 scope global eth0.29
       valid_lft forever preferred_lft forever
9: eth0.30@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:1a:4a:16:01:7f brd ff:ff:ff:ff:ff:ff
    inet 10.30.0.1/16 brd 10.30.255.255 scope global eth0.30
       valid_lft forever preferred_lft forever

Thanks !!!

--
Enrico Becchetti
Servizio di Calcolo e Reti
Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
Via Pascoli, c/o Dipartimento di Fisica, 06123 Perugia (ITALY)
Phone: +39 075 5852777
Mail: Enrico.Becchetti@pg.infn.it

becchett commented 5 years ago

My NETWORKS.CONF:

[10.25.0.0]
dns=193.205.222.2
split_network=disabled
dhcp_start=10.25.0.10
gateway=10.25.0.1
domain-name=wired.local
nat_enabled=enabled
named=enabled
dhcp_max_lease_time=31536000
fake_mac_enabled=disabled
dhcpd=enabled
dhcp_end=10.25.255.246
type=inlinel2
netmask=255.255.0.0
dhcp_default_lease_time=31536000

[10.26.0.0]
dns=193.205.222.2
split_network=disabled
dhcp_start=10.26.0.10
gateway=10.26.0.1
domain-name=infn-dot1x.local
nat_enabled=enabled
named=enabled
dhcp_max_lease_time=31536000
fake_mac_enabled=disabled
dhcpd=enabled
dhcp_end=10.26.255.246
type=inlinel2
netmask=255.255.0.0
dhcp_default_lease_time=31536000

[10.27.0.0]
dns=193.205.222.2
split_network=disabled
dhcp_start=10.27.0.10
gateway=10.27.0.1
domain-name=infn-web.local
nat_enabled=enabled
named=enabled
dhcp_max_lease_time=31536000
fake_mac_enabled=disabled
dhcpd=enabled
dhcp_end=10.27.255.246
type=inlinel2
netmask=255.255.0.0
dhcp_default_lease_time=31536000

[10.28.0.0]
dns=10.28.0.1
split_network=disabled
dhcp_start=10.28.0.10
gateway=10.28.0.1
domain-name=isolation.local
nat_enabled=disabled
named=enabled
dhcp_max_lease_time=31536000
fake_mac_enabled=disabled
dhcpd=enabled
dhcp_end=10.28.255.246
type=vlan-isolation
netmask=255.255.0.0
dhcp_default_lease_time=30

[10.29.0.0]
dns=10.29.0.1
split_network=disabled
dhcp_start=10.29.0.10
gateway=10.29.0.1
domain-name=registration.local
nat_enabled=disabled
named=enabled
dhcp_max_lease_time=30
fake_mac_enabled=disabled
dhcpd=enabled
dhcp_end=10.29.255.246
type=vlan-registration
netmask=255.255.0.0
dhcp_default_lease_time=30

Do you need any other information? Thanks a lot. Best regards, Enrico

julsemaan commented 5 years ago

Is the endpoint in an inline VLAN ?

becchett commented 5 years ago

@julsemaan, I don't understand your question. Perhaps "endpoint" means gateway? In that case, I confirm that the PF server is the gateway for the VLANs and is configured in inline mode. I apologize for the misunderstanding. Best regards, Enrico

julsemaan commented 5 years ago

and is the endpoint you want to scan in your inline network ?

becchett commented 5 years ago

Yes, the endpoint is the user's device connected to the inline network. OpenVAS is a separate server, but it is in the same management network as PF and has access to all the VLANs to reach the inline user's device. Thanks, Enrico

julsemaan commented 5 years ago

So I took a look at the code and discussed with @fdurand: for inline VLANs, scanning on post-registration is not possible. This is because the device stays in the same VLAN, and that VLAN is used for both registered and unregistered devices.

Although it would be possible to add support for this, it is not on our priority roadmap, especially given the large amount of work our team is doing to deliver version 9.0.

In the event this is a show-stopper for you and you absolutely need the post-reg scan to work, you could get in touch with Inverse to sponsor the development of this.

In the meantime, I've added a note on the post-reg scan field to specify that this isn't possible in inline.

julsemaan commented 5 years ago

I'll leave this issue open to track the fact that post-reg scans don't work in inline.

becchett commented 5 years ago

Dear Julien, as I wrote above, I'm looking at a scanner, OpenVAS, because I need to know whether devices connected to my networks are dangerous (high risk) or not. In case of a positive result, PF must notify me by email and, if possible, move the client to the isolation VLAN.

So after changing lib/pf/api.pm and lib/pf/util.pm:

sub is_prod_interface { return $TRUE; }

The purpose of these changes is to always activate the scanner.

Looking inside packetfence.log, I saw that the scanner starts and finishes after a few seconds, but I didn't see any result inside the PF dashboard. Is that normal? I can only see the scan's start date. It would be enough for me to see the result and get a notification by mail.

So if I understand your answer about the inline behaviour, it is not possible to use the scanner post-registration, so I need to change my project and review the guide. But keep in mind that in the PF config it is not possible to choose pre- or post-registration scanning per profile, only in the compliance scanner definition. I think this setup is a kind of limitation, because all networks will have the same scanner behaviour.

Thanks again for your free support. Best regards, Enrico