dberardo-com opened this issue 2 years ago
what should be used as a default var then?
can you provide your settings, and the error message you are getting?
wow, that was fast.
so i am using this config:

```yaml
- hosts: hostgroup
  become: true
  become_user: root
  vars:
    vpn_opportunistic: true
    vpn_connections:
      - name: something
        auto: start
        hosts:
          myhost1:
            hostname: xxx.xxx.xxx.xxx
          myhost2:
            hostname: yyy.yyy.yyy.yyy
  tasks:
    - include_role:
        name: vpn
```
tasks are under a local `roles/vpn/` folder, and i get an error at this stage of the mesh_conf.yml:
```yaml
- name: Set policies fact
  when: conn.policies is defined
  set_fact:
    policies: "{{ conn.policies | rejectattr('cidr', 'match', '^default$') | list }}"
```
which basically states that the `conn.policies` attribute is undefined (i can provide the exact log tomorrow if needed)
I am running ansible against Ubuntu 20.04 hosts using a Windows controller machine running Cygwin
@ueno looks like a bug - the policies line should not cause an error if `conn.policies` is not defined, e.g. maybe something like this:

```yaml
policies: "{{ conn.policies | rejectattr('cidr', 'match', '^default$') | list if conn.policies is defined else [] }}"
```

and then on line 28, something similar:

```jinja
{% set pol = conn.policies | d([]) | selectattr('cidr', 'match', '^default$') | map(attribute='policy') | join(',') %}
```

or something like that. We also need to add a test for this case - it looks like the mesh test ensures that `conn.policies` is always defined, but the docs say it doesn't have to be, so we should have a test for that case.
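A minimal test for that case might look roughly like the following (a sketch only - the file name and host names are made up, not from the repo; it just exercises an opportunistic connection with no `policies` defined, mirroring the playbook above):

```yaml
# hypothetical tests/tests_mesh_no_policies.yml - illustrative, not an actual test file
- hosts: all
  become: true
  vars:
    vpn_opportunistic: true
    vpn_connections:
      - name: no-policies-conn
        auto: start
        hosts:
          host1:
            hostname: 192.0.2.1
          host2:
            hostname: 192.0.2.2
  roles:
    - vpn
```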
@dberardo-com in the meantime, I think you can define the policies like this:

```yaml
vpn_connections:
  - name: something
    auto: start
    hosts:
      myhost1:
        hostname: xxx.xxx.xxx.xxx
      myhost2:
        hostname: yyy.yyy.yyy.yyy
    policies:
      - policy: private-or-clear
        cidr: default
```
not sure - never tried this, just reading the docs
@richm thank you for looking into it! let me try to come up with a PR.
thanks for the prompt response.
I have tried to use the private-or-clear policy as a workaround, as @richm suggested (that was also my initial guess), but ipsec is not able to start up and i get this error:
```
$ ipsec show
  File "/usr/lib/ipsec/show", line 52
    print "Need to find matching IPsec policy for %s/32 <=> %s/32" % (source, dest)
                                                                                  ^
SyntaxError: invalid syntax
```
Note: the IP that gets written in the ipsec.conf file ends with "/32"
Is this problem related, or should i open a new issue? I also cannot see any new virtual network interface being created on the machine, and the ansible scripts have already run to the end (only the final ping fails), so i wonder whether solving the policy issue will fix all the rest, or whether there is something worse going on
Can you try an explicit `cidr` value as in the mesh test? https://github.com/linux-system-roles/vpn/blob/master/tests/tests_mesh_cert.yml
also - I don't know if the role is supported against an Ubuntu managed host - we only test with Red Hat/CentOS/Fedora, and the role was developed with those platforms in mind. The error from `ipsec show` that you have posted would seem to suggest that the role is writing the configuration in a format that might not be supported by libreswan on Ubuntu.
i have switched to 2 CentOS 7 machines now and i have removed the policies block from the inventory, and it seems to have run properly. if i run `ipsec status` i receive a long output and no error, but i have some questions now (below). Maybe i am missing some information, but i have read through the whole README file.

here is the status log of `ipsec verify`:
```
Version check and ipsec on-path                      [OK]
Libreswan 3.25 (netkey) on 3.10.0-1160.el7.x86_64
Checking for IPsec support in kernel                 [OK]
 NETKEY: Testing XFRM related proc values
         ICMP default/send_redirects                 [NOT DISABLED]

  Disable /proc/sys/net/ipv4/conf/*/send_redirects or NETKEY will act on or cause sending of bogus ICMP redirects!

         ICMP default/accept_redirects               [NOT DISABLED]

  Disable /proc/sys/net/ipv4/conf/*/accept_redirects or NETKEY will act on or cause sending of bogus ICMP redirects!

         XFRM larval drop                            [OK]
Pluto ipsec.conf syntax                              [OK]
Two or more interfaces found, checking IP forwarding [FAILED]
Checking rp_filter                                   [ENABLED]
 /proc/sys/net/ipv4/conf/all/rp_filter               [ENABLED]
 /proc/sys/net/ipv4/conf/default/rp_filter           [ENABLED]
 /proc/sys/net/ipv4/conf/eth0/rp_filter              [ENABLED]
 /proc/sys/net/ipv4/conf/ip_vti0/rp_filter           [ENABLED]
  rp_filter is not fully aware of IPsec and should be disabled
Checking that pluto is running                       [OK]
 Pluto listening for IKE on udp 500                  [OK]
 Pluto listening for IKE/NAT-T on udp 4500           [OK]
Pluto ipsec.secret syntax                            [OK]
Checking 'ip' command                                [OK]
Checking 'iptables' command                          [OK]
Checking 'prelink' command does not interfere with FIPS [OK]
Checking for obsolete ipsec.conf options             [OK]
```
I use `ipsec whack --traffic` - you can see from the `inBytes` and `outBytes` fields if encrypted traffic is flowing.
> i have switched to 2 centos7 machines now and i have removed the policies block from the inventory and it seems to have run properly. if i run `ipsec status` i receive a long output and no error, but my questions now are:
>
> * is there a simple way to manually test if the ipsec VPN is actually working fine? e.g. setting up a firewall to hide ports from public traffic and only letting the VPN-connected machines in via UDP port 500
In addition to the information listed here - https://github.com/linux-system-roles/vpn#verifying-a-successful-startup - I have used `ipsec whack --traffic` to see that the `inBytes` and `outBytes` values are increasing over time (assuming there is some sort of network traffic between the machines, e.g. a `ping` should do it).
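For reference, a rough sequence for that check (a sketch; recent libreswan versions spell the flag `--trafficstatus`, and the exact output format varies by version):

```console
$ ping -c 4 <other-node>        # generate some traffic across the tunnel
$ ipsec whack --trafficstatus   # inBytes/outBytes should grow between runs
```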
Maybe @ueno could suggest some other method of verifying the connection (which might be good to add to https://github.com/linux-system-roles/vpn#verifying-a-successful-startup)
> * what are the private IP addresses that are associated with the 2 nodes in the VPN cluster? are they static? is there a way to set them to a specific value in the ansible script?
By default, the vpn role uses whatever hostnames you use in your ansible inventory and/or the vars you pass into the role, and whatever IP addresses those resolve to according to your DNS. See https://github.com/linux-system-roles/vpn#vpn-system-role "Basic Usage" - basically, use the `hosts.$name_of_ansible_host.hostname` field to specify a different hostname/IP address than the one you are using with ansible. See https://github.com/linux-system-roles/vpn#examples
The VPN role (and the underlying libreswan) do not create IP addresses.
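For example, a sketch of that override (the inventory names and addresses here are placeholders, not from this thread):

```yaml
vpn_connections:
  - hosts:
      managed-node-a:
        hostname: 203.0.113.10  # tunnel endpoint, instead of what "managed-node-a" resolves to
      managed-node-b:
        hostname: 203.0.113.20
```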
> * Is the ansible script creating a mesh of VPN connected nodes?
It can - see https://github.com/linux-system-roles/vpn#opportunistic-mesh-vpn-configuration
Note that there are two different types of "mesh", depending on how you define the term.
A "mesh" can mean that you explicitly define, in your ansible inventory, all of the hosts and the relationships between them, by specifying each tunnel with each pair of hosts. This would be what we call a "host-to-host" mesh in an "N-to-N" configuration. Something like https://github.com/linux-system-roles/vpn#host-to-host-multiple-vpn-tunnels-with-multiple-nics
A "mesh" can also mean that you specify that all hosts matching a CIDR will use VPN between themselves. In this case, you do not have to explicitly specify each pair of hosts which are connected, but you still have to specify all of the hosts on which you want to enable and configure vpn.
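A sketch of that second kind of mesh, loosely following the README's opportunistic example (the CIDR and policy values are placeholders):

```yaml
vpn_connections:
  - opportunistic: true
    auto: start
    policies:
      - policy: private           # hosts in this subnet must use IPsec
        cidr: 192.0.2.0/24
      - policy: private-or-clear  # everything else: encrypt when possible
        cidr: default
```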
> and shouldn't there be a virtual network interface per each node pair on every machine? this way the cluster can also work when some node fails (high availability)
Not sure I understand the question.
Hi, @dberardo-com and I are trying to set up an IPsec tunnel between two servers hosted on Contabo, and we have just one ethernet interface with a public IP address. The servers don't have a second ethernet interface with a private IP address, so i think we need to set up a vti interface in order to get it working, right?
> Servers don't have a second ethernet interface with private ip address
Sometimes they do, but I'm not sure how this is related to the problem.
> so i think we need to setup vti interface in order to get it working, right?
Can't you just set up a VPN tunnel between the two external, public IP addresses? That is the typical use case for setting up a VPN between two machines connected by the public internet. If not, then I guess I don't understand the issue.
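A sketch of that simplest case, following the README's basic usage (the inventory names are placeholders; leaving the host values empty means the role uses the inventory names/addresses themselves as the tunnel endpoints):

```yaml
- hosts: server1.example.com,server2.example.com
  become: true
  vars:
    vpn_connections:
      - hosts:
          server1.example.com:
          server2.example.com:
  roles:
    - vpn
```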
Usually IPsec needs a private subnet for phase 2, so i think the public IP address is used just for phase 1. is that right? How can i set up a tunnel without the mandatory phase 2 information? that's why i thought about a vti interface and a routed IPsec connection
> Usually IPsec needs a private subnet for phase 2, so i think the public IP address is used just for phase 1. is that right? How can i set up a tunnel without the mandatory phase 2 information? that's why i thought about a vti interface and a routed IPsec connection
Not sure. Note that the vpn system role uses libreswan as its underlying implementation, so if there is a way to do what you want according to the libreswan docs https://libreswan.org/wiki/Main_Page we can probably figure out how to do that with the vpn role
you might find this interesting - https://www.redhat.com/en/blog/automating-host-host-vpn-tunnels-rhel-system-roles
Hi @richm we solved it and manually set up a routed IPsec tunnel with a vti interface. @dberardo-com is working on configuring the tunnel using ansible
Can you share what you did? This sounds like something that will be useful for others setting up VPNs.
Not defining this variable causes the mesh_conf script to break.