tonarino / innernet

A private network system that uses WireGuard under the hood.
https://blog.tonari.no/introducing-innernet
MIT License

Allow for site to site configuration #22

Open schemen opened 3 years ago

schemen commented 3 years ago

As the title says.

I would love to create a site-to-site network, attaching an entire site to the created network. The use case would be, for example, a network containing devices that cannot run WireGuard (like an old NAS) whose resources you still want available. Simply create a route on a router and/or server pointing to the VPN gateway, and the site is connected.
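Concretely, that would be a single static route on the site's router, something like the following (addresses are illustrative: 10.42.0.0/16 standing in for the innernet network, 192.168.1.10 for the LAN address of the machine running WireGuard):

# On the site router: send innernet-bound traffic to the local VPN gateway
$ ip route add 10.42.0.0/16 via 192.168.1.10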

This would also mean that the peers allowed to access the site's network should receive that routing information.

Eeems commented 3 years ago

I would also like this. I was asking about it in the Matrix/Discord chat room with no response so far.

vikanezrimaya commented 3 years ago

This would probably require recording subnet route information together with peers, and setting up routes to respective peers' subnets whenever the interface is brought up or more peers are added.

mcginty commented 3 years ago

Related to https://github.com/tonarino/innernet/issues/30 since it's a similar functionality request.

I think this would be a great feature, but the exact way to implement it doesn't yet feel 100% clear. Will need a bit more implementation discussion.

kbknapp commented 3 years ago

We have this exact use case at work. A(n extremely) simplified version of what we have/need is:

[Diagram: innernet_issue_22]

This is super simple to accomplish if all of the "external clients" can be placed on a private LAN that falls somewhere within the innernet-server's /16 subnet. All that has to be done is add an allowed-ips entry covering the whole "external clients" subnet to the service's innernet interface (and the gateway must have IP forwarding enabled).

Enabling IP forwarding is outside the purview of innernet, so I wouldn't suggest that be taken up here. However, a flag/command to add a subnet to a peer is totally possible (and probably not too difficult to add either).
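(For reference, on a Linux gateway the forwarding part is just the standard sysctl toggle; the sysctl.d file name here is arbitrary:)

# Enable IPv4 forwarding immediately
$ sysctl -w net.ipv4.ip_forward=1
# Persist it across reboots
$ echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ip-forward.conf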

The workflow I'm imagining would look something like this. First, the boilerplate to get the diagram above up and running (without the newly proposed site-2-site):

# On "innernet-server"
innernet-server new
  create "foo" network
# At an "admin" location
innernet add-cidr foo
  create a CIDR "gw" for all the gateway machines
innernet add-cidr foo
  create a CIDR "svc" for all the service machines
innernet add-peer foo
  create a peer for the service
innernet add-peer foo
  create a peer for the gateway
innernet add-association foo
  associate gw <==> svc

# On the gateway machine
 - [..install and start innernet on the foo network..]
 - [..enable IP forwarding..]

# On the service machine
 - [..install and start innernet on the foo network..]

I see two options to enable this site-2-site idea; both have pros and cons.

I'd suggest a minimum viable product that is most flexible and could be built upon later. For me, that's dealing only with individual peers (i.e. not worrying about all peers in a CIDR), and selecting from existing CIDRs.

I.e., the final steps to get it working would look like:

### These are the new steps from this feature request ###

# Create a CIDR to hold the phantom "external clients"
innernet add-cidr foo
  create a CIDR "ext" for the external clients (no actual peers will be created)

# A new CLI option to enable Site-2-Site
innernet add-peer-ips foo
  Peer? (select Peer) we pick "gateway" peer
  CIDR? (select CIDR) we pick "ext" CIDR
  Confirm?

Now, assuming the service machine does a fetch, we should see the following:

# On the service machine
wg show foo

interface: foo
  public key: ...
  private key: (hidden)
  listening port: 38268

peer: ...                                           <---- Gateway
  endpoint: ...:57940
  allowed ips: 10.42.100.1/32, 10.42.200.0/24       <----- Notice the /24 "ext" CIDR
  latest handshake: 1 minute, 16 seconds ago
  transfer: 3.58 KiB received, 5.36 KiB sent
  persistent keepalive: every 25 seconds

peer: ...                                            <--- Innernet Server
  endpoint: ...:51820
  allowed ips: 10.42.0.1/32
  latest handshake: 1 minute, 25 seconds ago
  transfer: 97.11 KiB received, 43.94 KiB sent
  persistent keepalive: every 25 seconds

This could be represented in the SQLite DB as a new peer-ips table, similar to associations, which simply maps CIDR primary keys to peer primary keys.
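A minimal sketch of what that table might look like (assuming the existing tables are named peers and cidrs, which may not match the actual schema):

$ sqlite3 innernet.db
sqlite> -- one row per (peer, extra CIDR) grant
sqlite> CREATE TABLE peer_ips (
   ...>     peer_id INTEGER NOT NULL REFERENCES peers (id),
   ...>     cidr_id INTEGER NOT NULL REFERENCES cidrs (id),
   ...>     PRIMARY KEY (peer_id, cidr_id)
   ...> );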

For completeness' sake, the wg command to do this manually on the "service" peer is:

$ wg set foo peer '...' allowed-ips 10.42.100.1/32,10.42.200.0/24

Here 10.42.100.1/32 is the "gateway" peer's address and 10.42.200.0/24 is the "ext" CIDR.

wezzle commented 2 years ago

I would like to express my interest in managing allowed-ips through innernet as well. We currently have a custom WireGuard configuration in place to access documentation via an IP-whitelisted server. We do this by adding the destination IP to the allowed-ips of the whitelisted server's peer configuration. That server is configured to do IP forwarding.

Right now, using innernet, I can only get it working by temporarily disabling the innernet service (so the peer configuration isn't overwritten) and then running the $ wg set foo peer '...' allowed-ips 10.42.100.1/32,10.42.200.0/24 command @kbknapp mentioned, in combination with adding a custom route: ip -4 route add <ip> dev <interface>. But this is less than ideal.
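Spelled out as a sequence (assuming innernet's systemd unit is named innernet@foo; adjust for however your service is managed):

# Stop innernet so it doesn't overwrite the manual peer config
$ systemctl stop innernet@foo
# Add the extra subnet to the peer's allowed-ips
$ wg set foo peer '...' allowed-ips 10.42.100.1/32,10.42.200.0/24
# Route the external subnet over the innernet interface
$ ip -4 route add 10.42.200.0/24 dev foo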

As I have no idea where to get started on a PR for this, I'm hoping someone with more knowledge of the project can get us started.

nferch commented 9 months ago

I think I'm in a similar spot to @wezzle, where I need some bespoke WireGuard configuration and local system routes to coexist with the innernet-managed ones.

I can understand the complexity of implementing such a feature, but I'm curious whether there are simpler-to-implement options that would enable smoother workarounds, like lifecycle hook scripts that could be invoked to apply the manual configuration needed for the peers with subnets behind them. I'd be fine treating the site-to-site peers as snowflakes with static IPs if they could still communicate with the automatically managed ones.
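(As a sketch only — innernet has no hook mechanism today, so the file name and the point at which it would run are hypothetical; the body just replays the manual commands from earlier in the thread:)

#!/bin/sh
# Hypothetical post-up hook, run after innernet brings up the "foo" interface.
# Re-applies the site-to-site config that innernet would otherwise overwrite.
wg set foo peer '...' allowed-ips 10.42.100.1/32,10.42.200.0/24
ip -4 route add 10.42.200.0/24 dev foo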

Curious if anyone else has figured out workarounds for this gap in functionality, which is blocking me from migrating from Tinc. That would be preferable to having to use something like Headscale or Netmaker, which carry a lot of weight and complexity I don't really need.