Nemo157 opened this issue 1 year ago
It also seems like mullvad publishes a script to connect to mullvad servers. The most interesting thing is that you basically link public keys with your account (which is why I think that there is a preconfiguration step to register devices in their announcement).
It seems like the simplest implementation could be to create an external script that calls registerNodeCmd on mullvad endpoints (marking them as WireguardOnly), and then calls the mullvad api with each of the node's public keys you want to link.
I think the RegisterMachine, machine config, and node conversion would need to be changed.
(I also am really not an expert in this, so please take it with a grain of salt)
Edit: what needs to be changed
I think this was intended by the issue author, but to reiterate it seems more useful to me to allow any generic WireGuard-only peer as an exit node, not just Mullvad servers. That way headscale doesn't have to be tied to one VPN provider like the Tailscale coordination server currently is.
It might be possible to support this for any WireGuard server peer by accepting peer config files like those generated by Mullvad's WireGuard config file generator, as described in this guide, or by just asking the user to provide the generated fields we need at the CLI when adding the peer.
I don't think it's possible to import a generated config file, because it contains a randomized private key. The provider needs to support uploading the existing public key from the devices that will connect. That doesn't seem possible through Mullvad's website (it wants the private key specified so it can embed it in the generated config files), but it is possible through the API. I haven't used other wireguard-based vpn services, so I'm not sure if being able to upload existing keys is common.
Why would it be required to upload an existing public key?
@WoodenMaxim From what I can tell, each Tailscale node only has a single private/public key pair that is generated when they are created, and then it uses that pair with every other node. So, when adding a non-Tailscale WireGuard endpoint like a Mullvad server, that other end needs to know (all of the) existing Tailscale nodes' public keys that are going to connect to it.
How does adding a Wireguard-only exit node get the public key of the nodes intending to use it into that node's configuration? If there was an easy solution for this we would not need Tailscale...
How does mullvad do it?
How does mullvad do it?
No. "How does Tailscale do it?" Obviously by being a kind of "reseller" and having an interface to provision the mullvad IAM that way. The more interesting question is how the tailscale client is selecting the "exit node" it wants to use.
I was already wondering about this in other settings. If there are multiple possible exit nodes for a destination or multiple Internet gateways how is the most appropriate node selected and how can I influence the choice?
https://tailscale.com/kb/1103/exit-nodes/?tab=linux
tailscale up --exit-node=<exit-node-ip>
With that resolved, we still need to figure out how to get the wireguard public keys of the tailscale nodes with permission to access the exit node into the wireguard-only peer, and vice-versa.
Maybe it's as simple as "run a command to dump the full list of keys in a form that the wireguard-only peer can consume, and expect the admin to put that configuration onto the node (and keep it up to date) manually". That may not be very palatable, but the alternative is writing software to sync keys automatically in which case why not just run a full tailscale node?
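To sketch that "dump the keys" idea: assuming the admin can extract each node's WireGuard public key and tailnet IP from headscale by some means (the input format below is an illustrative assumption, not an existing headscale output), a few lines of shell could render the list into `[Peer]` stanzas that a plain WireGuard peer can consume:

```shell
#!/bin/sh
# Render tailnet nodes into [Peer] stanzas for a WireGuard-only peer's config.
# Input: one node per line as "<public-key> <tailnet-ip>".
# NOTE: obtaining this list from headscale is left to the admin; this input
# format is a made-up example, not a real headscale export.
render_peers() {
    while read -r pubkey ip; do
        # Skip blank lines and comments
        case "$pubkey" in ''|'#'*) continue ;; esac
        printf '[Peer]\n'
        printf 'PublicKey = %s\n' "$pubkey"
        # Only route this node's tailnet address back through the tunnel
        printf 'AllowedIPs = %s/32\n\n' "$ip"
    done
}

# Example with placeholder keys (real ones come from the nodes themselves):
render_peers <<'EOF'
AAAAexamplenodekey1= 100.64.0.1
BBBBexamplenodekey2= 100.64.0.2
EOF
```

The admin would then paste the output into the WireGuard-only peer's config and keep it up to date by hand, which is exactly the manual-sync pain described above.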
Maybe the best solution would be to just add some example docs showing how you can execute this pattern with a regular tailscale node...
Mullvad themselves provide a script to generate vanilla wg configs instead of using mullvad's native client.
The client public key is communicated over mullvad's (sadly undocumented) API.
Very interesting, thanks for sharing, sosnik.
https://github.com/mullvad/mullvad-wg.sh/blob/main/mullvad-wg.sh#L59
curl -sSL https://api.mullvad.net/wg -d account="$ACCOUNT" --data-urlencode pubkey="$(wg pubkey <<<"$PRIVATE_KEY")"
Roughly, the script writes /etc/wireguard/$CODE.conf with the aforementioned connection details, so you can connect by running wg-quick up $CODE
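Putting those steps together, a minimal sketch of the script's flow might look like this. The render_conf helper is mine, the commented-out API call mirrors the curl quoted above, and the assumption that the API responds with the assigned client address is my reading of the script, so treat the details as a sketch rather than a faithful copy:

```shell
#!/bin/sh
# Sketch of mullvad-wg.sh's flow: generate a keypair, register the public
# half with Mullvad, then write a vanilla wg-quick config. Network calls
# are commented out; all values passed below are placeholders.
render_conf() {
    # $1=private key  $2=assigned client address  $3=server pubkey  $4=endpoint
    printf '[Interface]\nPrivateKey = %s\nAddress = %s\n\n' "$1" "$2"
    printf '[Peer]\nPublicKey = %s\nEndpoint = %s\n' "$3" "$4"
    printf 'AllowedIPs = 0.0.0.0/0, ::/0\n'
}

# PRIVATE_KEY="$(wg genkey)"
# ADDRESS="$(curl -sSL https://api.mullvad.net/wg \
#     -d account="$ACCOUNT" \
#     --data-urlencode pubkey="$(wg pubkey <<<"$PRIVATE_KEY")")"
# render_conf "$PRIVATE_KEY" "$ADDRESS" "$SERVER_PUBKEY" "$SERVER_HOST:51820" \
#     > "/etc/wireguard/$CODE.conf"

# Offline demonstration with placeholder values:
render_conf "PRIVKEY_PLACEHOLDER" "10.99.0.2/32" "SERVERKEY_PLACEHOLDER" "203.0.113.1:51820"
```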
I also found this gist that loosely documents uploading and revoking keys which may also be useful for mullvad: https://gist.github.com/izzyleung/98bcc1c0ecf424c1896dac10a3a4a1f8
So mullvad relays are configured via simple https POSTs to their api (which great, I love that they use simple tech). This may be useful if we want to implement mullvad support into headscale.
That said, it doesn't help us much if we just have a random wireguard relay sitting on a vps and we want to add "Support for WireGuard only peers", as the title of this issue suggests. Honestly I'm not even sure how we would expect a plain WG exit node to work in theory: Either you configure it completely manually, exporting private keys from tailnet nodes and importing them into the WG node which is annoying; or just... you know, install tailscale on that node instead, tailscale was invented to solve that annoyance in the first place.
With that in mind, I think we should open a new issue / retitle this issue to "Support for mullvad exit nodes" (if desired), and add a recipe to the docs showing how to set up your own exit node by running a headscale node on a vps or something, because a "WireGuard only peer" is a non starter.
Either you configure it completely manually, exporting private keys from tailnet nodes and importing them into the WG node which is annoying
Just to clarify, you need to export public keys to configure the WG node (and import the WG nodes' public key into headscale).
I think there are still situations where this could be useful. One thought is that you want to connect devices to an organization managed WG node, where you don't have permission to install tailscale but you are able to provide your public keys to be configured on the node.
The other main reason I think to target just supporting "wireguard only peers" first is that they are the only thing that the tailscale protocol knows about. If they are supported by headscale then scripts can be written to configure them for whatever situation is needed, while if headscale instead only supports talking to the Mullvad API it blocks being able to configure for other situations. That doesn't mean headscale shouldn't support talking to Mullvad itself, but I think it should build on the general functionality of wireguard only peers.
I am with Nemo157 on this one. At a bare minimum, headscale should support exporting a vanilla wireguard config (peer public keys and endpoints) for use with other wireguard clients. Supporting only mullvad opens the door to "But why not Proton" / "Why not X" conversations. One other thing to consider is that commercial VPN providers will limit you to X number of concurrent connections (I think mullvad's limit is 5?). If someone's tailnet (headnet?) has more than 5 devices, we don't want to give mullvad more than 5 public keys and run up against such limits by accident.
I strongly believe this would come with all the overhead involved in implementing a WireGuard management API, for which many examples exist:
* https://github.com/firezone/firezone
* https://github.com/gravitl/netmaker
* https://github.com/netbirdio/netbird
* https://github.com/ngoduykhanh/wireguard-ui
Do we already know the API surface of the Tailscale coordination server, which needs to be mimicked by Headscale for supporting the client feature implemented in tailscale/tailscale#7821?
Practically speaking, it appears this use case will be much easier to achieve with a Tailscale network and a given WireGuard network residing in the same namespace, and routing being allowed between their subnets.
To continue to distinguish between (1) sole support for WireGuard peers and (2) a default route via an external WireGuard VPN: what if a node advertises a ::/0 route, and that is locally forwarded through the above subnet, given another ::/0 route would be inherited from there?
Just to clarify, you need to export public keys
Thanks for the correction, please excuse my typo.
I'll relax my stance a bit here, it seems perfectly reasonable to allow headscale users to manually configure wireguard peers by exporting node public keys and importing remote endpoint public keys by some cli or api, and to expect VPN configuration scripts to be layered on top of this feature. Though, how it interacts with ACL and other tailscale features appears to present some non-trivial remaining challenges.
Wrt namespacing and routing, are routing "announcements" a thing in wg? The mullvad script explicitly sets config on the client to route ips over the interface with AllowedIPs = 0.0.0.0/0, ::/0. From that I'd guess the admin would have to set the route manually.
Wrt namespacing and routing, are routing "announcements" a thing in wg? The mullvad script explicitly sets config on the client to route ips over the interface with AllowedIPs = 0.0.0.0/0, ::/0. From that I'd guess the admin would have to set the route manually.
Not that I am aware of. wg-quick uses native methods (ip route) to define routes in the host, and no "announcements" per se are actually happening. But I don't think this is a problem.
This issue is stale because it has been open for 90 days with no activity.
Still relevant.
Still relevant.
But still without any idea about the implementation by those who want it. To summarize: Tailscale (and a few others) exist because there is no simple auto-configuration for Wireguard links in the basic protocol. You either tell us how to introduce the Tailnet to some arbitrary wireguard (exit-)node or we can just as well close this for good.
Tailscale already has the client-side feature for this; someone needs to investigate exactly how it is represented by the server and add it to the details provided by Headscale. There is no design work needed for the tailnet side of it. I'm pretty sure that once the backend representation is investigated, the CLI interface to configure it will be relatively self-evident, so I'm not sure there's any point in trying to design it externally. (I would have worked on this myself already, except I really dislike golang; maybe one day I'll give up waiting for someone else and get over my aversion.)
I think this feature would be very helpful in several scenarios:
if you can:
I) import to headscale
a) a dataset of nodename, owner, publickey, wireguard ip address, external address ... for each wireguard only node
II) export
a) a list of datasets for the headscale nodes which should be able to connect to these nodes
b) an n*m matrix of which headscale node should be able to connect to which "wireguard" node
III) send a webhook if reconfiguration is needed
the deployment tools should be able to do the rest.
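To make the import dataset in (I) concrete, it could be as simple as a JSON list; the field names here are illustrative, not an existing headscale format:

```json
[
  {
    "nodename": "office-fw-01",
    "owner": "netops",
    "publickey": "base64-wireguard-public-key=",
    "wireguard_ip": "100.64.10.1",
    "external_address": "vpn.example.com:51820"
  }
]
```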
Company rules could require the use of a specific deployment tool and automation process, so the headscale client may not be an option for some systems. Or company rules may not allow headscale installation without a time-consuming certification process for every new software version.
Some appliances do not have support for head/tailscale and don't allow installing third party software, but allow deployment over ssh, api, ldap or whatever.
A second headscale server may import the exported list and use this for federation.
Of course all these scenarios don't fit the general idea of headscale, but ... maybe there's no other way ...
Does anyone have a mullvad account for testing? I want to check the traffic between the tailscale control plane and the tailscale client in order to understand how mullvad servers are served to the client.
If so, email me at github1545 [at] unixfox.eu
@unixfox I want to check the traffic between the tailscale control plane and the tailscale client in order to understand how mullvad servers are served to the client.
I think you need a Tailscale account with Mullvad add-on for that.
This issue is stale because it has been open for 90 days with no activity.
Not stale.
Hello, I went and bought a Mullvad subscription for Tailscale to investigate how it works.
The system is actually really simple: when you use a Mullvad exit node it looks exactly like a normal exit node. Here's what it looks like using a personal exit node vs Mullvad exit node.
IP HOSTNAME COUNTRY CITY STATUS
100.xxx.xxx.xxx personal.tailedxxxx.ts.net - - -
100.127.203.60 us-chi-wg-007-1.mullvad.ts.net USA Chicago, IL -
and then in the tailscaled.state file it sets the ExitNodeID in the "profile-xxxx" like any other exit node, e.g. Chicago 007 is "ExitNodeID": "n85Dw3BNhX11CNTRL",
When you go to mullvad.net they see your traffic as coming from the server corresponding to that hostname. Beyond that, it's opaque. No Mullvad wireguard configs are exposed, all of that happens within the Tailscale controlled exit node server.
It's a black box from there, although Tailscale provides clear documentation on what data they associate with accounts. Technically speaking, I don't think Mullvad has to do anything for this to work. They provide a CLI and readable API; for example, the following is what happens when you log in via the CLI:
curl -sSL https://api.mullvad.net/wg -d account="$ACCOUNT" --data-urlencode pubkey="$(wg pubkey <<<"$PRIVATE_KEY")"
where $PRIVATE_KEY comes from wg genkey and is stored in the wireguard config.
I obviously can't tell, but it seems all Tailscale would have to do is set up those exit nodes to chain the wireguard connections, the complex part would be key allocation/licensing.
TL;DR: Neither the Tailscale client nor orchestrator have anything to do with the Mullvad integration. They have specially configured exit nodes that likely do wireguard chaining.
Thanks for putting your money up to investigate, @thedustinmiller.
Still, I think the tailscale control server must have some role to play in managing the wg exit nodes, if for no other reason than that you can have an arbitrarily large number of exit nodes and you don't necessarily want to give them all a unique IP on your tailnet at the same time. Is the Chicago exit node always on 100.127.203.60? Do other nodes use different IPs? Do they use consistent IPs or randomly-assigned ones?
As for connecting the tailscale traffic to wireguard exit traffic, it might be as simple as setting up a minimal container with just ts and wg installed and then:
iptables -A FORWARD -i tailscale0 -o wg0 -j ACCEPT
iptables -A FORWARD -i wg0 -o tailscale0 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
And then you can switch out which wireguard config (VPN exit node) becomes wg0 in the above picture. Granted, this is more of a hacky single-user setup, but it might work.
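One detail the rules above leave implicit is that the kernel must have IP forwarding enabled, or the FORWARD chain never sees the traffic. A small sketch that emits the full setup for review before applying it as root (interface names as in the rules above):

```shell
#!/bin/sh
# Emit (rather than apply) the commands that would wire tailscale0 to wg0,
# so they can be reviewed before being run with root privileges.
emit_forwarding_setup() {
    cat <<'EOF'
sysctl -w net.ipv4.ip_forward=1
iptables -A FORWARD -i tailscale0 -o wg0 -j ACCEPT
iptables -A FORWARD -i wg0 -o tailscale0 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
EOF
}

# Review first, then pipe to a root shell once satisfied:
emit_forwarding_setup        # | sudo sh
```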
The control server manages them as "wireguard only" nodes, and Disco and DERP are disabled on such nodes.
Here's a list of the exit nodes. The IPs appear consistent, so far at least.
I am working on exactly what you mentioned with the iptables, I subscribe to normal Mullvad as well and am trying that with the vanilla Wireguard configs they generate.
But yeah I was totally wrong, the search sefidel provided included a test describing wireguard only functionality, I think. I'm guessing this comment means the server has to explicitly define those peers.
// IsWireGuardOnly indicates that this is a non-Tailscale WireGuard peer, it
// is not expected to speak Disco or DERP, and it must have Endpoints in
// order to be reachable.
This is pretty far outside my usual work, so please let me know if I can provide any other info.
I've been experimenting with essentially this idea in my homelab recently and I've had some success.
I started with having Tailscale and Wireguard both running inside a Ubuntu VM (well actually a cheap rented virtual private server for the 1Gbps connection) which didn't work until I finally found the correct pre-up/down rules and got it going. Then I transferred the concept into a Debian docker image successfully. The only noticeable difference is slightly slower network speeds, 200Mbps inside docker vs 350Mbps on the VM host (i.e. download speed from the perspective of another node on the tailnet using the exit-node), presumably due to docker overhead.
I haven't yet put it together into a nice neat repo that can be shared widely. However I'm happy to share my configs here for anyone that's interested with a big health warning that I'm a noob and don't know what I'm doing so copy at your peril. Hope it helps!
This is the wireguard config file I've created. It's activated using wg-quick after tailscale has been installed and started successfully on the same system:
[Interface]
Address = [REDACTED]
PrivateKey = [REDACTED]
Table = off # stops wg-quick from auto-generating routing tables
MTU = 1380 # seems to give better network speed when running inside a docker container
PostUp = wg set vpn1_cloud_01 fwmark 51820 # copied from wg-quick. Interface name should match config file name
PostUp = ip -4 route add 0.0.0.0/0 dev vpn1_cloud_01 table 51820 # copied from wg-quick
PostUp = ip -4 rule add not fwmark 51820 table 51820 pref 32765 # important - I believe this gives the wireguard rule a higher preference value (i.e. lower priority) than tailscale's, allowing both to co-exist
PostUp = ip -4 rule add table main suppress_prefixlength 0 # copied from wg-quick
PostUp = sysctl -q net.ipv4.conf.all.src_valid_mark=1 # copied from wg-quick
PreDown = ip -4 rule del table 51820 # copied from wg-quick
PreDown = ip -4 rule del table main suppress_prefixlength 0 # copied from wg-quick
[Peer]
PublicKey = [REDACTED]
AllowedIPs = 0.0.0.0/0
Endpoint = [REDACTED]:51820
PersistentKeepalive = 25
This is the dockerfile I use to create my docker image:
# Base image
FROM debian:bullseye-slim
# Install necessary packages
RUN apt-get update && apt-get install -y \
curl \
iproute2 \
iptables \
wireguard-tools \
bash \
procps \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Install additional packages
RUN curl -fsSL https://tailscale.com/install.sh | sh
# Copy and set the entrypoint script into the container if active
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
This is my entrypoint.sh file:
#!/bin/bash
# Check if necessary environment variables are set
: "${TS_EXTRA_ARGS:?Environment variable TS_EXTRA_ARGS must be set}"
: "${WG_CONFIG:?Environment variable WG_CONFIG must be set}"
# Start Tailscale in the background
echo "Starting tailscaled"
tailscaled &
TAILSCALE_PID=$!
# Function to check if tailscaled is running
check_tailscaled() {
pgrep tailscaled > /dev/null
}
# Wait for tailscaled to start up
echo "Waiting for tailscaled to initialize..."
for i in {1..10}; do
if check_tailscaled; then
echo "tailscaled is running"
break
fi
sleep 2
done
# Check again if tailscaled is running after waiting
if ! check_tailscaled; then
echo "tailscaled failed to start."
tail -f /dev/null # Keeps the container running for debugging (blocks here, so the script never exits)
fi
# Authenticate with Tailscale
echo "Running Tailscale up"
tailscale up $TS_EXTRA_ARGS
# Check if Tailscale is connected
if ! tailscale status | grep -q "Tailscale is stopped"; then
echo "Tailscale is connected"
# Ensure WireGuard is up and running
if [ -f "$WG_CONFIG" ]; then
echo "Starting WireGuard"
wg-quick up "$WG_CONFIG"
else
echo "WireGuard configuration file not found!"
fi
tail -f /dev/null # Keeps the container running for debugging
else
echo "Tailscale failed to connect. Exiting."
tailscale status # Optionally: Print Tailscale logs for further debugging
tail -f /dev/null # Keeps the container running for debugging
fi
And finally my docker-compose to bring the image up:
ts_wg_01:
image: ts_wg_01
container_name: ts_wg_01
privileged: true
cap_add:
- net_admin
- sys_module
volumes:
- /mnt/docker/appdata/ts_wg/state:/var/lib/tailscale
- /dev/net/tun:/dev/net/tun
- /etc/wireguard/ts_wg_01.conf:/etc/wireguard/wg0.conf
environment:
- TS_EXTRA_ARGS=--login-server=https://headscale.[MYDOMAIN] --authkey=[AUTHKEY] --hostname=[HOSTNAME] --advertise-exit-node=true --accept-routes=true --accept-dns=true
- WG_CONFIG=/etc/wireguard/wg0.conf
Thank you @trinity-geology-unstable for your efforts! I tried your configs and setup, and found a few flaws (in case someone else wants to try it).
I think there were some naming inconsistencies in the suggested solution.
PostUp = wg set vpn1_cloud_01 fwmark 51820 # copied from wg-quick. Interface name should match config file name
PostUp = ip -4 route add 0.0.0.0/0 dev vpn1_cloud_01 table 51820 # copied from wg-quick
I think it should be:
PostUp = wg set ts_wg_01 fwmark 51820 # copied from wg-quick. Interface name should match config file name
PostUp = ip -4 route add 0.0.0.0/0 dev ts_wg_01 table 51820 # copied from wg-quick
The reason is mentioned in the comment:
"Interface name should match config file name"
In the Dockerfile I needed to add also openresolv like this:
curl \
iproute2 \
iptables \
wireguard-tools \
bash \
procps \
openresolv \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
Also, at least in my case, I needed to modify docker-compose.yml and add a DNS server address to this line: - TS_EXTRA_ARGS=--login-server=https://headscale.[MYDOMAIN] --authkey=[AUTHKEY] --hostname=[HOSTNAME] --advertise-exit-node=true --accept-routes=true --accept-dns=true --dns=x.x.x.x. Adding --dns=x.x.x.x is what finally got my endpoint connected to the internet.
Why
Tailscale just announced their support for integrated Mullvad exit nodes. Being able to configure a similar setup via Headscale and an independent Mullvad account (or other wireguard VPN provider) would be useful for those of us without a Tailscale account.
Description
I haven't looked deeply into the details, but it's my understanding that this is implemented via a "WireGuard only peer" feature, and then support in the Tailscale coordination server to synchronize these peers with Mullvad. I assume it would be possible for Headscale to allow manually configuring these peer types.