ipfs / kubo

An IPFS implementation in Go
https://docs.ipfs.tech/how-to/command-line-quick-start/

Websocket but secure: a short story, or how to get instantly disconnected from go-ipfs #8205

fusetim closed this issue 3 years ago

fusetim commented 3 years ago

Version information:

Using this docker image: ghcr.io/linuxserver/ipfs:version-v2.12.3

go-ipfs version: 0.8.0-7dacacac5c
Repo version: 11
System version: arm64/linux
Golang version: go1.16.5

Description:

I am trying to create a wss endpoint for my go-ipfs node using the Traefik v2.3.1 reverse proxy. The direct IPFS endpoint on port 4002 works fine: I can connect to it from another node, the connection stays open, and I receive /multistream/1.0.0 from the node almost immediately.

However, this is not the case when I connect through the reverse proxy, with or without TLS. The connection is established and then closed immediately with Disconnected (code: 1000, reason: "closed").

The reason I am raising this on this GitHub rather than Traefik's is quite simple: with exactly the same Traefik configuration, just swapping the backend for an echo server, I can connect to that server through the proxy without issue and even send/receive binary messages.

I am well aware that nginx is the recommended choice in this case, but as part of my existing Kubernetes cluster I am better off keeping the same Ingress controller, Traefik, especially since it already manages my certificates.

Moreover, apart from replacing nginx with Traefik, I meet all the requirements laid out in docs/transports.md. Here are the points I currently satisfy:

A little more context on my network configuration:

Here are some of the tests I ran with wscat; I also tried other tools such as Postman:

$ wscat -c wss://ipfs.v6.fusetim.tk/
Connected (press CTRL+C to quit)
Disconnected (code: 1000, reason: "closed")
$ wscat -c ws://ipfs.v6.fusetim.tk/
Connected (press CTRL+C to quit)
Disconnected (code: 1000, reason: "closed")
$ wscat -c ws://192.168.1.202:4002/
Connected (press CTRL+C to quit)
< /multistream/1.0.0

Disconnected (code: 1000, reason: "closed")

Finally, I am not sure whether this is a bug of some kind, a documentation gap, or an oversight on my part, so I chose Bug Report to be safe. I would love some help either way.

EDIT1: Also, IPFS_LOGGING=debug is no help in this case :/

fusetim commented 3 years ago

Here are the headers sent to the IPFS backend, maybe something is missing.

{
    "Connection":["Upgrade"],
    "Sec-Websocket-Extensions":["permessage-deflate; client_max_window_bits"],
    "Sec-Websocket-Key":["2tvqizh49UwJF5/t6kc6CA=="],
    "Sec-Websocket-Version":["13"],
    "Upgrade":["websocket"],
    "X-Forwarded-Host":["ipfs.v6.fusetim.tk"],
    "X-Forwarded-Port":["443"],
    "X-Forwarded-Proto":["wss"],
    "X-Forwarded-Server":["traefik-546bc54b56-zv7g6"],
    "X-Real-Ip":["2a01:reda::cted:20d5"]
}

It does not work with "X-Forwarded-Proto":["https"] either.

fusetim commented 3 years ago

Well, using the same nginx config as in IPFS Docs > How to create a simple chat app > Nginx SSL, with nginx/1.21.0, it didn't work either.

The problem still occurred when upgrading to go-ipfs/0.9.0/4743303840. I must be really unlucky... :cry:

fusetim commented 3 years ago

Well, the nginx config works... but not in my Kubernetes pod, for some reason. Everything seems to work correctly except when my nginx proxy connects to the go-ipfs pod over a Pod-to-Pod or Pod-to-Service connection. When it does, go-ipfs closes the connection immediately, without any delay.

fusetim commented 3 years ago

It seems my WebSocket connections get rejected by a ConnectionGater. According to the IPFS logs in DEBUG mode, the only line I get comes from here: https://github.com/libp2p/go-libp2p-transport-upgrader/blob/3acbc8bf38f41898d5ae174bc8d2abcdac8b6f6f/listener.go#L89. So my first question was "what is a ConnectionGater?", and I found the answer here: https://github.com/libp2p/go-libp2p-core/blob/525a0b13017263bde889a3295fa2e4212d7af8c5/connmgr/gater.go

Now I am struggling to find where this ConnectionGater lives; it does not seem to originate from either the TCP transport or the WebSocket transport. So is this ConnectionGater an IPFS thing or a libp2p default?

fusetim commented 3 years ago

Okay, I never quite found where the ConnectionGater is created in go-ipfs, but I now know it is built from the config, in particular the Swarm.AddrFilters field. And guess who decided a long time ago to ban 10.0.0.0/8 addresses to avoid cluster floods? Anyway, problem solved: I was my own problem. However, a little more documentation and/or logging when the ConnectionGater closes connections would be appreciated. I'll put this back in view: https://github.com/libp2p/go-libp2p-core/blob/master/connmgr/gater.go#L45