Closed: wrouesnel closed this issue 9 months ago
So your PairDrop instance is only running in your local network? Normally, all devices "on this network" share the same public IP address, which is how they are grouped together and why they are automatically shown to each other.
For cases like yours, I have also implemented mapping all private IP addresses to the same IP, so that all devices on the same network as the PairDrop host are shown to each other.
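Conceptually, the grouping works roughly like this (a simplified sketch with a hypothetical `getRoomId` helper, not the actual server code):

```js
// Hypothetical sketch of grouping peers by network (not PairDrop's actual implementation):
// peers behind the same public IP end up in one room, and RFC 1918 private
// addresses are mapped to a single bucket so devices on the server's own
// network(s) also see each other.
function getRoomId(remoteAddress) {
    const isPrivate =
        /^10\./.test(remoteAddress) ||
        /^192\.168\./.test(remoteAddress) ||
        /^172\.(1[6-9]|2\d|3[01])\./.test(remoteAddress) ||
        remoteAddress === '127.0.0.1';
    // All private/loopback clients share one room with the host.
    return isPrivate ? 'local-network' : remoteAddress;
}
```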
For debugging it would help if you could start PairDrop with `DEBUG=true npm start` or `docker run -e DEBUG_MODE="true"` (as explained here) and then open PairDrop on either of the networks. You should get some information about which IP addresses connect to PairDrop and how they are handled internally.
I've sorted it out: the answer is that because there's no connectivity between the internal and guest networks, although everyone joins the same room on the server, they're not visible because WebRTC discovery fails and the WebSocket fallback is necessary. This is by design and what I want: my concept was that the server would mediate this.
I think I've discovered a bug though: setting `WS_FALLBACK=true` didn't change anything, since the parsing for whether to serve the fallback is actually a command line option here: https://github.com/schlagmichdoch/PairDrop/blob/46f33f894bee599f2874a2e420f4f9a7080be667/index.js#L87
It doesn't look like the app tracks the env var, and I'm not sure the upstream docker container does either?
I fixed this in the self-contained docker container I built to serve this by picking up `WS_FALLBACK` and executing the app as `node index.js --include-ws-fallback`, which immediately got everything working and visible the way I want.
So in conclusion, the answer for this use case is "you need the WebSocket fallback if direct network connectivity between clients is not allowed by design", which is exactly my situation: "visitors to my household can share files amongst each other and with regular occupants from the guest wifi network".
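For reference, the idea is roughly this (a hypothetical Node launcher sketch; my actual container does the equivalent in its entrypoint script):

```js
// Hypothetical launcher sketch: translate the WS_FALLBACK env var into the
// CLI flag that index.js actually parses.
const { spawn } = require('child_process');

const args = ['index.js'];
if (process.env.WS_FALLBACK === 'true') {
    args.push('--include-ws-fallback');
}
spawn('node', args, { stdio: 'inherit' });
```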
> It doesn't look like the app tracks the env var, and I'm not sure the upstream docker container does either?
There are different docker images available:

1. one you build yourself via the `docker build` cmd,
2. the GitHub Container Registry image `ghcr.io/schlagmichdoch/pairdrop`, and
3. the Linuxserver.io image `lscr.io/linuxserver/pairdrop`.

To start 1 and 2 you specify the npm run cmd at the end, e.g. `docker run ghcr.io/schlagmichdoch/pairdrop npm run start:prod`, so all env vars and flags can be found here: https://github.com/schlagmichdoch/PairDrop/blob/master/docs/host-your-own.md#deployment-with-node

Linuxserver.io wanted to conform their docker image so that `docker run lscr.io/linuxserver/pairdrop` directly starts the service without having to specify the npm cmd, so they mapped the npm cmd and its flags to env vars. For that docker image the env var `WS_FALLBACK=true` is therefore available: https://github.com/schlagmichdoch/PairDrop/blob/master/docs/host-your-own.md#docker-image-from-docker-hub

It's a little confusing, I know. Maybe we should ditch all flags and only work with env vars, which would conform docker images 1 and 2 to 3.
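To illustrate the idea (just a sketch, not how index.js currently works), the server could read the env var alongside the existing flag:

```js
// Hypothetical sketch: honor WS_FALLBACK in addition to the existing CLI flag,
// so all three docker images can be configured the same way via env vars.
const wsFallback =
    process.argv.includes('--include-ws-fallback') ||
    process.env.WS_FALLBACK === 'true';

console.log(`WebSocket fallback ${wsFallback ? 'enabled' : 'disabled'}`);
```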
> which immediately got everything working and visible the way I want.
There is another stupid bug though, one that I thought had already been fixed: https://github.com/schlagmichdoch/PairDrop/blob/46f33f894bee599f2874a2e420f4f9a7080be667/public_included_ws_fallback/scripts/network.js#L2 I will fix it tomorrow and push a new version...
I guess that was left in for debugging purposes and obviously needs to be commented back in; otherwise all devices use the fallback, and not only the devices that are not capable of using WebRTC.
That being said, the fallback is currently implemented in a way that it is used when this device, or the device you try to connect to, does not support WebRTC: https://github.com/schlagmichdoch/PairDrop/blob/46f33f894bee599f2874a2e420f4f9a7080be667/public_included_ws_fallback/scripts/network.js#L994-L998
So normally your devices would not use the fallback, as they do support WebRTC; the connection simply fails.
Probably it would make sense to implement it as a real fallback, so that when the RTCConnection ultimately fails, a connection via WebSockets is used instead. What do you think?
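Something roughly like this (only a sketch of the idea, with a hypothetical `connectViaWebSocketFallback` helper, not the actual network.js code):

```js
// Hypothetical helper: relay messages for this peer through the signaling
// server over the existing WebSocket instead of a direct WebRTC channel.
function connectViaWebSocketFallback(peerId, signalingChannel) {
    signalingChannel.send(JSON.stringify({ type: 'ws-relay', to: peerId }));
}

// Hypothetical sketch of a "real" fallback: try WebRTC first, and only switch
// to the WebSocket relay once the ICE connection has definitively failed.
function connectToPeer(peerId, signalingChannel) {
    const pc = new RTCPeerConnection();

    pc.oniceconnectionstatechange = () => {
        if (pc.iceConnectionState === 'failed') {
            // WebRTC could not be established (e.g. no route between the two
            // subnets and no usable TURN server): relay via the server instead.
            pc.close();
            connectViaWebSocketFallback(peerId, signalingChannel);
        }
    };

    // Normal offer/answer and ICE candidate exchange via the signaling server
    // would happen here.
    return pc;
}
```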
> they're not visible because WebRTC discovery fails and the WebSocket fallback is necessary.
I believe it should be possible to use a local TURN server on one of your networks to prevent the RTCConnection from failing. This way you would not need the WebSocket fallback.
Also, beware that at the moment the WebSocket fallback is not encrypted at all. I'll try to add encryption to it in the upcoming weeks.
> I believe it should be possible to use a local TURN server on one of your networks to prevent the RTCConnection from failing. This way you would not need the WebSocket fallback.
So I had a look at this option, and it's working now, though I'm not 100% sure why; it seems to come down mostly to how coturn treats its settings.
For the benefit of documentation, this is what worked (real IPs elided):
I have:

- pairdrop running on 192.168.10.50
- coturn running on 192.168.10.55

The internal network is 192.168.10.0/24, the guest network is 192.168.20.0/24.

I opened the firewall on port 443 to my pairdrop server from guest -> internal. So guest can hit 192.168.10.50 directly.

I opened everything to coturn. So guest can just send traffic to coturn on 192.168.10.55.
For setting up coturn I found that I had to set `listening-ip`, `external-ip` and `relay-ip` explicitly in order to get things to work. Briefly, my turnserver.conf that worked looks like this:
```
realm=my.home.com
server-name=turn.my.home.com

# Bind, advertise and relay on the internal address (the key part for me).
listening-ip=192.168.10.55
external-ip=192.168.10.55
relay-ip=192.168.10.55

listening-port=3478
# UDP relay port range.
min-port=10000
max-port=20000

fingerprint
log-file=stdout
verbose

# Static long-term credentials used by the clients.
user=pairdrop:somepasswordthatsnothis
lt-cred-mech

# TLS for TURNS on port 443, old TLS versions disabled.
cert=/etc/lego/turn.my.home.com.crt
pkey=/etc/lego/turn.my.home.com.pem
no-tlsv1
no-tlsv1_1
tls-listening-port=443
```
The key setting seemed to be `listening-ip`: it started working once I set that to the internal IP address explicitly.
This gets me to a much better place experience-wise: I now have a (private) magic host on my guest network that auto-discovers everyone connected to a running pairdrop session.
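For completeness, this is roughly how such a TURN server gets referenced on the WebRTC client side (illustrative only; the URLs and credentials are placeholders matching my setup, and PairDrop's real RTC config may be wired differently):

```js
// Illustrative sketch: a WebRTC client pointing at the local coturn instance,
// plain TURN on 3478 and TURNS on 443 (values are placeholders).
const peerConnection = new RTCPeerConnection({
    iceServers: [
        {
            urls: [
                'turn:192.168.10.55:3478',
                'turns:turn.my.home.com:443',
            ],
            username: 'pairdrop',
            credential: 'somepasswordthatsnothis',
        },
    ],
});
```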
> Probably it would make sense to implement it as a real fallback, so that when the RTCConnection ultimately fails, a connection via WebSockets is used instead. What do you think?
This is what I assumed was happening (I've only been testing with single devices so far), though I suppose what actually happened was that I tripped over the bug you noticed, and for my purposes it started working.
Awesome that you got it working and thanks for the documentation! 👌
> cert=/etc/lego/turn.my.home.com.crt pkey=/etc/lego/turn.my.home.com.pem no-tlsv1 no-tlsv1_1 tls-listening-port=443
Do you use self-signed certificates then to host PairDrop and for TURNS via coturn?
> I have:
> - pairdrop running on 192.168.10.50
> - coturn running on 192.168.10.55
Have you also tried deploying coturn and PairDrop on the same IP address, or is it a requirement to run them on two separate devices for TURN to work properly on a local network?
I'm asking because I'm not able to get coturn and PairDrop to work on the same device on a local network.
I plan to release an app with functionality to host a PairDrop instance on the local network directly from the app. This would enable transfers in situations without an internet connection (think: a festival in the middle of nowhere). Clients connect to the host's hotspot and visit the host's PairDrop website (their local IP + port, preferably via QR code). This would enable file sharing without the need to install any new software.
In this setup most connections work out of the box when no ICE candidates are provided. For some use cases a TURN server is needed though. If this works with coturn and PairDrop on the same device, I would use the existing WebRTC connections. Otherwise I would need to enable the WebSocket fallback for specific use cases.
Do you have an idea whether I could get that to work?
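In case it helps narrow it down, one way to check whether a co-hosted coturn instance is actually usable is to force relay-only candidates in a quick browser test (hypothetical snippet; the address and credentials below are placeholders):

```js
// Hypothetical test: force relayed (TURN) candidates only, so candidates only
// appear if the co-hosted coturn instance is reachable and working.
const pc = new RTCPeerConnection({
    iceServers: [{
        urls: 'turn:192.168.10.50:3478',   // placeholder: coturn on the PairDrop host
        username: 'pairdrop',
        credential: 'placeholder-secret',
    }],
    iceTransportPolicy: 'relay',           // ignore host/srflx candidates
});

pc.createDataChannel('probe');
pc.onicecandidate = (event) => {
    // Only "relay" candidates should show up here if coturn is usable.
    if (event.candidate) console.log(event.candidate.candidate);
};
pc.createOffer().then((offer) => pc.setLocalDescription(offer));
```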
I have two wifi networks in my home - "internal" and "guest" - which are, obviously, mostly separate. I've set up pairdrop and poked it through from the internal wifi to my guest network - i.e. the IP address is visible and accessible on both networks.
What I want to have happen is for devices on both networks to automatically appear as "discovered on this network".
The two networks have different subnets - i.e. my internal is 192.168.1.0/24 and my guest is 192.168.2.0/24. I suppose this might be similar to a default public room or something. Is there some way to do this already?