Closed · chrisalys closed this issue 5 months ago
I tried with a swarm key so as to make it more private...
`ipfs swarm addrs` output starts like this:
```
12D3KooWAbVphZ5ZJQgTqgJ8WtRVDBLKVryiP2DQDYSb4oHMSshF (3)
/ip4/10.147.2.4/tcp/4001
/ip4/127.0.0.1/tcp/4001
/ip6/::1/tcp/4001
12D3KooWCVHjQfVnBbjS5aDZzNjQBjn8oJzDSXY8AiApABcr3hqt (1)
/ip4/10.147.2.6/tcp/4001
```
good..
I then run: `bacalhau serve --node-type=requester,compute --peer=none --ipfs-connect=/ip4/127.0.0.1/tcp/5001 --private-internal-ipfs=false`
I still see a few peers added for about a minute or so (as below); then they disappear, leaving just localhost and my one other private test node (as above). I guess this is because the swarm key makes the swarming give up.
But of course, I don't want to advertise outside at all...
Q: Can we stop Bacalhau from adding any peers to the local private IPFS network?
```
12D3KooWAQpZzf3qiNxpwizXeArGjft98ZBoMNgVNNpoWtKAvtYH (2)
/ip4/35.245.161.250/tcp/4001
/ip4/35.245.161.250/udp/4001/quic
12D3KooWAbVphZ5ZJQgTqgJ8WtRVDBLKVryiP2DQDYSb4oHMSshF (3)
/ip4/10.147.2.4/tcp/4001
/ip4/127.0.0.1/tcp/4001
/ip6/::1/tcp/4001
12D3KooWBCBZnXnNbjxqqxu2oygPdLGseEbfMbFhrkDTRjUNnZYf (2)
/ip4/34.145.201.224/tcp/4001
/ip4/34.145.201.224/udp/4001/quic
12D3KooWCVHjQfVnBbjS5aDZzNjQBjn8oJzDSXY8AiApABcr3hqt (1)
/ip4/10.147.2.6/tcp/4001
12D3KooWH3rxmhLUrpzg81KAwUuXXuqeGt4qyWRniunb5ipjemFF (2)
/ip4/35.245.215.155/tcp/4001
/ip4/35.245.215.155/udp/4001/quic
12D3KooWJM8j97yoDTb7B9xV1WpBXakT4Zof3aMgFuSQQH56rCXa (2)
/ip4/35.245.41.51/tcp/4001
/ip4/35.245.41.51/udp/4001/quic
12D3KooWLfFBjDo8dFe1Q4kSm8inKjPeHzmLBkQ1QAjTHocAUazK (2)
/ip4/34.86.254.26/tcp/4001
/ip4/34.86.254.26/udp/4001/quic
```
The issue is that even when the user supplies an `--ipfs-connect` flag, we still ask the IPFS node to connect to any swarm peers that are part of our config: https://github.com/bacalhau-project/bacalhau/blob/7e74d437ddcdd1d7304c671bff4e7f74a92b84f9/cmd/cli/serve/util.go#L168-L175 And by default, the Bacalhau production node peers will be in the config: https://github.com/bacalhau-project/bacalhau/blob/7e74d437ddcdd1d7304c671bff4e7f74a92b84f9/pkg/config/configenv/production.go#L57-L77
So for a start, a workaround to get the behaviour you want should be to explicitly set the swarm peers to an empty array in your config file, i.e.
```yaml
Node:
  IPFS:
    SwarmAddresses: []
```
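One way to confirm the workaround is behaving as intended is to filter the output of `ipfs swarm peers` for anything outside the private address ranges visible in this thread. This is an illustrative sketch, assuming a POSIX shell; the `check_private` helper and the specific ranges are my own, not part of Bacalhau or IPFS:

```shell
# check_private: reads multiaddrs on stdin and prints any peer whose
# address is outside the private ranges seen in this thread
# (10.x.x.x, 127.x.x.x, ::1). Empty output means all peers are private.
check_private() {
  grep -Ev '^/ip4/(10\.|127\.)|^/ip6/::1' || true
}

# Intended usage while `bacalhau serve` is running:
#   ipfs swarm peers | check_private
```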
But then yes, we should think about whether this behaviour makes sense. Under what circumstances will a user have asked to connect to an existing IPFS node and then expect Bacalhau to modify that node's configuration? That seems unlikely, so we can probably stop touching any config in the `--ipfs-connect` case.
Thanks for this. Good explanation.
Perhaps a few typical scenario-based YAML configs could ship with the install. That would reduce decisions in code and be generally less opaque, as people could see the configs laid out in YAML, alter them without compiling, etc.
thanks again
> Perhaps a few typical scenario-based YAML configs could ship with the install. That would reduce decisions in code and be generally less opaque, as people could see the configs laid out in YAML, alter them without compiling, etc.
+1 to this idea, providing example config files with comments would be very helpful.
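As a sketch of what such a commented example config might look like: only the `Node.IPFS.SwarmAddresses` field is taken from this thread; the filename and scenario framing are illustrative, not an existing Bacalhau artifact.

```yaml
# example-config-private.yaml (hypothetical name)
# Scenario: fully private Bacalhau node attached to an existing private
# IPFS daemon (started with --ipfs-connect=/ip4/127.0.0.1/tcp/5001).
Node:
  IPFS:
    # Empty list: never ask the IPFS node to dial extra swarm peers,
    # so the private swarm stays private.
    SwarmAddresses: []
```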
@michaelhoepler - please add this to docs that need writing
The embedded IPFS node has been deprecated in favour of connecting to your own node: https://github.com/bacalhau-project/bacalhau/pull/4061
I wish to run Bacalhau as a private network alongside a private IPFS swarm, but when I run `bacalhau serve` I suddenly get lots of unwanted peers.
IPFS setup:
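The setup itself isn't shown here; a minimal sketch of a private-swarm setup like the one described (a swarm key shared between the two nodes) could look like the following. The `swarm.key` file format, the `~/.ipfs/swarm.key` path, and the `LIBP2P_FORCE_PNET` variable are standard go-ipfs/Kubo conventions, not details taken from this thread:

```shell
# 1. Generate a swarm.key: IPFS's /key/swarm/psk/1.0.0/ format with
#    32 random bytes hex-encoded on the third line.
printf '/key/swarm/psk/1.0.0/\n/base16/\n%s\n' \
  "$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')" > swarm.key

# 2. On each node: install the key, drop the public bootstrap list,
#    and force private networking before starting the daemon.
# cp swarm.key ~/.ipfs/swarm.key
# ipfs bootstrap rm --all
# export LIBP2P_FORCE_PNET=1
# ipfs daemon
```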
Then I install and run `bacalhau serve`.
Lots of IPFS swarm peers turn up...
See the complete procedure below:
Watch for 10 minutes... no change above.
listening looks like:
`ipfs swarm addrs` output is still the same at this point, then...
`bacalhau serve --node-type=requester --peer=none --ipfs-connect=/ip4/127.0.0.1/tcp/5001 --private-internal-ipfs=false`
Apart from a lot of warnings, which I'm sure people are aware of... suddenly my private IPFS swarm has lots of peers!
e.g.