
Go-IPFS 0.5.0 Release #7109

Closed Stebalien closed 4 years ago

Stebalien commented 4 years ago

go-ipfs 0.5.0 Release

Release: https://dist.ipfs.io#go-ipfs

We're happy to announce go-ipfs 0.5.0, ...

🗺 What's left for release

🔦 Highlights

UNDER CONSTRUCTION

This release includes many important changes users should be aware of.

New DHT

This release includes an almost completely rewritten DHT implementation with a new protocol version. From a user's perspective, providing content, finding content, and resolving IPNS records should simply get faster. However, this is a significant (albeit well tested) change and significant changes are always risky, so heads up.

Old v. New

The current DHT suffers from three core issues addressed in this release:

  1. Most peers in the DHT cannot be dialed (e.g., due to firewalls and NATs). Much of a DHT query time is wasted trying to connect to peers that cannot be reached.
  2. The DHT query logic doesn't properly terminate when it hits the end of the query and, instead, aggressively keeps on searching.
  3. The routing tables are poorly maintained. This can cause a search that should be logarithmic in the size of the network to be linear.
Reachable

We have addressed the problem of undialable nodes by having nodes wait to join the DHT as "server" nodes until they've confirmed that they are reachable from the public internet. Additionally, we've introduced:

Unfortunately, there's a significant downside to this approach: VPNs, offline LANs, etc. where all nodes on the network have private IP addresses and never communicate over the public internet. In this case, none of these nodes would be "publicly reachable".

To address this last point, go-ipfs 0.5.0 will run two DHTs: one for private networks and one for the public internet. That is, every node will participate in a LAN DHT and a public WAN DHT.
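
If you want to poke at the two routing tables separately, the CLI should expose them through an ipfs stats dht command that takes a wan or lan selector. The exact invocation below is an assumption about the CLI surface, not something stated in these notes, so check ipfs stats dht --help if it differs:

# inspect the public (WAN) DHT routing table (invocation assumed)
$ ipfs stats dht wan

# inspect the private (LAN) DHT routing table
$ ipfs stats dht lan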

RC2 NOTE: All the features not enabled in RC1 have been enabled in RC2.

RC1 NOTE: Several of these features have not been enabled in RC1:

  1. We haven't yet switched the protocol version and will be running the DHT in "compatibility mode" with the old DHT. Once we flip the switch and enable the new protocol version, we will need to ensure that at least 20% of the publicly reachable DHT speaks the new protocol, all at once. The plan is to introduce a large number of "booster" nodes while the network transitions.
  2. We haven't yet introduced the split LAN/WAN DHTs. We're still testing this approach and considering alternatives.
  3. Because we haven't introduced the LAN/WAN DHT split, IPFS nodes running in DHT server mode will continue to run in DHT server mode without waiting to confirm that they're reachable from the public internet. Otherwise, we'd break IPFS nodes running DHTs in VPNs and disconnected LANs.
Query Logic

We've fixed the DHT query logic by correctly implementing Kademlia (with a few tweaks). This should significantly speed up:

In both cases, we now continue until we find the closest peers, then stop.

Routing Tables

Finally, we've addressed the poorly maintained routing tables by:

Testing

The DHT rewrite was made possible by our new testing framework, testground, which allows us to spin up multi-thousand node tests with simulated real-world network conditions. With testground and some custom analysis tools, we were able to gain confidence that the new DHT implementation behaves correctly.

Refactored Bitswap

This release includes a major bitswap refactor running a new, but backwards compatible, bitswap protocol. We expect these changes to improve performance significantly.

With the refactored bitswap, we expect:

Note: the new bitswap won't magically make downloading content any faster until both seeds and leeches have updated. If you're one of the first to upgrade to 0.5.0 and try downloading from peers that haven't upgraded, you're unlikely to see much of a performance improvement, if any.

Provider Record Changes

When you add content to your IPFS node, you advertise this content to the network by announcing it in the DHT. We call this "providing".

However, go-ipfs has multiple ways to address the same underlying bytes. Specifically, we address content by content ID (CID), and the same underlying bytes can be addressed using (a) two different versions of CIDs (CIDv0 and CIDv1) and (b) different "codecs" depending on how we're interpreting the data.

Prior to go-ipfs 0.5.0, we used the content ID (CID) in the DHT when sending out provider records for content. Unfortunately, this meant that users trying to find data announced using one CID wouldn't find nodes providing the content under a different CID.

In go-ipfs 0.5.0, we're announcing data by multihash, not CID. This way, regardless of the CID version used by the peer adding the content, the peer trying to download the content should still be able to find it.
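
As a concrete illustration: a CIDv0 and its CIDv1 re-encoding wrap the same multihash, which you can check with the ipfs cid helpers (the CID below is a well-known example; any CIDv0 works):

# re-encode a CIDv0 as a CIDv1 in base32; both carry the same multihash,
# so 0.5.0 nodes announce and look up one provider record for either form
$ ipfs cid base32 QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG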

Warning: while the network upgrades, this change could impact finding content added with CIDv1. Because go-ipfs 0.5.0 will announce and search for content using the bare multihash (equivalent to the v0 CID), go-ipfs 0.5.0 will be unable to find CIDv1 content published by nodes running versions prior to go-ipfs 0.5.0, and vice versa. As CIDv1 is not enabled by default, we believe this will have minimal impact. However, users are strongly encouraged to upgrade as soon as possible.

IPFS/Libp2p Address Format

If you've ever run a command like ipfs swarm peers, you've likely seen paths that look like /ip4/193.45.1.24/tcp/4001/ipfs/QmSomePeerID. These are not file paths; they're multiaddrs: addresses of peers on the network.

Unfortunately, /ipfs/Qm... is also the same path format we use for files. This release changes the multiaddr format from /ip4/193.45.1.24/tcp/4001/ipfs/QmSomePeerID to /ip4/193.45.1.24/tcp/4001/p2p/QmSomePeerID to make the distinction clear.
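
For example, dialing a peer by hand now uses the new spelling (the address is the placeholder from above; the old /ipfs/... form is still parsed for backwards compatibility):

# new canonical form
$ ipfs swarm connect /ip4/193.45.1.24/tcp/4001/p2p/QmSomePeerID

# legacy form, still accepted
$ ipfs swarm connect /ip4/193.45.1.24/tcp/4001/ipfs/QmSomePeerID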

What this means for users:

Minimum RSA Key Size

Previously, IPFS did not enforce a minimum RSA key size. In this release, we've introduced a minimum 2048 bit RSA key size. IPFS generates 2048 bit RSA keys by default so this shouldn't be an issue for anyone in practice. However, users who explicitly chose a smaller key size will not be able to communicate with new nodes.
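
For anyone generating extra keys, the defaults already clear the new floor; 'mykey' below is just an example name, and explicitly requesting a smaller size should now fail:

# explicit 2048 bit RSA key; same as the default size
$ ipfs key gen --type=rsa --size=2048 mykey

# a 1024 bit request should now be rejected by the minimum-size check
$ ipfs key gen --type=rsa --size=1024 too-small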

Unfortunately, some of the bootstrap peers intentionally generated 1024 bit RSA keys so they'd have vanity peer addresses (starting with QmSoL for "solar net"). All IPFS nodes should already have peers with >= 2048 bit RSA keys in their bootstrap list, but we've introduced a migration to ensure this.

We implemented this change to follow security best practices and to remove a potential foot-gun. However, in practice, the security impact of allowing insecure RSA keys should have been next to none because IPFS doesn't trust other peers on the network anyways.

Subdomain Gateway

The gateway will redirect from http://localhost:5001/ipfs/CID/... to http://CID.ipfs.localhost:5001/... by default. This will:

Paths addressing the gateway by IP address (http://127.0.0.1:5001/ipfs/CID) will not be altered as IP addresses can't have subdomains.

Note: cURL doesn't follow redirects by default. To avoid breaking cURL and other clients that don't support redirects, go-ipfs will return the requested file along with the redirect. Browsers will follow the redirect and abort the download while cURL will ignore the redirect and finish the download.
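
You can watch this behavior from the command line (the CID is a placeholder; -i prints the response headers so the redirect is visible):

# prints the redirect headers, then the file body anyway (cURL ignores redirects by default)
$ curl -i "http://localhost:5001/ipfs/<CID>/"

# -L makes cURL behave like a browser and follow the redirect to the subdomain
$ curl -iL "http://localhost:5001/ipfs/<CID>/"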

TLS By Default

In this release, we're making TLS the default security transport. This means we'll try to encrypt the connection with TLS before re-trying with SECIO.

Contrary to the announcement in the go-ipfs 0.4.23 release notes, this release does not remove SECIO support to maintain compatibility with js-ipfs.

SECIO Deprecation Notice

SECIO should be considered to be well on the way to deprecation and will be completely disabled in either the next release (0.6.0, ~mid May) or the one following that (0.7.0, ~end of June). Before SECIO is disabled, support will be added for the NOISE transport for compatibility with other IPFS implementations.

QUIC Upgrade

If you've been using the experimental QUIC support, this release upgrades to a new and incompatible version of the QUIC protocol (draft 27). Old and new go-ipfs nodes will still interoperate, but not over the QUIC transport.

We intend to standardize on this draft of the QUIC protocol and enable QUIC by default in the next release if all goes well.
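
If you want to help test it, QUIC remains opt-in; enabling it looks roughly like this (port 4001 mirrors the default TCP port, and the flag is the one documented in docs/experimental-features.md):

# opt in to the experimental QUIC transport
$ ipfs config --json Experimental.QUIC true

# listen on UDP/QUIC alongside the default TCP address
$ ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4001", "/ip4/0.0.0.0/udp/4001/quic"]'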

RC2 NOTE: QUIC has been upgraded back to the latest version.

RC1 NOTE: We've temporarily backed out of the new QUIC version because it currently requires go 1.14 and go 1.14 has some scheduler bugs that go-ipfs can reliably trigger.

Badger Datastore

In this release, we're marking the badger datastore (enabled at initialization with ipfs init --profile=badgerds) as stable. However, we're not yet enabling it by default.

The benefit of badger is that adding/fetching data to/from badger is significantly faster than adding/fetching data to/from the default datastore, flatfs. In some tests, adding data to badger is 32x faster than flatfs (in this release).

However,

  1. Badger is complicated, while flatfs pushes all the complexity down into the filesystem itself. That means flatfs is only likely to lose your data if your underlying filesystem gets corrupted, while there are more opportunities for badger itself to get corrupted.
  2. Badger can use a lot of memory. In this release, we've tuned badger to use very little (~20MiB) of memory by default. However, it can still produce large (1GiB) spikes in memory usage when garbage collecting.
  3. Badger isn't very aggressive when it comes to garbage collection and we're still investigating ways to get it to more aggressively clean up after itself.

TL;DR: Use badger if performance is your main requirement, you rarely/never delete anything, and you have some memory to spare.
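
Opting in on a fresh repo is a one-liner; converting an existing flatfs repo needs the standalone ipfs-ds-convert tool (conversion rewrites the whole datastore, so back up first; the two-step flow below follows that tool's documented usage):

# fresh repo backed by badger
$ ipfs init --profile=badgerds

# convert an existing repo: switch the config profile, then run the converter
$ ipfs config profile apply badgerds
$ ipfs-ds-convert convert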

Systemd Support

For Linux users, this release includes support for two systemd features: socket activation and startup/shutdown notifications. This makes it possible to:

You can find the new systemd units in the go-ipfs repo under misc/systemd.
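
A minimal socket-activation setup might look like this; the unit file names are assumed from that directory, so check misc/systemd for the exact set shipped:

# install the units from a go-ipfs checkout (names assumed: ipfs.service, ipfs-api.socket)
$ sudo cp misc/systemd/ipfs.service misc/systemd/ipfs-api.socket /etc/systemd/system/

# let systemd own the API socket and start the daemon on first use
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now ipfs-api.socket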

IPFS API Over Unix Domain Sockets

This release supports exposing the IPFS API over a unix domain socket in the filesystem. To use this feature, run:

> ipfs config Addresses.API "/unix/path/to/socket/location"
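
After restarting the daemon, any HTTP client that can speak over unix sockets can reach the API; with cURL that looks like this (socket path taken from the config line above, and note the POST, per the API change discussed further down in this thread):

# query the node ID over the unix socket
$ curl -X POST --unix-socket /path/to/socket/location "http://unix/api/v0/id"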

Repo Migration

IPFS uses repo migrations to make structural changes to the "repo" (the config, data storage, etc.) on upgrade.

This release includes two very simple repo migrations: a config migration to ensure that the config contains working bootstrap nodes and a keystore migration to base32 encode all key filenames.

In general, migrations should not require significant manual intervention. However, you should be aware of migrations and plan for them.

Otherwise, if you want more control over the repo migration process, you can manually install and run the repo migration tool.
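
For the manual route, the standalone fs-repo-migrations tool from dist.ipfs.io runs the pending migrations directly; a minimal invocation, assuming the daemon is stopped:

# run all pending repo migrations, answering yes to prompts
$ fs-repo-migrations -y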

Bootstrap Peer Changes

AUTOMATIC MIGRATION REQUIRED

The first migration will update the bootstrap peer list to:

  1. Replace the old bootstrap nodes (ones with peer IDs starting with QmSoL) with new bootstrap nodes (ones with addresses that start with /dnsaddr/bootstrap.libp2p.io).
  2. Rewrite the address format from /ipfs/QmPeerID to /p2p/QmPeerID.

We're migrating addresses for a few reasons:

  1. We're using DNS to address the new bootstrap nodes so we can change the underlying IP addresses as necessary.
  2. The new bootstrap nodes use 2048 bit keys while the old bootstrap nodes use 1024 bit keys.
  3. We're normalizing the address format to /p2p/Qm....

Note: This migration won't add the new bootstrap peers to your config if you've explicitly removed the old bootstrap peers. It will also leave custom entries in the list alone. In other words, if you've customized your bootstrap list, this migration won't clobber your changes.
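
After the migration runs (or after editing by hand), you can verify what your node will actually use:

# print the configured bootstrap peers; new entries should start with /dnsaddr/bootstrap.libp2p.io
$ ipfs bootstrap list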

Keystore Changes

AUTOMATIC MIGRATION REQUIRED

Go-IPFS stores additional keys (i.e., all keys other than the "identity" key) in the keystore. You can list these keys with ipfs key list.

Currently, the keystore stores keys as regular files, named after the key itself. Unfortunately, filename restrictions and case-insensitivity are platform-specific. To avoid platform-specific issues, we're base32 encoding all key names and renaming all keys on-disk.

Changelog

TODO

✅ Release Checklist

For each RC published in each stage:

Checklist:

❤️ Contributors

< list generated by bin/mkreleaselog >

Would you like to contribute to the IPFS project and don't know how? Well, there are a few places you can get started:

⁉️ Do you have questions?

The best place to ask your questions about IPFS, how it works and what you can do with it is at discuss.ipfs.io. We are also available at the #ipfs channel on Freenode, which is also accessible through our Matrix bridge.

bonedaddy commented 4 years ago

> Merge #6870 (punted till after the RC as these changes should be pretty safe).

This seems a little iffy. Functionality is only safe after it's been thoroughly tested; until then, assuming it's safe just increases the likelihood of issues.

ribasushi commented 4 years ago

@bonedaddy it's an addition to the experimental ipfs dag command section; it doesn't affect anything in the actual daemon operations. Originally it was re-slated for 0.6, but an interop concern is making this bubble up again.

bonedaddy commented 4 years ago

> @bonedaddy it's an addition to the experimental ipfs dag command section; it doesn't affect anything in the actual daemon operations. Originally it was re-slated for 0.6, but an interop concern is making this bubble up again.

Dependencies can be pulled in which may have potentially adverse effects. Personally speaking, letting untested functionality into releases is a bad practice. For example: what if the dag import/export commands suffer from a bug that would be discovered via testing through the RC process? The long-term effect of untested functionality skipping the RC process is more work spent dealing with and fixing issues.

e.g., let's imagine we're at a time when go-ipfs is at v1.0.0. Will this practice of including untested functionality in releases continue? If not, why not make the changes now that result in better testing and better change management? Nothing is lost, but everything is gained.

ribasushi commented 4 years ago

@bonedaddy sorry, I now see the confusion. The text should have read not included in RC1. There is another RC coming up towards the end of the week, we just needed to get the DHT parts there as soon as possible to gain more feedback.

bonedaddy commented 4 years ago

> @bonedaddy sorry, I now see the confusion. The text should have read not included in RC1. There is another RC coming up towards the end of the week, we just needed to get the DHT parts there as soon as possible to gain more feedback.

Ah okay, good stuff :rocket:

jbenet commented 4 years ago

FYI, companion and ipfs desktop broke because of https://github.com/ipfs/go-ipfs/commit/1b490476e5517931b8d31a6636e7008771db201d -- looks like @lidel's on it because firefox companion is fixed, but chrome hasn't gotten a new version. sounds like this will break a lot of users, so you want to definitely flag it in the release notes, and here, very prominently. It took me a while to figure out what was wrong.

lidel commented 4 years ago

Unfortunately Chrome Web Store is super slow with accepting updates.

Chromium users need to wait for the Stable channel update to v2.11.0, or uninstall it and install Beta, which just got approved: v2.11.0.904

ianopolous commented 4 years ago

I'm getting an HTTP 405 from http://127.0.0.1:5001/api/v0/id. We do a GET request. In my opinion it doesn't make sense that this needs to be a POST. Do all API calls now have to be POSTs? If so, that should be highlighted here as that's a breaking change. EDIT: After discovering that browsers let random websites make GETs but not POSTs to localhost, this makes sense.

lidel commented 4 years ago

Yes, all calls to RPC API at /api/v0 on the API port need to be POST now.
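
Concretely, the request from the comment above works again once it's a POST:

# GET now returns 405 on the RPC API; POST succeeds
$ curl -X POST http://127.0.0.1:5001/api/v0/id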

ianopolous commented 4 years ago

After switching to POSTs all our local tests are passing, including p2p stream tests.

Thumbs up!

lidel commented 4 years ago

(I created issues in HTTP client repos listed at ipfs/ipfs#http-client-libraries just to be sure maintainers are aware of POST-only API change)

bonedaddy commented 4 years ago

I got the following panic trying to run my ipfs-cluster benchmark suite. It occurred while trying to add a file to a Docker container:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x159c3ba]

goroutine 75 [running]:
github.com/dgraph-io/badger/skl.(*Skiplist).IncrRef(...)
    pkg/mod/github.com/dgraph-io/badger@v1.6.1/skl/skl.go:86
github.com/dgraph-io/badger.(*DB).getMemTables(0xc000075180, 0x0, 0x0, 0x0, 0x0)
    pkg/mod/github.com/dgraph-io/badger@v1.6.1/db.go:489 +0xea
github.com/dgraph-io/badger.(*Txn).NewIterator(0xc000120d00, 0x1, 0x64, 0x0, 0xc00057ce20, 0x13, 0x20, 0x0, 0x0)
    pkg/mod/github.com/dgraph-io/badger@v1.6.1/iterator.go:454 +0x75
github.com/ipfs/go-ds-badger.(*txn).query(0xc000369de0, 0xc00057c980, 0x12, 0x0, 0x0, 0x0, 0xc00039e190, 0x1, 0x1, 0x1, ...)
    pkg/mod/github.com/ipfs/go-ds-badger@v0.2.3/datastore.go:622 +0x1f1
github.com/ipfs/go-ds-badger.(*Datastore).Query(0xc000122780, 0xc00057c980, 0x12, 0x0, 0x0, 0x0, 0xc00039e190, 0x1, 0x1, 0x1, ...)
    pkg/mod/github.com/ipfs/go-ds-badger@v0.2.3/datastore.go:350 +0x139
github.com/ipfs/go-ds-measure.(*measure).Query(0xc00022c300, 0xc00057c980, 0x12, 0x0, 0x0, 0x0, 0xc00039e190, 0x1, 0x1, 0x1, ...)
    pkg/mod/github.com/ipfs/go-ds-measure@v0.1.0/measure.go:246 +0x12c
github.com/ipfs/go-ds-measure.(*measure).Query(0xc00022c600, 0xc00057c980, 0x12, 0x0, 0x0, 0x0, 0xc00039e190, 0x1, 0x1, 0x1, ...)
    pkg/mod/github.com/ipfs/go-ds-measure@v0.1.0/measure.go:246 +0x12c
github.com/ipfs/go-datastore/keytransform.(*Datastore).Query(0xc000368a20, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc00039e190, 0x1, 0x1, 0x1, ...)
    pkg/mod/github.com/ipfs/go-datastore@v0.4.4/keytransform/keytransform.go:71 +0x1d5
github.com/ipfs/go-ipfs-provider/queue.(*Queue).getQueueHead(0xc0003f1c70, 0x0, 0x0, 0x0)
    pkg/mod/github.com/ipfs/go-ipfs-provider@v0.4.2/queue/queue.go:144 +0x111
github.com/ipfs/go-ipfs-provider/queue.(*Queue).work.func1(0xc0003f1c70)
    pkg/mod/github.com/ipfs/go-ipfs-provider@v0.4.2/queue/queue.go:90 +0x7a3
created by github.com/ipfs/go-ipfs-provider/queue.(*Queue).work
    pkg/mod/github.com/ipfs/go-ipfs-provider@v0.4.2/queue/queue.go:75 +0x3f

getting a lot of these errors from my benchmark suite:

2020-04-07T18:53:49.670-0700    ERROR   provider.queue  queue/queue.go:124  Failed to enqueue cid: datastore closed
github.com/ipfs/go-ipfs-provider/queue.(*Queue).work.func1
    pkg/mod/github.com/ipfs/go-ipfs-provider@v0.4.2/queue/queue.go:124
Qmc4isSGqrbt8zzsAB9rdAwpU7cBiMPWunyaFGPWP3WC6h :
Stebalien commented 4 years ago

Looks like https://github.com/ipfs/go-ipfs/issues/6986. Thanks for the report, I've added it to the TODO.

jbenet commented 4 years ago

would be great to get that benchmark suite as a test we can run, especially on CI

bonedaddy commented 4 years ago

> Looks like #6986. Thanks for the report, I've added it to the TODO.

no problem will post if i find more

> would be great to get that benchmark suite as a test we can run, especially on CI

It's really nothing special, just spamming the cluster and IPFS with a ton of adds. I've pulled the code out of our closed-source repos and published it on git. Unfortunately it's AGPLv3, so I'm not sure that's compatible with the ipfs codebase. Feel free to model something similar.

https://github.com/RTradeLtd/xreplb

khinsen commented 4 years ago

The release mentions a repo migration, so a quick test run of my Pharo interface will mess up my working 0.4.23 installation, right? Is there a simple way to avoid this?

Stebalien commented 4 years ago

@khinsen the simple solution is to copy the repo. However, downgrading to a previous version of go-ipfs should also downgrade the repo version automatically. But I'd still make a backup.
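
Either approach is a couple of shell commands, assuming the default repo location ~/.ipfs:

# back up the 0.4.23 repo before 0.5.0 migrates it
$ cp -a ~/.ipfs ~/.ipfs-backup

# or point the test run at a throwaway repo instead
$ IPFS_PATH=/tmp/ipfs-test ipfs init
$ IPFS_PATH=/tmp/ipfs-test ipfs daemon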

khinsen commented 4 years ago

@Stebalien OK, I'll do a backup and check later if it was really necessary :-) Thanks!

Stebalien commented 4 years ago

Belated ping to early testers (although I believe you've all been informed through other channels):

b5 commented 4 years ago

Finished the first round of upgrading dependencies. Safe to say this has been the easiest major upgrade to date. Impact to our test suite is minimal; nearly all interfaces we depend on have remained the same, save for the odd context argument addition here and there, which are welcome changes. I had enough time left over to sew repo migration execution directly into our binary.

I used to set aside 4 days to do IPFS dependency upgrades (in the GX days, before core interface). This one took a day. Delighted.

Tracking issue on the Qri side: https://github.com/qri-io/qri/issues/1225

Stebalien commented 4 years ago

That's great to hear! Yeah, while go mod has some very sharp edges, it does make upgrading, forking, etc. easier.

Stebalien commented 4 years ago

0.5.0-RC2 has been released: https://dist.ipfs.io/go-ipfs/v0.5.0-rc2

Please test.

Changes between RC1 and RC2

Other than bug fixes, the following major changes were made between RC1 and RC2.

QUIC Upgrade

In RC1, we downgraded to a previous version of the (experimental) QUIC transport so we could build on go 1.13. In RC2, our QUIC transport was patched to support go 1.13 so we've upgraded back to the latest version.

NOTE: The latest version implements a different and incompatible draft (draft 27) of the QUIC protocol than the previous RC and go-ipfs 0.4.23. In practice, this shouldn't cause any issues as long as your node supports transports other than QUIC (also necessary to communicate with the vast majority of the network).

DHT "auto" mode

In this RC, the DHT will not enter "server" mode until your node determines that it is reachable from the public internet. This prevents unreachable nodes from polluting the DHT. Please read the "New DHT" section in the issue body for more info.

AutoNAT

IPFS has a protocol called AutoNAT for detecting whether or not a node is "reachable" from the public internet. In short:

  1. An AutoNAT client asks a node running an AutoNAT service if it can be reached at one of a set of guessed addresses.
  2. The AutoNAT service will attempt to "dialback" those addresses (with some restrictions, e.g., we won't dial back to a different IP address).
  3. If the AutoNAT service succeeds, it will report back the address it successfully dialed and the AutoNAT client will now know that it is reachable from the public internet.

In go-ipfs 0.5, all nodes act as AutoNAT clients to determine if they should switch into DHT server mode.

As of this RC, all nodes (except new nodes initialized with the "lowpower" config profile) will also run a rate-limited AutoNAT service by default. This should have minimal overhead but we may change the defaults in RC3 (e.g., rate limit further or only enable the AutoNAT service on DHT servers).

In addition to enabling the AutoNAT service by default, this RC changes the AutoNAT config options around:

  1. It removes the Swarm.EnableAutoNATService option.
  2. It adds an AutoNAT config section (empty by default). This new section is documented in docs/config.md along with the rest of the config file; see the sketch just below for a usage example.
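
For reference, turning the new service off (e.g., on constrained nodes) should be a single config call; the ServiceMode key and its values are assumptions based on the new AutoNAT section, so confirm against docs/config.md:

# disable the AutoNAT dialback service (the node still acts as an AutoNAT client)
$ ipfs config AutoNAT.ServiceMode disabled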

LAN/WAN DHT

As forewarned in the RC1 release notes, RC2 includes the split LAN/WAN DHT. All IPFS nodes will now run two DHTs: one for the public internet (WAN) and one for their local network (LAN).

This feature should not have any noticeable (performance or otherwise) impact and go-ipfs should continue to work in all the currently supported network configurations: VPNs, disconnected LANs, public internet, etc.

In a future release, we hope to use this feature to limit the advertisement of private addresses to the local LAN.

makew0rld commented 4 years ago

I'm a part of @tomeshnet, where we do a lot of experimentation with mesh networks, and using IPFS on those networks. I'm wondering how this upcoming release will affect our usage, because in many of our test networks, we have nodes that are not connected to the public internet, but are instead connected to nearby nodes, using software like CJDNS or Yggdrasil. These nodes won't be under LAN address space, and so I'm wondering how this new dual DHT setup will work in this situation. Is there a way these nodes can have a "public" DHT for the mesh network, even though they're not connected to the Internet? Preferably without needing centralized bootstrap servers.

In terms of DHT auto mode, and Auto NAT, I assume those features can be changed? That the DHT server mode can be forced, and that Auto NAT can be disabled.

willscott commented 4 years ago

@makeworld-the-better-one Our current approach is that nodes will assume they're publicly reachable until, through AutoNAT probing, they start learning that most of the other nodes in the DHT can't actually dial them. There are config options exposed at the libp2p and IPFS level to force AutoNAT to report either "I believe this node is externally connectable" or "I believe this node is behind a NAT" without probing.

bonedaddy commented 4 years ago

I keep getting the following error after upgrading to 0.5.0-rc2, even though I was running 0.5.0-rc1 perfectly fine. The lock file that it thinks exists at this path does not exist:

Error: lock /data/ipfs/repo.lock: someone else has the lock
$ ls -l /data/ipfs/repo.lock
ls: cannot access '/data/ipfs/repo.lock': No such file or directory

makew0rld commented 4 years ago

@willscott In a situation where the network is not LAN, has no connection to the public Internet, but the nodes in a network are connected, and connected (manually) at the IPFS level too, what settings do you recommend? How can we make sure these nodes form a DHT with each other?

> Our current approach is that nodes will assume they're publicly reachable until, through AutoNAT probing, they start learning that most of the other nodes in the DHT can't actually dial them.

To me this sounds like it could work out of the box, because the "other nodes in the DHT" would just be nodes in the mesh network, which would be able to reach them. But I'm worried if the Auto NAT probing will mean these nodes try and connect to a public server (that will be inaccessible), and then switch to LAN DHT only. For example, are there default AutoNAT servers, like bootstrap servers? What happens if those can't be found?

Stebalien commented 4 years ago

@willscott

The AutoNAT service will continue to operate while in the "unknown" reachability state, but the WAN DHT still won't start until it knows it's reachable.

@makeworld-the-better-one

The AutoNAT service is peer-to-peer, none of your nodes will need to contact any "well known" nodes.

If all your nodes are using routable IP addresses, it should "just work". They'll all use AutoNAT to determine that they're reachable by other nodes in your network, then move the WAN DHT into server mode.

If all of your nodes were using unroutable IP addresses, it would also work because we'd use the LAN DHT, which doesn't rely on AutoNAT. I believe CJDNS uses unroutable IP addresses.

@bonedaddy

Could you file an issue and try running go-ipfs with strace -f -e trace=open?

makew0rld commented 4 years ago

@Stebalien Thanks for the reply, that clears some things up. CJDNS uses IP addresses in the fc00::/8 address space, which is routable, just not routable on the public internet (see RFC4193). Yggdrasil uses 0200::/7 which is deprecated and has no official use. Would IPFS consider these addresses "routable"? And if not, wouldn't using the LAN DHT exclude nodes with these addresses, because it only applies to LAN address space like 192.168.0.0/16, etc?

willscott commented 4 years ago

@makeworld-the-better-one The fc00:: space is considered private, and will use the LAN DHT: https://github.com/multiformats/go-multiaddr-net/blob/master/private.go#L26

The Yggdrasil space is not considered private, and would make use of the WAN DHT when nodes don't determine they are unconnectable (which could happen if they have other interfaces bridged to the public IPFS network)

makew0rld commented 4 years ago

And will the LAN DHT include nodes in the fc00:: address space too then? Or will it stick to the regular LAN definition with fe80, etc.

willscott commented 4 years ago

Yes. Our naming of LAN here encompasses the broader "private address space" definition, including fc00::

Stebalien commented 4 years ago

> Or will it stick to the regular LAN definition with fe80, etc.

fe80 is link-local, not LAN. Really, there is no "LAN" address space, just routable and non-routable (on the public internet). The "LAN" DHT is really just the "not routable on the public internet" DHT.

makew0rld commented 4 years ago

Alright, thanks! The only question I have now is whether there's a way to disable the setting that means nodes "will only publish records (provider and IPNS) to the WAN DHT". In a situation where a node is connected to the Internet and CJDNS, for example, we'd want them to still be publishing records to the LAN (CJDNS).

makew0rld commented 4 years ago

Or more specifically, we'd want them to publish any of their own records to the LAN, but not WAN records that a CJDNS-only node couldn't access.

Stebalien commented 4 years ago

> The only question I have now is whether there's a way to disable the setting that means nodes "will only publish records (provider and IPNS) to the WAN DHT".

Not at the moment.

> In a situation where a node is connected to the Internet and CJDNS, for example, we'd want them to still be publishing records to the LAN (CJDNS).

Just be careful not to do this with two split networks with routable IP addresses.

makew0rld commented 4 years ago

Do you think that feature could be added? That way, an Internet connected node would be able to share files with a CJDNS-only node it's connected with.

> Just be careful not to do this with two split networks with routable IP addresses.

Hmm. This is what would happen on a node connected to the Internet and Yggdrasil. Do you have any recommendations?

bonedaddy commented 4 years ago

> I keep getting the following error after upgrading to 0.5.0-rc2, even though I was running 0.5.0-rc1 perfectly fine. The lock file that it thinks exists at this path does not exist:
>
> Error: lock /data/ipfs/repo.lock: someone else has the lock
> $ ls -l /data/ipfs/repo.lock
> ls: cannot access '/data/ipfs/repo.lock': No such file or directory

any idea what's causing this?

makew0rld commented 4 years ago

@bonedaddy: @Stebalien responded to you above.

bonedaddy commented 4 years ago

Thanks, didn't see that.

edit:

:shrug: Not sure what the issue was; it's working now. Perhaps the RAID card didn't fully flush the data.

Stebalien commented 4 years ago

@makeworld-the-better-one

Do you think that feature could be added? That way, an Internet connected node would be able to share files with a CJDNS-only node it's connected with.

Not in this release, unfortunately. You'll have to file a feature request. We can continue this discussion there.

makew0rld commented 4 years ago

Will do, thanks.

makew0rld commented 4 years ago

I've made two, one for what I mentioned above, and one about split networks. Thanks for the help!

#7168

#7169

RubenKelevra commented 4 years ago

> Warning: while the network upgrades, this change could impact finding content added with CIDv1. Because go-ipfs 0.5.0 will announce and search for content using the bare multihash (equivalent to the v0 CID), go-ipfs 0.5.0 will be unable to find CIDv1 content published by nodes running versions prior to go-ipfs 0.5.0, and vice versa. As CIDv1 is not enabled by default, we believe this will have minimal impact. However, users are strongly encouraged to upgrade as soon as possible.

Is there a way to set CIDv1 as default for all operations, to make it easier to upgrade, without having to specify this on each and every operation?

meeDamian commented 4 years ago

Running TEST_NO_FUSE=1 make test_short fails on t0054 #23 fifo import test:

Stebalien commented 4 years ago

@RubenKelevra

> Is there a way to set CIDv1 as default for all operations, to make it easier to upgrade, without having to specify this on each and every operation?

That would make it harder to upgrade. The upgrade will impact finding content added with CIDv1; it will not impact finding content added with CIDv0.

Stebalien commented 4 years ago

@meeDamian off topic: don't use go 1.14.0 or 1.14.1; use either 1.13.x or 1.14.2. Early go 1.14 releases have several known bugs that will cause go-ipfs to hang.

Otherwise, that looks like a bug, probably a bug in the test. Thanks for finding it! Please file a new issue.

meeDamian commented 4 years ago

> @meeDamian off topic: don't use go 1.14.0 or 1.14.1.

Thank you, and apologies for not being clear. go 1.14 on Dockerhub currently resolves to 1.14.2.

That being said, I'll try with explicit latest versions of 1.13.x and 1.14.x first 🙂.

RubenKelevra commented 4 years ago

> @RubenKelevra
>
> Is there a way to set CIDv1 as default for all operations, to make it easier to upgrade, without having to specify this on each and every operation?
>
> That would make it harder to upgrade. The upgrade will impact finding content added with CIDv1; it will not impact finding content added with CIDv0.

I understand what you mean, but I don't care about compatibility with 0.4.x.

When I add files through cluster-ctl, I end up with a lot of CIDs which are v1 anyway.

I just want it to work either completely or not at all; the mixture of CIDv1/CIDv0 is hard to debug: if the initially requested CID is v0, bitswap makes it possible to fetch any further content, while if the initial CID is v1 it won't work.

I just want to avoid using CIDv0 for anything at this time, just to make sure that when I save content for the next year, I don't have to remove and re-add it.