ipfs / kubo

An IPFS implementation in Go
https://docs.ipfs.tech/how-to/command-line-quick-start/

Release v0.4.22 #6506

Closed Stebalien closed 4 years ago

Stebalien commented 4 years ago

go-ipfs 0.4.22 release

We're releasing a PATCH release of go-ipfs based on 0.4.21 containing some critical fixes.

The past several releases have been shaky, and the network has scaled to the point where small changes can have a wide-reaching impact on the entire network. To keep this situation from escalating, we've put a hold on releasing new features until we can improve our release process (which we will be trialing in this release) and testing procedures.

Current RC: v0.4.22-rc1. Install with: `ipfs update install v0.4.22-rc1`

πŸ—Ί What's left for release

πŸ”¦ Changelog

This release includes fixes for the following regressions:

  1. A major bitswap throughput regression introduced in 0.4.21 (ipfs/go-ipfs#6442).
  2. High bitswap CPU usage when connected to many (e.g., 10,000) peers. See ipfs/go-libipfs#113.
  3. The local network discovery service sometimes initializing before the networking module, causing it to announce the wrong addresses and sometimes complain about being unable to determine its IP address (ipfs/go-ipfs#6415).

It also includes fixes for:

  1. Pins not being persisted after ipfs block add --pin (ipfs/go-ipfs#6441).
  2. Concurrent map access on GC due to the pinner (ipfs/go-ipfs#6419).
  3. Potential pin-set corruption given a concurrent ipfs repo gc and ipfs pin rm (ipfs/go-ipfs#6444).
  4. Build failure due to a deleted git tag in one of our dependencies (ipfs/go-ds-badger#64).

βœ… Release Checklist

For each RC published in each stage:

Checklist:

❀️ Contributors

Would you like to contribute to the IPFS project but don't know how? Well, there are a few places you can get started:

⁉️ Do you have questions?

The best place to ask your questions about IPFS, how it works and what you can do with it is at discuss.ipfs.io. We are also available at the #ipfs channel on Freenode, which is also accessible through our Matrix bridge.


This release is currently being readied in #6484.

campoy commented 4 years ago

Getting some love when I was actually the cause of the breakage? You're a classy project πŸ˜„

❀️

Stebalien commented 4 years ago

@campoy I've done the same thing (deleted release tags). Thanks for jumping in and helping us fix the situation.

Stebalien commented 4 years ago

Stage 1... done! Stage 2...

Early testers:

  1. Please read https://github.com/ipfs/go-ipfs/blob/master/docs/EARLY_TESTERS.md (which now describes the expectations) and confirm that you're willing to participate (@obo20, I assumed you wanted to be on this list).

  2. go-ipfs v0.4.22-rc1 has passed all internal testing and is ready for public beta testing. Please try it out on your test infra (if any) and run your test suites/apps against it.

  3. Please confirm when you've done all relevant testing so we can move on to stage 3.

This release adds no new features, just some critical fixes applied to v0.4.21. See the highlights in the issue description for a list of changes.

sanderpick commented 4 years ago

Call me dangerous, but we've been running ahead of the releases because of a pending cluster integration, which depends on go-libp2p-core. I'm not sure it makes sense to backpedal and test with v0.4.22-rc1, but I can give that a shot this weekend.

Stebalien commented 4 years ago

@sanderpick don't bother. All the changes in 0.4.22-rc1 are also in master. I'll count that as a sign-off.

obo20 commented 4 years ago

Things are working well on our end @Stebalien. Nothing major to report.

koalalorenzo commented 4 years ago

Thx @Stebalien! Here are some suggestions from somebody with both feet still on planet earth πŸ™ƒ:

  1. Use Semantic Versioning so we can easily understand how changes impact our systems.
  2. Include beta/rc versions in the ChangeLog (the file) so we know what to test (see Orion's and Keep a Changelog).
  3. Is it possible to include the builds in GitHub when tagging rc or beta releases? (there is a feature to mark them as a "pre-release"). Reason: our pipelines are not downloading binaries from dist.ipfs.io because it is always failing (mostly timeouts, due to being backed by IPFS), while GitHub is more reliable. πŸ€·β€β™‚οΈ

I still don't know why Protocol Labs has its own non-conventional things πŸ˜… and here is a cute cat:


bonedaddy commented 4 years ago

Looks good in CI tests. No new or negative issues spotted in our development environment so far.

How long do we have to give our final analysis? Ideally I'd like to test things for a week or so in dev, but if that's too long that's understandable.

Stebalien commented 4 years ago

> Use Semantic Versioning to understand easily how changes are impacting our systems.

The next release with features will be v0.5.0 so we can clearly distinguish between patch releases and feature releases. For some context:

  1. 0.5.0 was supposed to be the official "beta" release of IPFS. Hence the whole "stuck on 0.4.x" thing.
  2. Historically, we've used minor releases to indicate major breaking changes (0.3.0 -> 0.4.0 broke network compatibility). This is actually pretty common in pre-1.0 software to clearly indicate breaking changes.
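
The pre-1.0 convention described above can be sketched as a tiny classifier. `releaseKind` is a hypothetical helper for illustration, not go-ipfs code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// releaseKind classifies how one version relates to another under the
// pre-1.0 convention described above: before 1.0, a minor bump
// (0.3.x -> 0.4.0) signals breaking changes, while a patch bump
// (0.4.21 -> 0.4.22) signals fixes only.
func releaseKind(old, new string) string {
	parse := func(v string) (maj, min, patch int) {
		parts := strings.SplitN(v, ".", 3)
		maj, _ = strconv.Atoi(parts[0])
		min, _ = strconv.Atoi(parts[1])
		patch, _ = strconv.Atoi(parts[2])
		return
	}
	oMaj, oMin, _ := parse(old)
	nMaj, nMin, _ := parse(new)
	switch {
	case nMaj != oMaj || (oMaj == 0 && nMin != oMin):
		return "breaking"
	case nMin != oMin:
		return "feature"
	default:
		return "patch"
	}
}

func main() {
	fmt.Println(releaseKind("0.4.21", "0.4.22")) // patch
	fmt.Println(releaseKind("0.4.22", "0.5.0"))  // breaking (pre-1.0 minor bump)
}
```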

> Include in the ChangeLog (the file) beta/rc versions so we know what to test (see Orion's and Keep a Changelog).

There's a changelog in the release PR. The section in the issue body previously named "highlights" is now named "changelog" and is a complete changelog (sorry for the confusion).

> Is it possible to include the builds in GitHub when tagging rc or beta releases? (there is a feature to mark it as a "pre-release").

Sure.

> Reason: our pipelines are not downloading binaries from dist.ipfs.io because it is always failing (timeout mostly, due to being backed by IPFS), while GitHub is more reliable.

Is this still happening (i.e., since July)?

> I still don't know why Protocol Labs has its own non-conventional things

Sometimes, because we have good reasons we haven't written down. Other times, :man_shrugging:. Never hesitate to ask.

obligatory cat (mine and therefore the best in the world)

Stebalien commented 4 years ago

@postables

> How long do we have to give our final analysis? Ideally I'd like to test things for a week or so in dev but if that's too long that's understandable

Take your time. We'd like to get this out to users ASAP but we're also trying out our new release process here and we want to get this right.

bonedaddy commented 4 years ago

@Stebalien understandable. It looks good right now, CI builds pass, no apparent new issues in dev, and so far no noticeable regressions. However if possible I'd like to hold off on a final judgement for a few more days in case anything crops up.

obo20 commented 4 years ago

@Stebalien in response to your question:

> Is this still happening (i.e., since July)?

We've been hitting this incredibly often and still do. It got so bad that we started hosting our own copies of the binaries (our ansible deployments were failing around 90% of the time due to timeouts).

The only reason we still encounter this problem is that we still have to pull new binary versions from dist.ipfs.io initially.

Stebalien commented 4 years ago

Uploaded to GitHub.

bonedaddy commented 4 years ago

FWIW if you run into issues with sites like dist.ipfs.io you can quite easily load it up via a gateway like so: https://foo.bar/ipns/dist.ipfs.io

koalalorenzo commented 4 years ago

> FWIW if you run into issues with sites like dist.ipfs.io you can quite easily load it up via a gateway like so: https://foo.bar/ipns/dist.ipfs.io

Most of the time it doesn't work, as the content itself is hard to discover unless some DHT magic or a direct connection kicks in. :( Hopefully a new version fixes that :P

Stebalien commented 4 years ago

This new version won't fix that; it's just a patch release. We have some DHT patches that we believe will help once deployed to the entire network. However, we're holding off until we can finish our test network so we can actually test how this code will affect the network.

ianopolous commented 4 years ago

> @Stebalien in response to your question:
>
> > Is this still happening (i.e., since July)?
>
> We've been hitting this incredibly often and still do. It got so bad that we started hosting our own copies of the binaries (our ansible deployments were failing around 90% of the time due to timeouts).
>
> The only reason we still encounter this problem is that we still have to pull new binary versions from dist.ipfs.io initially.

For what it's worth, we hit the same issues, hence: https://github.com/peergos/ipfs-releases/

Stebalien commented 4 years ago

Early testers,

It's been a bit over a week. Any new issues with the release and/or can we move on to stage 3?

b5 commented 4 years ago

tl;dr: LGTM

The Qri crew is completely tied up in non-IPFS stuff at the moment, leaving us little time to give proper feedback on this release cycle. I've taken a quick look at the changelog and everything is in keeping with what we've expected, so I'd rubber-stamp this as good-to-go.

We're very much looking forward to properly contributing to the early testing process on the next release. Please keep us in the loop!

Stebalien commented 4 years ago

@b5 SGTM. Thanks for the signoff.

bonedaddy commented 4 years ago

Totally forgot to reply with my update: it looks good!

koalalorenzo commented 4 years ago

It looks good also for us on Siderus Orion client!

Stebalien commented 4 years ago

Stage 1... done! Stage 2... done! Stage 3...

Early testers:

We have entered stage 3 of our release process, the "soft" release. We now consider this go-ipfs release to be production ready and don't expect any more RCs. Please deploy it on production infrastructure as you would a normal release. This stage allows us to rapidly fix any last-minute issues with the release without cutting an entirely new release.

When you're satisfied that 0.4.22-rc1 is at least as stable as 0.4.21, please sign off on this issue.

sanderpick commented 4 years ago

Same answer from me this time, @Stebalien. We're ahead of the release at the moment. Consider me a βœ”οΈ.

obo20 commented 4 years ago

Same for us @Stebalien. Mark us as good to go.

bonedaddy commented 4 years ago

Looks good from my end, and we're even seeing a noticeable (albeit small) drop in CPU utilization :rocket:

Stebalien commented 4 years ago

That's probably:

> High bitswap CPU usage when connected to many (e.g., 10,000) peers. See ipfs/go-libipfs#113.

Good to know it helped.

bonedaddy commented 4 years ago

That would definitely cause it, good to know its working!

For what it's worth, there also appears to be an improvement in memory usage. This is looking like a great release so far :rocket: (this graph displays free memory, as opposed to consumed memory)

koalalorenzo commented 4 years ago

Planned to deploy it tomorrow ( ~10:00 CET )

Stebalien commented 4 years ago

@koalalorenzo :fire: or :sunglasses:?

Stebalien commented 4 years ago

Stage 3 done. Stage 4...

Building and releasing today (hopefully).

hacdias commented 4 years ago

@Stebalien status on this?

Stebalien commented 4 years ago

Built but we're waiting on some blog post stuff. We may release first if we can't get everything ready in time.

rklaehn commented 4 years ago

Just wanted to let you know that I very much agree with the decision to put a hold on releasing new features until there is a process to ensure that the existing features work reliably...

Retia-Adolf commented 4 years ago

There's a non-Windows binary in go-ipfs_v0.4.22_windows-amd64.zip :|

Stebalien commented 4 years ago

@Retia-Adolf thanks for the report. This should be fixed now and I apologize for flubbing it.

andrewheadricke commented 4 years ago

darwin amd64 build looks borked.

11:31:21.989 ERROR   cmd/ipfs: error from node construction:  could not build arguments for function "reflect".makeFuncStub (/usr/lib/go/src/reflect/asm_amd64.s:12): failed to build provider.Provider: could not build arguments for function "github.com/ipfs/go-ipfs/core/node".ProviderCtor (pkg/mod/github.com/ipfs/go-ipfs@v0.4.22/core/node/provider.go:24): failed to build *provider.Queue: function "github.com/ipfs/go-ipfs/core/node".ProviderQueue (pkg/mod/github.com/ipfs/go-ipfs@v0.4.22/core/node/provider.go:19) returned a non-nil error: strconv.ParseUint: parsing "1565442853283077000/b": value out of range daemon.go:337

Error: could not build arguments for function "reflect".makeFuncStub (/usr/lib/go/src/reflect/asm_amd64.s:12): failed to build provider.Provider: could not build arguments for function "github.com/ipfs/go-ipfs/core/node".ProviderCtor (pkg/mod/github.com/ipfs/go-ipfs@v0.4.22/core/node/provider.go:24): failed to build *provider.Queue: function "github.com/ipfs/go-ipfs/core/node".ProviderQueue (pkg/mod/github.com/ipfs/go-ipfs@v0.4.22/core/node/provider.go:19) returned a non-nil error: strconv.ParseUint: parsing "1565442853283077000/b": value out of range

Stebalien commented 4 years ago

@andrewheadricke you've downgraded from master to 0.4.22. Master includes some new patches (and probably needs an explicit repo migration).

andrewheadricke commented 4 years ago

Thanks @Stebalien, I deleted my .ipfs directory and now it's working.