bitcoin-core / secp256k1

Optimized C library for EC operations on curve secp256k1
MIT License

ci: Future of CI after Cirrus pricing change #1392

Open real-or-random opened 1 year ago

real-or-random commented 1 year ago

Roadmap (keeping this up to date):

I think the natural way forward for us is:

Possible follow-ups:

Other related PRs:


Corresponding Bitcoin Core issue: https://github.com/bitcoin/bitcoin/issues/28098

Cirrus CI will cap the community cluster, see cirrus-ci.org/blog/2023/07/17/limiting-free-usage-of-cirrus-ci. As with Core, the pricing model makes it totally unreasonable to pay for compute credits (multiple thousand USD / month).

The plan in Bitcoin Core is to move native Windows+macOS tasks to GitHub Actions, and move Linux tasks to persistent workers (=self-hosted). If I read the Bitcoin Core IRC meeting notes correctly, @MarcoFalke said these workers will also be available for libsecp256k1.

But the devil is in the details:

For macOS, we also need to take #1153 into account. It seems that GitHub-hosted macOS runners are on x86_64. The good news is that Valgrind should work again then, but the (very) bad news is that this will reduce our number of native ARM tasks to zero. We still have some QEMU tasks, but we can't even run the Valgrind ctimetests on them (maybe this would now work with MSan?!). @MarcoFalke Are the self-hosted runners only x86_64?

For Linux tasks, the meeting notes say that the main reason for using persistent workers is that some tasks require a very specific environment (e.g., the USDT ASan job). I don't think we have such requirements, so I tend to think that moving everything to GitHub Actions is a bit cleaner for us. With a persistent worker, Cirrus CI anyway acts only as a "coordination layer" between the worker and GitHub. Yet another way is to use the self-hosted runners with GitHub Actions, see my comment https://github.com/bitcoin/bitcoin/issues/28098#issuecomment-1665661274.

maflcko commented 1 year ago

Are the self-hosted runners only x86_64?

There is one aarch64 one. (It is required because GitHub doesn't offer aarch64 Linux boxes, and Google Cloud doesn't offer an aarch64 CPU that can run armhf 32-bit binaries)

real-or-random commented 1 year ago

Ok, then it probably makes sense to do what I suggested in #1153, namely move ARM tasks to Linux, and reduce the number of our macOS tasks.

maflcko commented 1 year ago

moving everything to GitHub Actions is a bit cleaner for us

Sounds interesting. I wonder how (and if) docker images can be cached, along with ccache, etc...

real-or-random commented 1 year ago

moving everything to GitHub Actions is a bit cleaner for us

Sounds interesting. I wonder how (and if) docker images can be cached, along with ccache, etc...

Yeah, we'll need to see.

And I agree that "in the short run it seems easier to stick to Cirrus for now, because the diff is a lot smaller (just replace container: in the yml with persistent_worker:, etc)" (https://github.com/bitcoin/bitcoin/issues/28098#issuecomment-1665708491). We should probably do this first, and then see if we're interested in moving to GitHub Actions fully.

edit: I updated the roadmap above.

hebasto commented 1 year ago

For macOS, we also need to take #1153 into account. It seems that GitHub-hosted macOS runners are on x86_64. The good news is that Valgrind should work again then...

For such a case, it is good to see some progress in https://github.com/bitcoin-core/secp256k1/pull/1274 :)

hebasto commented 1 year ago

moving everything to GitHub Actions is a bit cleaner for us

Sounds interesting. I wonder how (and if) docker images can be cached, along with ccache, etc...

See https://github.com/bitcoin-core/secp256k1/pull/1396.

hebasto commented 1 year ago

There are open PRs for all of the mentioned items. It would be more productive if we somehow prioritised them, so that we spend our time until Sept. 1st more effectively.

maflcko commented 1 year ago

It would be more productive if we somehow prioritised them, so that we spend our time until Sept. 1st more effectively.

I'd say the Windows/macOS ones are probably easier, since they don't require write permission and don't have to deal with docker image caching.

real-or-random commented 1 year ago

Yes, we should in principle proceed in the order of the list above. But it doesn't need to be very strict. For example, if it turns out that #1396 is ready by Sep 1st, we can skip "Move Linux tasks to the Bitcoin Core persistent workers".

hebasto commented 1 year ago
  • [ ] Move Linux tasks to the Bitcoin Core persistent workers

It seems reasonable to split this task into two, depending on the underlying architecture (x86_64 vs. arm64), because the GitHub-hosted runners lack arm64 support.

real-or-random commented 1 year ago

@hebasto Hm, we currently don't have native Linux arm64 jobs, so we can't "move" them over. We could add some (see #1163 and https://github.com/bitcoin-core/secp256k1/pull/1394#issuecomment-1671784065).

I tend to think that it is also acceptable to wait for https://github.com/github/roadmap/issues/528, which is currently planned for the end of the year. Then we could move macOS back to ARM. Until that happens, perhaps we can add a QEMU job that runs the ctimetests under MSan (clang-only) at least. Note to self: We need apt-get install libclang-rt-dev:arm64, and this works with

HOST="aarch64-linux-gnu" CC="clang --target=aarch64-linux-gnu" WRAPPER_CMD="qemu-aarch64"

(The real tests fail with msan enabled on qemu. I think this is because the stack will explode.)
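For reference, a minimal sketch of what such a QEMU/MSan job could look like, assuming a Debian/Ubuntu runner with arm64 enabled as a foreign architecture. The package list, configure flags, and the QEMU_LD_PREFIX path are assumptions and would need checking against the actual CI scripts:

# Hypothetical cross-compiled MSan ctime-test job under QEMU (sketch only).
sudo dpkg --add-architecture arm64
sudo apt-get update
sudo apt-get install -y --no-install-recommends \
    clang qemu-user gcc-aarch64-linux-gnu libc6-dev-arm64-cross \
    libclang-rt-dev:arm64

export HOST="aarch64-linux-gnu"
export CC="clang --target=aarch64-linux-gnu"
export WRAPPER_CMD="qemu-aarch64"
# Point qemu-user at the cross sysroot so the dynamic linker is found.
export QEMU_LD_PREFIX=/usr/aarch64-linux-gnu

./autogen.sh
./configure --host="$HOST" CC="$CC" \
    CFLAGS="-fsanitize=memory -g -O1" \
    --enable-ctime-tests
make -j"$(nproc)"

# Run only the constant-time tests; as noted above, the full test binary is
# expected to fail under MSan on QEMU (stack size).
$WRAPPER_CMD ./ctime_tests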

I updated the list above with optional items.

maflcko commented 1 year ago

qemu-arm is a bit slower than native aarch64. You can use the already existing persistent worker, if you want:

https://github.com/bitcoin/bitcoin/blob/cd43a8444ba44f86ddbb313a03a2782482beda89/.cirrus.yml#L210-L212

(Currently not set up for this repo, but should be some time this week)

real-or-random commented 1 year ago

Sure, that's an easy option. I just think we're currently playing around with the idea of moving everything to GHA, if it's feasible for this repo.

hebasto commented 1 year ago

While it worked on macOS Catalina back then, it seems a couple of suppressions for /usr/lib/libSystem.B.dylib and /usr/lib/dyld are needed.

Branch (POC): https://github.com/hebasto/secp256k1/tree/230824-valgrind
CI run: https://github.com/hebasto/secp256k1/actions/runs/5967987235
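For context, such Valgrind suppressions are plain-text stanzas that can be passed via --suppressions in addition to the upstream darwin*.supp file. A purely hypothetical example of the shape such entries take (the names and error kinds below are illustrative, not the actual ones needed on Ventura):

# Hypothetical extra suppressions for macOS system libraries (sketch only).
cat > macos-extra.supp <<'EOF'
{
   dyld-hypothetical
   Memcheck:Cond
   obj:/usr/lib/dyld
   ...
}
{
   libsystem-hypothetical
   Memcheck:Cond
   obj:/usr/lib/libSystem.B.dylib
   ...
}
EOF
valgrind --suppressions=macos-extra.supp ./tests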

real-or-random commented 1 year ago

Oh thanks for checking. Have you tried the supplied suppression file (https://github.com/LouisBrunner/valgrind-macos/blob/main/darwin19.supp)? If it doesn't solve the problem, we could try to upstream the additional suppressions, see also https://github.com/LouisBrunner/valgrind-macos/issues/15.

hebasto commented 1 year ago

Have you tried the supplied suppression file (LouisBrunner/valgrind-macos@main/darwin19.supp)?

Yes, I have. It does not change the outcome.

UPD. I used https://github.com/LouisBrunner/valgrind-macos/blob/main/darwin22.supp as we run Ventura.

real-or-random commented 1 year ago

Do you think maintaining the suppressions is a problem? I don't think it's a big deal.

UPD. I used LouisBrunner/valgrind-macos@main/darwin22.supp as we run Ventura.

Okay, sure, I got confused and looked at the wrong file.

hebasto commented 1 year ago

Do you think maintaining the suppressions is a problem? I don't think it's a big deal.

You mean, in this repository?

real-or-random commented 1 year ago

Do you think maintaining the suppressions is a problem? I don't think it's a big deal.

You mean, in this repository?

Yes... I don't think it will be a lot of work, but I guess we should still submit it upstream first. If they merge it quickly, then it's easiest for us. I can take care if you don't have the bandwidth.

hebasto commented 1 year ago

While it worked on macOS Catalina back then, it seems a couple of suppressions for /usr/lib/libSystem.B.dylib and /usr/lib/dyld are needed.

FWIW, it works with no additional suppressions on macos-12.

hebasto commented 1 year ago

I can take care if you don't have the bandwidth.

It would be nice because I have no x86_64 macOS Ventura available.

real-or-random commented 1 year ago

FWIW, it works with no additional suppressions on macos-12.

Oh ok, should we then just use this for now?

I can take care if you don't have the bandwidth.

It would be nice because I have no x86_64 macOS Ventura available.

I don't have any macOS available. ;)

hebasto commented 1 year ago

FWIW, it works with no additional suppressions on macos-12.

Oh ok, should we then just use this for now?

Done in https://github.com/bitcoin-core/secp256k1/pull/1412.

hebasto commented 1 year ago

Do you think maintaining the suppressions is a problem? I don't think it's a big deal.

You mean, in this repository?

Yes... I don't think it will be a lot of work, but I guess we should still submit it upstream first.

See https://github.com/LouisBrunner/valgrind-macos/pull/96 as a first step.

real-or-random commented 1 year ago
  • [ ] Add a task for ctimetest on ARM64/Linux/Valgrind on Cirrus CI using free minutes or the self-hosted runner

Hm, it appears that Cirrus' "Dockerfile as a CI environment" feature won't work with persistent workers (see #1418). Now that I think about it, that's somewhat expected (e.g., where should the built images be pushed?).

Alternatives:

I think we should do one of the last two?

maflcko commented 1 year ago

A persistent worker will persist the docker image itself, after the first run on the hardware. I think all you need to do is call

podman image build --file $docker_file --tag $bla_image_name && { podman container kill $ci_bla_name || true; } && podman run -it --rm --env $bla --name $ci_bla_name $bla_image_name ./ci.sh

Alternatively, it may be possible to find a sponsor to cover the cost (if it is not too high) on Cirrus directly, while native arm64 isn't available on GHA.

I can look at the llvm issue next week, if time permits.

real-or-random commented 1 year ago

A persistent worker will persist the docker image itself, after the first run on the hardware.

Thanks for chiming in. Wouldn't we also need to make sure that images get pruned from time to time? Or does podman handle this automatically?

podman image build --file $docker_file --tag $bla_image_name && { podman container kill $ci_bla_name || true; } && podman run -it --rm --env $bla --name $ci_bla_name $bla_image_name ./ci.sh

I assume the first step performs the caching automatically, rebuilding layers only as necessary? Sorry, I'm not familiar with podman; I have only used Docker so far.

Alternatively, it may be possible to find a sponsor to cover the cost (if it is not too high) on Cirrus directly, while native arm64 isn't available on GHA.

Right, yeah, I'm just not sure if I want to spend time on this.

I can look at the llvm issue next week, if time permits.

Ok sure, but I recommend not spending too much time on it. It also won't help with GCC (I added a note above).

maflcko commented 1 year ago

Thanks for chiming in. Wouldn't we also need to make sure that images get pruned from time to time? Or does podman handle this automatically?

Yeah, you can also run podman image prune, if you want. Pull requests to bitcoin-core/gui should already run it on the same machines, but that seems fragile to rely on.

See:

https://github.com/bitcoin-core/gui/blob/9d3b216e009a53ffcecd57e7f10df15cccd5fd6d/ci/test/04_install.sh#L30
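For illustration, the cleanup that the linked script performs is essentially a one-liner of this shape; the exact flags and the retention window below are assumptions, not copied from that script:

# Hypothetical periodic cleanup on a persistent worker, so cached images
# don't grow without bound (the "until" window is made up).
podman image prune --all --force --filter "until=504h"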

I assume the first step performs the caching automatically, rebuildung layers only as necessary? Sorry, I'm not familiar with podman, I have only used Docker so far.

Yes, it is the same. You should be able to use the docker command as well, if you want, via the podman-docker package.

Right, yeah, I'm just not sure if I want to spend time on this.

If you mean reaching out to a sponsor, I am happy to reach out, if there is a cost estimate.

real-or-random commented 1 year ago

Okay, then I think this approach is probably simpler than I expected. I'm not sure if I have the time this week, but I'll look into that soon. (Or @hebasto, if you want to give it a try, feel free to go ahead, of course. My plan was to simply "abuse" the existing Dockerfile to avoid maintaining a second one, at the cost of a somewhat larger image. The existing file should build fine, except that Debian won't let you install an arm64 cross-compiler on arm64, so we'd need to add some check to skip these packages when we're on arm64; see https://github.com/bitcoin-core/secp256k1/pull/1163/files#diff-751ef1d9fd31c5787e12221f590262dcf7d96cfb166d456e06bd0ccab115b60d.)
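A minimal sketch of the kind of check meant here, assuming the shared Dockerfile installs the cross toolchain in a shell step; the package names are illustrative and would need to match the actual Dockerfile:

# Hypothetical fragment for the Dockerfile's package-install step: skip the
# arm64 cross toolchain when the image is already being built on arm64.
if [ "$(dpkg --print-architecture)" != "arm64" ]; then
    apt-get install -y --no-install-recommends \
        gcc-aarch64-linux-gnu libc6-dev-arm64-cross qemu-user
fi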

If you mean reaching out to a sponsor, I am happy to reach out, if there is a cost estimate.

Okay, thanks, but let's first try docker/podman then.

maflcko commented 12 months ago

Anything left to be done here?

real-or-random commented 11 months ago

The migration is done, but there are still a few unticked checkboxes. (And I've just added two.) None of them are crucial, but I plan to work on them soon, so I'd like to keep this open for now. We could also close this issue here and add a new tracking issue, or open separate issues for the remaining items, if people think that makes tracking easier.

hebasto commented 11 months ago

https://github.blog/2023-10-02-introducing-the-new-apple-silicon-powered-m1-macos-larger-runner-for-github-actions/

maflcko commented 11 months ago

https://github.blog/2023-10-02-introducing-the-new-apple-silicon-powered-m1-macos-larger-runner-for-github-actions/

"With today’s launch, our macOS larger runners will be priced at $0.16/minute for XL and $0.12/minute for large."

real-or-random commented 11 months ago

github.blog/2023-10-02-introducing-the-new-apple-silicon-powered-m1-macos-larger-runner-for-github-actions

"With today’s launch, our macOS larger runners will be priced at $0.16/minute for XL and $0.12/minute for large."

This is a price decrease for private repos, and GHA remains free for public repos.

maflcko commented 11 months ago

Are large runners available for public repos?

real-or-random commented 11 months ago

Are large runners available for public repos?

Ha, okay, you're right. No, "larger runners" are always billed per minute, i.e., they're not free for public repos. And it seems that they're not planning to provide M1 "standard runners". At least https://github.com/github/roadmap/issues/528#issuecomment-1743546984 has been closed now. That means we should stick to the Cirrus runners for ARM.

real-or-random commented 7 months ago

And it seems that they're not planning to provide M1 "standard runners"

That has changed now: https://github.blog/changelog/2024-01-30-github-actions-introducing-the-new-m1-macos-runner-available-to-open-source/