Closed — player-03 closed this 6 months ago
At this point, it's possible to complete everything else while waiting for the brew bundle step of `macos-ndll`.
Credit to @Apprentice-Alchemist for figuring out why Homebrew was so slow.
Workflow succeeded in 25 minutes, compared to ~1h45m before this PR.
Great to see the CI finishing faster! I'm concerned about one thing that you changed, though: the removal of this dependency from the samples builds:

```yml
needs: package-haxelib
```
I made the samples builds rely on `package-haxelib` on purpose. The idea was to ensure that the Haxelib .zip bundle we're generating is valid and can build projects for all supported targets.
I see where you're coming from. On the one hand, with the brew issue fixed (for now), going back to the `package-haxelib` bottleneck shouldn't slow things down too much. On the other hand, it has been packaging correctly for years, it would throw an error if any of the pieces were missing, and we're testing the pieces individually. I don't see how it could start going wrong now.
How about this as a compromise? We test each target as soon as the required ndlls become available, saving time. Later, when `package-haxelib` finishes, we test its output by downloading the bundle on Windows/Mac/Linux and building a sample in HashLink. (HashLink because all we're trying to do is prove that the bundled version of Lime works.)
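In workflow terms, the compromise could look something like this sketch (job names, artifact name, and steps here are hypothetical, just to illustrate the shape):

```yml
# Per-target tests start as soon as their ndlls exist; they no longer
# wait for package-haxelib.
test-windows:
  needs: windows-ndll  # hypothetical job name
  runs-on: windows-latest
  # ...

# Separate job that validates the packaged haxelib output.
test-haxelib-bundle:
  needs: package-haxelib
  strategy:
    matrix:
      os: [windows-latest, macos-latest, ubuntu-latest]
  runs-on: ${{ matrix.os }}
  steps:
    - uses: actions/download-artifact@v4
      with:
        name: lime-haxelib  # hypothetical artifact name
    # ...then build one HashLink sample against the downloaded bundle
```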
Works for me!
Ok, the variables gave me a fair bit of trouble, and I think the 25-minute run might have been a fluke, but it's working.
Hmm, some of the Neko runs take a while to download the artifact. Maybe there are just too many in parallel. Could scale it back, I guess.
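Assuming the Neko runs are a matrix job, one knob for scaling back would be `strategy.max-parallel`, which caps how many matrix jobs run at once (the numbers and values below are illustrative, not from this workflow):

```yml
strategy:
  # Illustrative cap; without this, GitHub runs as many matrix jobs
  # in parallel as runners allow.
  max-parallel: 4
  matrix:
    target: [windows, macos, linux]  # hypothetical matrix values
```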
Ugh, reducing the number of jobs didn't even help. All of the time saved was because the iOS builds happened to go a bit faster.
As an experiment, let's try going back to hard-coded `runs-on` values. It's possible GitHub optimizes based on knowing in advance which machines to use, and that's how we got that 25-minute run.
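For context, the experiment is roughly the difference between these two forms (the specific labels are illustrative):

```yml
# Before: machine chosen via an expression, resolved per matrix entry
runs-on: ${{ matrix.os }}

# After: hard-coded, so the runner type is known before the run starts
runs-on: macos-latest
```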
It's definitely faster with hard-coded machines to run on. Still not 25 minutes, but I think that was only possible because we didn't test `lime-haxelib.zip`. @joshtynjala, does this look good to merge?
The `-eval` option means most of the CI tasks no longer have to wait for all the others. If we run more of them in parallel, users will be able to get much quicker feedback on their pull requests.
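In workflow terms, dropping those waits mostly means removing `needs:` edges between jobs so they start together (job names here are hypothetical; this assumes `-eval` removes the dependency on a previously built artifact):

```yml
# Before: serialized behind another build job
unit-test:
  needs: neko-ndll  # hypothetical upstream job
  runs-on: ubuntu-latest
  # ...

# After: with -eval, the task no longer waits, so the job
# can start as soon as the workflow is triggered
unit-test:
  runs-on: ubuntu-latest
  # ...
```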