For an example of what this looks like, you can check out: https://github.com/reubeno/ubdsrv/actions/runs/3454687651. I'm eager to get your feedback on the approach taken here.

With regard to using mkosi, I was inspired by how systemd uses this tool in their CI flows to build and test systemd components on a variety of distros. In this case, it allowed quickly putting together an image that could be run in qemu, that had the necessary prerequisites, and that included a new enough and correctly configured (pre-built) kernel for testing ubdsrv.
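(For readers who haven't used mkosi before, here is a rough sketch, not the actual workflow from this PR, of what a mkosi-plus-QEMU test job can look like in GitHub Actions; the package list, image format, and mkosi flags are illustrative and vary between mkosi versions.)

```yaml
# Hypothetical sketch only -- not the actual workflow in this PR.
jobs:
  test:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v3
      - name: Install mkosi and QEMU
        run: |
          sudo apt-get update
          sudo apt-get install -y mkosi qemu-system-x86 ovmf
      - name: Build Fedora test image with mkosi
        run: |
          # Build an image that already contains a (pre-built) kernel plus the
          # test prerequisites; exact option names vary between mkosi versions.
          sudo mkosi --distribution fedora --release 37 \
                     --format gpt_ext4 \
                     --package kernel-core,fio,gcc,make,git \
                     build
      - name: Boot the image in QEMU and run tests
        run: |
          # No KVM on GitHub-hosted runners, so this runs fully emulated.
          sudo mkosi qemu
```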
Just took a quick look; it does work, and it's really cool!

Can you share how to run it in my GitHub account? Once I learn the steps and verify it works in my tree, I will merge this PR.

Thanks!
> Can you share how to run it in my GitHub account? Once I learn the steps and verify it works in my tree, I will merge this PR.
You'll want to go through this page and review the repository settings that it references: https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository
As for trying to run it yourself, it may be easiest to take my PR branch and create a copy of it in your repo (e.g., workflow-copy), then edit the YAML file in that branch to list the new branch's name everywhere that master is presently listed (i.e., [ "master" ] => [ "master", "workflow-copy" ]). There may be an easier way, but I'm assuming that once you do that, it should run automatically and you should be able to see it under the 'Actions' tab.
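(For reference, the list being edited lives in the workflow's trigger section; a sketch of what it could look like after the change, assuming a file such as .github/workflows/ci.yml; the actual file name and triggers in the PR may differ.)

```yaml
# Hypothetical excerpt of the workflow's trigger section after the edit:
on:
  push:
    branches: [ "master", "workflow-copy" ]
  pull_request:
    branches: [ "master", "workflow-copy" ]
```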
The CI you created has been working on the 'github_ci' branch in my tree; that looks really great!

BTW, do you know how the pre-built kernel is selected for the Fedora qemu image? From the test results, the kernel enables lots of debug options, so the perf data is pretty bad. I know the Fedora 38 cloud image does enable these debug options.

Also, is the following setting for packaging the previously built ublk files into the Fedora image?

Can we pick an upstream kernel from somewhere (GitHub), or even a kernel we built ourselves?

Anyway, I will push this PR now.

Thanks,
Thanks for merging!
As for performance, there are a few considerations. First, there are constraints of the GitHub-hosted runners. The only supported Linux distro for them is Ubuntu, and the latest version (Ubuntu 22.04) doesn't have a new enough kernel, nor do I believe we have any way to select an alternate kernel. As a result, the only way I was able to get things to run at all was to run QEMU as an emulator inside the GitHub runner's VM. And because KVM isn't enabled on the GitHub runners, it's not properly accelerated. In other words, there's already going to be quite a bit of overhead from the start. Unless GitHub adds support for newer kernels/images, or enables KVM support, the only alternative I see is to run a self-hosted runner, but that comes with needing to provision, pay for, and maintain the VM and runner.
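(To illustrate the acceleration point, a hypothetical step, not taken from the PR, showing how a job could pick KVM when /dev/kvm is usable and otherwise fall back to TCG; the image file name and QEMU options are placeholders.)

```yaml
# Hypothetical step: use KVM when it exists, otherwise fall back to pure
# TCG emulation (which is what GitHub-hosted runners get today).
- name: Boot test image
  run: |
    if [ -w /dev/kvm ]; then
      ACCEL=kvm        # hardware acceleration (not available on hosted runners)
    else
      ACCEL=tcg        # software emulation; expect a large slowdown
    fi
    # image.raw is a placeholder for whatever disk image the job produced.
    qemu-system-x86_64 -accel "$ACCEL" -m 2G -smp 2 -nographic \
      -drive file=image.raw,format=raw,if=virtio
```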
If you ignore all that, we could probably figure out how to run a newer or alternate kernel. (It looks like Fedora Rawhide already has a 6.1 rc kernel, for example; at least that would bring in the latest kernel code.) Is there a prebuilt kernel that you know of that you think would work best? (For what it's worth, you can see the Fedora kernels available here: https://packages.fedoraproject.org/pkgs/kernel/kernel/ )
> Also, is the following setting for packaging the previously built ublk files into the Fedora image?
Actually, that's just cloning the ubdsrv git repo. (It's not cloned by default in the workflow.) I was thinking of using those prebuilt binaries, but they were built on Ubuntu 22.04 and they're not likely to run on Fedora.
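(A sketch of the kind of step meant here, not the exact one in the workflow: checking out the ubdsrv sources into a subdirectory so they can be copied into the test image; the step name and parameter values are illustrative.)

```yaml
# Illustrative sketch: fetch the ubdsrv sources into the job workspace.
- name: Acquire ubdsrv sources
  uses: actions/checkout@v3
  with:
    path: ubdsrv      # clone into a subdirectory of the workspace
    fetch-depth: 1
```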
Only Rawhide enables the ublk driver, but the Rawhide kernel is still slow since it enables lots of debug options by default; those debug options will be removed as time goes on. So I guess we can just use the Rawhide kernel package.

Or simply build a VM from the Rawhide cloud image? That is how I run my routine/local tests, and just a few lines of script can build a cloud VM. The remaining question is how to integrate the built Fedora cloud VM into the GitHub CI.
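(One hedged sketch of how that could be folded into a GitHub Actions step; the image URL is a placeholder because the Rawhide Cloud qcow2 file name changes with every compose, and the cloud-init user-data shown is the bare minimum.)

```yaml
# Hypothetical step: boot a Fedora Rawhide cloud image under QEMU in CI.
- name: Boot Fedora Rawhide cloud VM
  run: |
    sudo apt-get install -y qemu-system-x86 cloud-image-utils
    # Placeholder URL: the Rawhide Cloud qcow2 name changes per compose, so it
    # would need to be resolved dynamically (or pinned to a known compose).
    curl -L -o rawhide.qcow2 "$RAWHIDE_IMAGE_URL"
    # Minimal cloud-init user-data so the VM is usable after boot.
    cat > user-data <<'EOF'
    #cloud-config
    password: test
    chpasswd: { expire: false }
    ssh_pwauth: true
    EOF
    cloud-localds seed.img user-data
    # No KVM on hosted runners, so run fully emulated.
    qemu-system-x86_64 -accel tcg -m 2G -smp 2 -nographic \
      -drive file=rawhide.qcow2,format=qcow2,if=virtio \
      -drive file=seed.img,format=raw,if=virtio
```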
> Also, is the following setting for packaging the previously built ublk files into the Fedora image?

> Actually, that's just cloning the ubdsrv git repo. (It's not cloned by default in the workflow.) I was thinking of using those prebuilt binaries, but they were built on Ubuntu 22.04 and they're not likely to run on Fedora.
But I didn't see any build log in the 'test/acquire ubdsrv' job, and I'm wondering which ublksrv binary is used when running the ublk tests on the Fedora VM.
Thanks,
- Add build job that compiles the project in Ubuntu 22.04. Future changes could add steps here for more static analysis.
- Add test job that, in parallel:
- Tweak build_with_liburing_src to support:
- Add a few more error checks to run_tests.sh

Signed-off-by: Reuben Olinsky <reubeno.dev@gmail.com>
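(A rough skeleton of the workflow layout this description implies; job names, the matrix axis, and the build command are illustrative rather than copied from the PR's actual YAML.)

```yaml
# Rough skeleton only -- names and commands are assumptions, not the real file.
name: CI
on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]
jobs:
  build:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v3
      - name: Build ubdsrv
        # Assumed entry point, per the build_with_liburing_src note above.
        run: ./build_with_liburing_src
  test:
    runs-on: ubuntu-22.04
    strategy:
      matrix:
        # Placeholder axis standing in for the "in parallel" split mentioned above.
        group: [generic]
    steps:
      - uses: actions/checkout@v3
      - name: Build image, boot it in QEMU, run tests
        run: echo "mkosi build + QEMU boot + run_tests.sh for ${{ matrix.group }}"
```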