armbian / build

Armbian Linux build framework generates custom Debian or Ubuntu images for x86, aarch64, riscv64 & armhf
https://www.armbian.com
GNU General Public License v2.0

Shorten build/test cycle times #374

Closed - faddat closed this issue 8 years ago

faddat commented 8 years ago

Okay, so I've made some progress on the cloud-compilation front. Here's what's going on at the moment:

How does that sound? If it sounds good, I can wire up the API needed to support that flow.

Does anyone know how I can wire this behavior into the pull requests? E.g., so that when a pull request comes in, the chain of events is started.
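
For reference, one hedged way to approximate this without webhook-driven CI would be to poll the repository for new commits and kick off a build when the head changes. This is only a sketch: the GitHub API endpoint is real, but the state file and the bare ./compile.sh invocation are illustrative assumptions, not the project's actual wiring.

```bash
#!/bin/bash
# Sketch: poll GitHub for new commits and start a test build when the head moves.
# The state file and log path are placeholders.
REPO="igorpecovnik/lib"
STATE=/tmp/last_built_sha

latest=$(curl -s "https://api.github.com/repos/${REPO}/commits?per_page=1" \
         | grep -m1 '"sha"' | cut -d'"' -f4)

if [ -n "$latest" ] && [ "$latest" != "$(cat "$STATE" 2>/dev/null)" ]; then
    echo "New commit ${latest} - starting test build"
    if ./compile.sh > "/var/log/armbian-build-${latest:0:7}.log" 2>&1; then
        echo "$latest" > "$STATE"
    fi
fi
```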

igorpecovnik commented 8 years ago

When the build is complete, images are moved away from Google or even deleted ;)

We need to know if our building system is operational - running it on each commit might not be necessary, but once per day or once every 4 hours or so? If it fails, where - a link to the build log. Images as a product of this batch are of less importance; we can build them on demand.
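
A minimal sketch of such a periodic health check, assuming cron and the repository's compile.sh entry point; the path, schedule, user and log location are illustrative, not the project's actual setup.

```bash
# /etc/cron.d/armbian-healthcheck (illustrative): run a test build every 4 hours
# and keep the log so a failure can be linked to; images themselves are discarded.
0 */4 * * * builder cd /opt/lib && ./compile.sh > /var/log/armbian-ci/build-$(date +\%F_\%H\%M).log 2>&1
```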

Less Google is better than more ;)

zador-blood-stained commented 8 years ago

In a more ideal world, which I am trying to create, GCE launches 132 instances. Currently I am limited to eight, but I am talking with someone at Google about making it somehow possible for me to launch all 132 of them.

In a more ideal world, before launching anything you check what was changed - you probably don't want to rebuild anything if somebody only updated documentation or a changelog.
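
A hedged sketch of that kind of change filter, assuming the job is handed the previous and current commit SHAs (the variable names and path patterns are placeholders):

```bash
#!/bin/bash
# Sketch: skip the rebuild when only documentation or changelog files changed.
# PREV_SHA and CURR_SHA would come from whatever triggers the job.
changed=$(git diff --name-only "${PREV_SHA}" "${CURR_SHA}")

if ! echo "$changed" | grep -qvE '^(documentation/|.*\.md$|CHANGELOG)'; then
    echo "Only docs/changelog changed - skipping image rebuild"
    exit 0
fi
echo "Source changes detected - proceeding with the build"
```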

When the build is complete, it is moved to Google Cloud Storage.

And? Building images by manual request may be useful if anybody is willing to test them on real hardware, or if a new kernel/u-boot version was released and you want to be sure that kernel patches don't break compilation; other than that, you can't be sure that a freshly built image even boots.

Does anyone know how I can wire this behavior into the pull requests? E.g., so that when a pull request comes in, the chain of events is started.

This?

faddat commented 8 years ago

Re: The more ideal world:

Agreed, that makes sense.

Re: Manual Builds:

I'm honestly not prepared for that. I realize it's not far from what I will be doing, but I fear that this would cause demand on the servers to explode. It's not going to be small even as things are now.

Re: manual builds (rationale) #2:

You're right about the boards. In my mind there are really just three options:

1) Automate this functionality using FEL and ..... _____!!!!!! (It would be Hard, not to mention that the servers aren't even tangible ones, which would reduce difficulty somewhat.)
2) Create a community feedback mechanism.
3) Virtualize the boards in QEMU. In terms of booting the rootfses, no problem; in terms of testing u-boot and whatnot, that's an open question. (A rough QEMU sketch follows below.)
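
For what it's worth, a rough sketch of option 3, assuming an armhf rootfs image and a separately built mainline kernel are at hand; the file names and machine model are placeholders, and this exercises only the kernel and rootfs, not u-boot.

```bash
# Boot a freshly built kernel plus an armhf rootfs under QEMU's generic "virt" machine.
# u-boot is bypassed entirely in this setup.
qemu-system-arm \
    -M virt -m 1024 -nographic \
    -kernel output/zImage \
    -append "console=ttyAMA0 root=/dev/vda rootwait" \
    -drive if=none,file=output/rootfs.raw,format=raw,id=hd0 \
    -device virtio-blk-device,drive=hd0
```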

Thanks for your feedback!

zador-blood-stained commented 8 years ago

I'm honestly not prepared for that. I realize it's not far from what I will be doing, but I fear that this would cause demand on the servers to explode. It's not going to be small even as things are now.

I meant that only developers may "request" an image.

Automate this functionality using FEL and ..... _____!!!!!! (It would be Hard, not to mention that the servers aren't even tangible ones, which would reduce difficulty somewhat.)

Difficult to implement, and difficult to automate interpreting FEL boot results; it works only for sunxi boards - of which we have more than enough for testing.
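
To make the FEL point concrete: sunxi-tools can push a freshly built u-boot to a board sitting in FEL mode over USB, but judging the outcome still needs a serial console watcher, which is the part that is hard to automate. A minimal hedged sketch (the file path is a placeholder):

```bash
# Push the just-built SPL + u-boot to a sunxi board in FEL mode (USB OTG).
# This only gets u-boot running; deciding whether the boot actually succeeded
# would still require scripting the serial console.
sunxi-fel -v uboot output/u-boot-sunxi-with-spl.bin
```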

Virtualize the boards in QEMU. In terms of booting the rootfses, no problem; in terms of testing u-boot and whatnot, that's an open question.

Most problems (I mean serious problems) are caused by u-boot or the kernel; booting a clean rootfs won't provide useful feedback other than that the Ubuntu/Debian maintainers are doing their jobs 😄

faddat commented 8 years ago

Sorry, I wasn't specific enough in option 3:

We can test the kernel. I don't know if we can test u-boot.


faddat commented 8 years ago

Re: Less Google is better than more -

Does anyone have thoughts on how to do this with less Google? TBH I haven't figured out a way that doesn't use a major cloud provider, other than waiting for ~1 day per build. I'm entirely open to suggestions, though.

The only thing that really comes to mind is Scaleway. I do love Scaleway.

ThomasKaiser commented 8 years ago

other than waiting for ~1 day per build

We already discussed changes to the build process that would allow building one image per $LINUXFAMILY and doing image customization as a last step. Currently, rebuilding a whole OS image when everything is cached takes just a few minutes, so testing one specific build on a host where everything's in place (that might require some disk space!) isn't an issue (e.g. 5 minutes).

With this "one build per $LINUXFAMILY" approach, the time to create all 13 sun7i images might then decrease from one hour to 10 minutes (a rough sketch follows the board counts below):

 1 cubox
 1 marvell
 1 neo
 1 odroidc1
 1 odroidc2
 1 odroidxu4
 2 pine64
 2 s500
 3 sun4i
 1 sun5i
 1 sun6i
13 sun7i
10 sun8i
 1 udoo
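
As a hedged illustration of the "one build per $LINUXFAMILY" idea, a loop like the one below could build a single representative image per family and leave the remaining boards to a later customization step. The family-to-board mapping and the compile.sh parameters are assumptions, not the build script's actual interface at the time.

```bash
#!/bin/bash
# Sketch: one representative build per LINUXFAMILY instead of one per board.
declare -A representative=(
    [sun7i]=cubietruck  [sun8i]=orangepiplus  [sun4i]=cubieboard
    [pine64]=pine64     [odroidxu4]=odroidxu4
    # ... one entry per remaining family
)

for family in "${!representative[@]}"; do
    ./compile.sh BOARD="${representative[$family]}"
done
```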

But to be honest: while this cloud stuff is somewhat interesting for checking whether/when builds fail, I still don't really see how we could benefit from it, given the great work already done to speed up subsequent image builds and the 'use case' (testing the more exotic boards).

igorpecovnik commented 8 years ago

Then I propose we drop the whole idea (it was mine - apologies for the trouble @faddat) and focus on code optimizations to cut down the time needed for a rebuild. Once we properly pack a desktop upgrade package, we won't need to build desktop versions at all and can move this decision to the first login: "Do you want to upgrade to desktop?"
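
A hedged sketch of what that first-login prompt could look like, assuming the desktop environment is packaged so a single meta-package pulls it in; "armbian-desktop" is a placeholder name.

```bash
#!/bin/bash
# Sketch for a first-login hook: offer the desktop upgrade instead of shipping
# separate desktop images.
read -r -p "Do you want to upgrade to the desktop version? [y/N] " answer
case "$answer" in
    [Yy]*)
        sudo apt-get update
        sudo apt-get install -y armbian-desktop
        ;;
    *)
        echo "Staying with the command-line image."
        ;;
esac
```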

I recently received a faster SSD drive as a donation (thanks @Carsten Menke), so my build server also gained some compilation speed. With all this, we are at a few hours for a full image rebuild and probably less than 30 minutes for all kernels rebuilt.

damien7851 commented 8 years ago

Hello, just an idea: why not use a ramdisk? At the start, copy all sources except ".git", apply the patches, compile, and create the debs on disk.
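
A quick sketch of the ramdisk idea, assuming enough RAM for a kernel tree; the mount point, size and source path are illustrative.

```bash
# Sketch: compile the kernel sources in a tmpfs-backed ramdisk, keep the debs on disk.
sudo mkdir -p /mnt/rambuild
sudo mount -t tmpfs -o size=4G tmpfs /mnt/rambuild
rsync -a --exclude='.git' sources/linux-mainline/ /mnt/rambuild/linux/
( cd /mnt/rambuild/linux && make -j"$(nproc)" deb-pkg )
cp /mnt/rambuild/*.deb output/debs/    # packages land next to the tree; copy them to disk
```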

zador-blood-stained commented 8 years ago

A ramdisk for kernel and u-boot sources? Possible, but with fast storage (SSD and the right filesystem options) the CPU will still be the bottleneck for most configurations, not to mention the source copy/checkout time, and ccache probably won't be happy about changed paths and file creation times.

For example, the mainline kernel directory size before compilation is 690MB and after compilation 1.1GB; transfer time from SSD to ramdisk (for the directory before compilation) is ~9s on my build host.
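
On the ccache point above: a hedged sketch of the settings that usually help when sources move between paths or get fresh timestamps after a copy. Whether they pay off for this particular build flow is untested here, and the paths are placeholders.

```bash
# Sketch: make ccache more tolerant of a relocated source tree and fresh file times.
export CCACHE_DIR=/var/cache/ccache
export CCACHE_BASEDIR=/mnt/rambuild        # rewrite absolute paths below this base
export CCACHE_SLOPPINESS=include_file_mtime,include_file_ctime
```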

faddat commented 8 years ago

@igorpecovnik

What CPU are you running in that build server?

igorpecovnik commented 8 years ago

@faddat i7-4790S

zador-blood-stained commented 8 years ago

Closing this; the discussion on shortening build times can be continued here